Analysis of antennal sensilla patterns of Rhodnius prolixus from Colombia and Venezuela

Antennal sensilla patterns were used to analyze population variation of domestic Rhodnius prolixus from six departments and states representing three biogeographical regions of Colombia and Venezuela. Discriminant analysis of the patterns of mechanoreceptors and of three types of chemoreceptors on the pedicel and flagellar segments showed clear differentiation between R. prolixus populations east and west of the Andean Cordillera. The distribution of thick- and thin-walled trichoids on the second flagellar segment also showed correlation with latitude, but this was not seen in the patterns of other sensilla. The results of the sensilla patterns appear to reflect biogeographic features or population isolation rather than characters associated with different habitats, and lend support to the idea that domestic R. prolixus originated in the region east of the Andes.

Triatominae (Hemiptera, Reduviidae), the insect vectors of Chagas disease (American trypanosomiasis), have been recorded from 30 of the 32 departments of Colombia, from the coast to 2000 m.a.s.l. (Molina et al. 2000). Over 20 species have been reported, of which Rhodnius prolixus (Stål) is the most frequently encountered in domestic habitats, representing 95.4% of domestic Triatominae collected, with house infestation rates above 20% in many municipalities (Corredor et al. 1990, Angulo et al. 1997). R. prolixus is also widespread in Venezuela where, in spite of extensive control campaigns since 1966, it continues to be the most important domestic vector of Chagas disease (Aché & Matos 2001, Feliciangeli et al. 2003).

Historical reconstruction, combined with biogeographical studies and available genetic comparisons, suggests that R. prolixus may have been first domesticated in the savanna-like areas (llanos) of Venezuela, and subsequently spread by accidental human intervention (Dujardin et al. 1998, Schofield & Dujardin 1999, Zeledón 2004). In general, however, populations of R. prolixus tend to show low levels of genetic or allozyme variability (Dujardin et al. 1998, Monteiro et al. 2000, 2003, Jaramillo et al. 2001), and the aim of the present study was to examine phenetic variability between Colombian and Venezuelan populations using antennal sensilla patterns, which have been shown to be informative characters in studies of population structuring in other species of Triatominae (Gracco & Catalá 2000, Catalá & Dujardin 2001, Carbajal et al. 2002).

MATERIALS AND METHODS

Areas of study - Adult R. prolixus were collected from domestic habitats in six departments from three biogeographical regions of Colombia, selected on the basis of previous reports and the national entomological survey (Angulo et al. 1997, Molina et al. 2000, Ministerio de Salud 2000). These represented the Andean zone (departments of Santander and Tolima), the Sierra Nevada (departments of Magdalena and Guajira), and the llanos (department of Arauca). R. prolixus was also collected from houses in the Venezuelan state of Barinas, which neighbours the department of Arauca (Fig. 1) and from where acute cases of Chagas disease have recently been reported (Añéz et al. 1999).
Santander and Tolima, in the Andean zone west of the Cordillera, are mountainous woodland regions with altitudes from 1000 to 5500 m.a.s.l. The average annual temperature in Santander is 18ºC, compared to 28ºC in Tolima. Magdalena and Guajira lie in the Sierra Nevada de Santa Marta, a mountain massif on the Caribbean coast in the north of Colombia, isolated from the rest of the Andean range, with a wide diversity of habitats. The lower slopes are used for agriculture, cattle-ranching and tourism, and the upper slopes are inhabited by the indigenous Arhuaco and Kogi cultures. The llanos of Arauca and Barinas are part of an extensive savanna-like plain extending east of the Andes in Colombia and Venezuela. The climate is humid-tropical, with average temperatures of 22-28ºC and extensive woodlands with numerous stands of palm trees.

The majority of rural houses in the studied areas are of adobe or 'bahareque' (woven stick and mud) with roofs of palm thatch or corrugated metal or fibre-cement sheets. R. prolixus was collected from infested houses during 2003-2005 by timed manual collection (using forceps and a torch to see into crevices in the house structure) (Table I). The collected insects were maintained live or placed into 70% ethanol for transportation to our laboratory. From each collection, adults were selected at random until at least 10 with complete antennae could be included from each of the departments. Magdalena and Guajira were treated as a single department and included in the same biogeographical region (Fig. 1).

Analysis of antennal sensilla - Antennae were cut from the head using fine forceps and stored in 70% ethanol prior to diaphanisation in 4% NaOH. After clearing, they were individually slide-mounted in glycerine and examined at 40×. Sensilla were classified according to Catalá and Schofield (1994), and counted along the entire length of the pedicel (P) and the two flagellar segments (F1 and F2). Only one antenna was examined from each specimen. Four sensilla types were used for analysis: BR, mechanoreceptive bristles; BA, basiconics; TH, thin-walled trichoids; TK, thick-walled trichoids. These sensilla have been shown to be significant in differentiating Triatominae populations (Catalá & Dujardin 2001, Catalá et al. 2004, 2005). A total of 53 insects (27 males and 26 females) was included in the analysis. For each population, mean sensilla density (±SD) was calculated for each segment (Table II). Logs of these variables were compared by ANOVA or Kruskal-Wallis, with significance of differences estimated by Newman-Keuls with Bonferroni correction for continuity (Sokal & Rohlf 1997) using STATISTICA v6. Discriminant analyses were made between the five populations, and also between populations from the three biogeographical regions, using PADWIN v65 (http://www.mpl.ird.fr/morphometrics). Latitudinal variation, which has been observed in populations of Triatoma dimidiata (Catalá et al. 2005), was also examined by linear correlation with TH and TK densities on the flagellar segments.

RESULTS

ANOVA showed no significant differences between sexes for any sensilla pattern in any of the departments studied (data not shown), so subsequent analyses were made on groups of 10 (5 males and 5 females) or 11 insects (5 males and 6 females) for each department (Table II).
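Before turning to the univariate and multivariate results, here is a minimal sketch, in Python rather than the STATISTICA/PADWIN tools the authors used, of how log-transformed sensilla densities can be reclassified by linear discriminant analysis. The file name and column names are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' PADWIN workflow): log-transform
# sensilla densities and run a linear discriminant analysis by population.
import numpy as np
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

df = pd.read_csv("sensilla_densities.csv")  # hypothetical file, one row per insect
# F2BA and F1BR would be excluded here, as in the paper (heteroscedasticity)
features = ["PBR", "F1BA", "F1TH", "F2TH", "F1TK", "F2TK"]

X = np.log(df[features] + 1)   # log-transform; +1 guards against zero counts
y = df["department"]           # Santander, Tolima, Magdalena-Guajira, Arauca, Barinas

lda = LinearDiscriminantAnalysis().fit(X, y)
print("reclassification rate:", lda.score(X, y))             # paper reports 86% overall
print("variance per discriminant function:", lda.explained_variance_ratio_)
```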
Univariate analysis showed significant differences in densities of pedicel mechanoreceptors (PBR) and flagellar chemoreceptors (F1BA, F1TK, F2TK, F1TH, F2TH), especially those of the first flagellar segment (F1) (Table III). Insects from the Andean region differed from those of the Sierra Nevada only in numbers of F2TH. However, insects from the llanos differed significantly from those of the Andes and those from the Sierra Nevada in densities of five receptor types (Table III).

Multivariate discriminant analysis using all variables except F2BA and F1BR (excluded due to heteroscedasticity) correctly reclassified 86% of all individuals to their respective department. The population from Magdalena-Guajira was 100% correctly reclassified, followed by those of Barinas (90%), Arauca (88%), Santander (80%), and Tolima (72%). Of the four discriminant functions obtained, the first two contributed 94% of the observed variation (63% for the first, and 31% for the second), and the two variables with the highest contribution to variability were PBR and F1TH. Plotting individuals within the discriminant space defined by the first two discriminant functions revealed clear groupings according to biogeographical region (Fig. 2). Similar levels of significance were shown for the Mahalanobis distances between these biogeographical groupings, but with non-significant differences between Tolima-Santander or Arauca-Barinas; the Magdalena-Guajira population showed no significant grouping with either of these two.

Regression analysis of sensilla density on latitude showed a significant increase in TH of the second flagellar segment with increasing north latitude (Fig. 3A), with a corresponding decrease in TK (Fig. 3B). However, no such correlation was found for other sensilla types or for TH and TK on the first flagellar segment (data not shown).

DISCUSSION

The lack of overt sexual dimorphism in Rhodnius, and the relatively simple sensilla patterns, shown for example by the absence of chemoreceptors on the pedicel, have been suggested to relate to the abundance of these insects in stable habitats such as domestic environments and palm tree crowns (Schofield 1988, Catalá 1997, Carbajal de la Fuente 2002). As shown by Dujardin et al. (1999), Triatominae tend to show reduced sexual dimorphism in domestic populations, compared to those occupying silvatic habitats. In the case of R. prolixus, the high densities attained in domestic habitats (Rabinovich et al. 1995, Sandoval et al. 2000) presumably facilitate encounters between sexes, reducing any necessity for sexual dimorphism or for specific means of sexual identity.

The grouping of populations in accordance with their biogeographical origin supports the idea that sensilla patterns can act as specific indicators for particular populations. However, since all populations examined were taken from similar domestic habitats, the sensilla patterns appear to reflect biogeographic features or population isolation rather than characters associated with different habitats. The analysis of sensilla patterns indicates clear differentiation between R. prolixus populations to the east (Arauca and Barinas) and west (Santander and Tolima) of the Andean Cordillera, as has also been indicated by comparative head morphometrics (Esteban et al.
2000). Populations from Magdalena-Guajira do not group directly with either the western or eastern forms, but appear to be more similar to the western forms in the sense that they are differentiated from the western forms only by the distribution of TH on the second flagellar segment (F2TH). These groupings lend support to the idea that the original forms of domestic R. prolixus developed to the east of the Andes, in the region of the llanos of Venezuela and Colombia, being subsequently spread westwards, presumably by accidental human intervention, along a route following the pass between the Sierra Nevada and the northern tip of the Andean Cordillera (Schofield & Dujardin 1999). The direction of this presumed dispersal is further supported by the fact that for all sensilla types on all segments (except for the F2TH of the Magdalena-Guajira populations), density is reduced from east to west.

The latitudinal variation in the numbers of F2TK and F2TH, reported here for the first time for R. prolixus, was also observed on the first flagellar segment in populations of T. dimidiata, and was explained by those authors in relation to the greater housing capacity towards Ecuador (Catalá et al. 2005).

If this interpretation is correct, that domestic R. prolixus has been dispersed westwards into Colombia presumably by accidental human intervention, and that the Andean Cordillera remains a biogeographical barrier between the eastern populations of the llanos and the western populations of the Magdalena valley, then this could suggest that control interventions would be more successful against the western populations, since these are no longer within the original biogeographical region of the species. There is evidence that R. prolixus populations in Central America were also carried there from a Venezuelan source (Dujardin et al. 1998, Zeledón 2004). By contrast, in the llanos of Venezuela and eastern Colombia, which are characterized by extensive palm tree stands where silvatic forms of R. prolixus have been reported (Gamboa 1963), the biology of domestic R. prolixus may require additional studies to reorientate binational control interventions.

Fig. 1: map of Colombia and Venezuela showing the collection places of Rhodnius prolixus in Tolima, Santander, Magdalena, Guajira, and Arauca in Colombia and Barinas in Venezuela.

Fig. 3: latitudinal variation of the thin-walled trichoids (TH) (A) and the thick-walled trichoids (TK) (B) of the second flagellar segment in both sexes of populations of Rhodnius prolixus from Colombia and Venezuela (for Barinas, the highest latitude was taken from Table I).
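As an illustration of the latitude analysis summarised in Fig. 3, here is a minimal sketch of a linear regression of F2TH density on collection-site north latitude; the numeric values are placeholders, not the study's data.

```python
# Sketch of the kind of latitude regression shown in Fig. 3A (placeholder data).
from scipy.stats import linregress

latitudes = [4.5, 6.5, 7.0, 8.6, 10.5, 11.0]          # hypothetical site latitudes, °N
f2th_density = [20.0, 22.5, 23.0, 25.5, 28.0, 29.5]   # hypothetical mean F2TH densities

fit = linregress(latitudes, f2th_density)
# A positive, significant slope would mirror the increase in F2TH with latitude
print(f"slope={fit.slope:.2f}, r={fit.rvalue:.2f}, p={fit.pvalue:.3f}")
```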
First lipid residue analysis of Early Neolithic pottery from Swifterbant (the Netherlands, ca. 4300-4000 BC)

This paper focuses on the functional analysis of Swifterbant pottery from North-western Europe (ca. 4300-4000 BC) through lipid residue analysis. The main aim is to understand the role of pottery in terms of its relation to the hunter-fisher-gatherer lifestyle, and the change in available food resources brought about by the arrival of domesticated animal and plant products. We conducted lipid residue analysis of 62 samples from three Swifterbant sites, S2, S3 and S4. A combined approach using both GC-MS and GC-C-IRMS of residues absorbed into the ceramic was employed to identify their content. Our results demonstrate that Swifterbant ceramics were used exclusively for processing aquatic resources. We also found no evidence of inter-site variation in the use of pottery, or of variation based on typological and technological features of the pottery. We found no evidence for any domesticated resources despite their presence in the faunal and botanical assemblages.

Introduction

In many parts of Europe, the transition to farming and the start of pottery production occurred at the same time, and both innovations are often considered to be part of a 'Neolithic package' (Barker 2006; Gronenborn 2007; Bailey and Spikins 2008). In contrast, in the western Baltic, so-called Ertebølle pottery was present much earlier than farming and appears to be a forager innovation, perhaps derived from contact with ceramic-using hunter-gatherers based in the eastern Baltic. The Dutch wetlands also witnessed a somewhat different socio-economic trajectory. Here, pottery production was invented or adopted by hunter-gatherers from ca. 5000 cal BC, but domesticated animals, particularly domesticated cattle, and cereals do not appear in the sequence until ca. 4700 and 4300 cal BC, respectively (Raemaekers 1999, 2003; Louwe Kooijmans 2003). These groups are commonly termed the 'Swifterbant culture' due to their distinctive material culture, with sites often located in wetlands, between the Scheldt valley (Belgium) and Lake Dümmer (Lower Saxony, Germany) (Raemaekers 1999; Amkreutz 2013). Unlike most other parts of Europe, the adoption of farming in this region did not necessarily lead to large-scale changes in material culture or economic practices. A major economic transition is seen only later, with the introduction of TRB (Trichterbecherkultur) pottery, at ca. 4000 cal BC (ten Anscher 2012; Raemaekers 2012).

Here, we investigate the relationship between economic practices and material culture by undertaking the first lipid residue analysis of Swifterbant ceramics to determine their use. A key question is whether Swifterbant ceramics were associated with domesticated animal and plant foods once these became available, or whether culinary practices remained essentially unchanged and continued to reflect the hunter-fisher-gatherer economy. Our initial research focuses on three contemporaneous sites (S2, S3 and S4) in a small area of the Netherlands known as Swifterbant, the type site for the Swifterbant culture, dating from between 4300 and 4000 cal BC. By this time, cereals and domestic animals had become established in the region and had been incorporated into a broader, pre-existing economy based on fishing, hunting and gathering (leading to a so-called 'extended broad spectrum economy') (cf. Louwe Kooijmans 1993).
As well as investigating the role of pottery in these forager-farmer societies, this study also offers an opportunity to examine inter-site variation in pottery, given the different domestic (S3 and S4) and funerary/ritual (S2) functions that have been proposed for these sites (Devriendt 2014: 220). Lipid residue analysis of Swifterbant pottery is also relevant to the broader debate regarding the transition to farming and the role of ceramics therein; a debate that in Northern Europe is dominated by the Ertebølle culture. From its inception in the 1970s, the Swifterbant culture has been considered a western branch of the Northern European Ertebølle culture (De Roever 1979), an interpretation that still finds an audience (cf. De Roever 2004; Rowley-Conwy 2013). A competing interpretation is that its emergence was unrelated to the Ertebølle culture (Raemaekers 1997; Andersen 2010; ten Anscher 2012). Whereas this discussion has until now been based primarily on the technology and typology of the ceramics, the functional analysis provided here will add new fuel to this fire.

The archaeological sites

The sites of the Swifterbant cluster (Fig. 1) are located in Oostelijk Flevoland, the Netherlands. Oostelijk Flevoland is a large polder, the reclaimed floor of a lake, the IJsselmeer. The sites were discovered when the ditches between the agricultural plots were dug, and are part of a covered and well-preserved prehistoric landscape which consists of a Neolithic creek system and adjacent sand ridges (occupied during the Mesolithic and Neolithic). Swifterbant sites S2, S3 and S4 are located on the banks of the Neolithic creek system. S2 (52° 35' 3.0" N, 5° 34' 54.5" E) is located along the main Neolithic creek, while the adjacent S3 (52° 34' 44.8" N, 5° 34' 56.8" E) and S4 (52° 34' 46.5" N, 4° 34' 57.9" E)¹ are located along a side branch, 600 m south of S2 (Devriendt 2014) (Fig. 1). Several 14C dates from the sites confirm that they were occupied ca. 4300-4000 cal BC (Peeters 2007; Devriendt 2013). The pottery from these sites was extensively studied by De Roever (1979, 2004).

The archaeological remains indicate the exploitation of both domestic animals, such as pig, cattle and sheep/goat, and game animals, such as beaver and otter. The game animals were hunted for their fur and their meat (Zeiler 1997a). The faunal analysis indicates that pig bones, wild and/or domesticated, dominate the assemblage (Zeiler 1997a). In terms of fish remains, the sites provide clear evidence for both anadromous (sturgeon, grey mullet and eel) and freshwater (pike, perch and catfish) species (Brinkhuizen 1976; Clason 1978). In addition, archaeobotanical analyses indicated the presence of two types of cereals (naked six-row barley [Hordeum vulgare] and hulled emmer wheat [Triticum turgidum ssp. dicoccum]) and several different wild plant species, such as hazelnut, hawthorn, rose-hip and wild apple, among others.

¹ The DMS coordinates mentioned in the text correspond to the location of the archaeological sites. The degree of reliability is 1 m for all three sites. These coordinates were generated by Erwin Bolhuis (Groningen Institute of Archaeology) based on the information available online on the Dutch Ministry of Education, Culture and Science, National monument register page (for S2, https://monumentenregister.cultureelerfgoed.nl/monumenten/532464; for S3 and S4, https://monumentenregister.cultureelerfgoed.nl/monumenten/532465).
Sample selection

All three Swifterbant sites mentioned in this paper, S2, S3 and S4, are identified as unstratified midden deposits with no clear contextual information (Huisman and Raemaekers 2014). Therefore, sherds with different typological and technological features were sampled to make the collection as representative as possible. A selection of 62 sherds (S2, n = 14; S3, n = 19; and S4, n = 29), all representing individual ceramic vessels, were sampled for lipid residue analysis. During the process of selecting samples, each fragment was studied from the perspective of form, size, decoration, rim diameter and wall thickness (Online Resource 1). The samples were also analysed under the microscope in order to get a clear understanding of the temper (Online Resource 1). Based on the information collected, the sample set consists of 14 base fragments with either pointed or rounded bases, 28 rim fragments and 20 body fragments. The average wall thickness of the pottery is 10 mm for all three sites. Of the 28 rim fragments, 4 did not provide rim diameter information due to their small size. For the remainder, the rim diameter varied between 20 and 30 cm with an average of 25 cm, although there are 5 samples smaller than 20 cm and 3 samples greater than 30 cm, with examples of each appearing at all three sites.

Although one of the rim fragments from S2 has more prominent decoration than is usual, overall decoration appears to be uncommon and, where present, simple and matching the general description of Swifterbant pottery. The base fragments and body sherds show no decoration, with the exception of five body fragments that are decorated with nail impressions (four from S4 and one from S2). In contrast, rim fragments do show decorative patterns, mainly on the top of the rim and/or just below the rim, both interior and exterior, as well as around the neck, again both interior and exterior. The decoration on the top of the rim is a series of spatula or nail impressions, while that below the rim or on the neck area seems to consist of a series of shallow impressions and, occasionally, fingertip impressions, which circle the vessel (Fig. 2).

In terms of temper, our samples fit into the general scheme of Swifterbant pottery (Raemaekers and de Roever 2010). The majority of sherds from S3 (n = 14, out of 19) and S4 (n = 26, out of 29) indicate plant material together with mica, grit and sand (Online Resource 1). The sherds from S2, in contrast, show an even distribution between plant material (n = 7) and grit (n = 7) as the most abundant temper. Like S3 and S4, S2 also shows the presence of mica and sand as other tempers. The analysis of the temper does not indicate any correlations with wall thickness or decoration, as was suggested in a previous study on Swifterbant pottery (cf. Raemaekers et al. 2013). The fabric is extremely coarse with no deliberate surface treatment other than occasional hand smoothing. The hand smoothing is more visible on the S2 sherds than on the S3 and S4 sherds.

Acidified methanol extraction of lipids

Ceramic was drilled from the interior portion of each vessel (n = 62) and analysed using the established standard protocol, a one-step methanol/sulphuric acid extraction (Correa-Ascencio and Evershed 2014; Papakosta et al. 2015). The outer surface (~0-1 mm) of the sampling area was first removed, using a Dremel drill, to reduce external contamination to a bare minimum.
Then, the sherds were drilled to a depth of up to 5 mm on the interior surface to produce ca. two grams of pottery powder. An internal standard (alkane C34, 10 μL) was added to a subsample of powdered sherd (ca. 1 g), followed by 4 mL methanol. The suspended solution was sonicated for 15 min, then acidified with concentrated sulphuric acid (800 μL) and heated for 4 h at 70°C. Lipids were sequentially extracted with n-hexane (2 mL × 3). The extracts were combined and dried under nitrogen at 35°C. Finally, an additional internal standard (n-hexatriacontane, 10 μg) was added to each sample prior to analysis by gas chromatography-mass spectrometry (GC-MS) and gas chromatography-combustion isotope ratio mass spectrometry (GC-C-IRMS), in order to obtain molecular and single-compound carbon isotope results. To control for any contamination introduced during sample preparation, a negative control, containing no ceramic powder, was prepared and analysed with each sample batch.

Gas chromatography-mass spectrometry (GC-MS)

GC-MS analysis was carried out on an Agilent 7890A series GC attached to an Agilent 5975C Inert XL mass-selective detector. A splitless injector was used and maintained at 300°C. The column was inserted directly into the ion source of the mass spectrometer. Helium was used as the carrier gas, with a constant flow rate of 3 mL/min. The ionisation energy was 70 eV, and spectra were obtained by scanning between m/z 50 and 800. Samples (n = 62) were analysed using an Agilent DB-5ms (5%-phenyl)-methylpolysiloxane column (30 m × 0.25 mm × 0.25 μm). The temperature was set to 50°C for 2 min. This was followed by a rise of 10°C per minute up to 350°C. The temperature was then held at 350°C for 15 min. Compounds were identified by comparison with a library of mass spectral data and with published data.

All samples (n = 62) were also analysed using a DB-23ms (50%-cyanopropyl)-methylpolysiloxane column (60 m × 0.25 mm × 0.25 μm) in selected ion monitoring (SIM) mode to increase the sensitivity for the identification of isoprenoid fatty acids and ω-(o-alkylphenyl)alkanoic acids (APAAs), which can be used to characterise aquatic foods (Admiraal et al. 2018). The temperature was set to 50°C for 2 min. This was followed by a rise of 4°C per minute up to 140°C, then 0.5°C per minute up to 160°C and then 20°C per minute up to 250°C. The temperature was then held at 250°C for 10 min. Scanning proceeded with the first group of ions (m/z 74, 87, 213, 270), equivalent to 4,8,12-trimethyltridecanoic acid (TMTD) fragmentation; the second group of ions (m/z 74, 88, 101, 312), equivalent to pristanic acid; the third group of ions (m/z 74, 101, 171, 326), equivalent to phytanic acid; and the fourth group of ions (m/z 74, 105, 262, 290, 318, 346), equivalent to ω-(o-alkylphenyl)alkanoic acids of carbon length C16 to C22. Helium was used as the carrier gas with a constant flow rate of 2.4 mL/min. Ion m/z 101 was used to check the relative abundance of the two diastereomers of phytanic acid. Quantifications of the peak measurements were calculated with the integration tool of the Agilent ChemStation enhanced data analysis software.

Gas chromatography-combustion isotope ratio mass spectrometry (GC-C-IRMS)

Forty-two samples which had a lipid concentration over 5 μg g⁻¹ were analysed by GC-C-IRMS in duplicate, based on the existing protocol (Craig et al. 2012), in order to measure the stable carbon isotope values of two fatty acid methyl esters, methyl palmitate (C16:0) and methyl stearate (C18:0).
Samples were analysed using a Delta V Advantage isotope ratio mass spectrometer (Thermo Fisher, Bremen, Germany) linked to a Trace Ultra gas chromatograph (Thermo Fisher) with a GC Isolink II interface (Cu/Ni combustion reactor held at 1000°C; Thermo Fisher). All samples were diluted with hexane. Then 1 μL of each sample was injected onto a DB-5ms fused-silica column (60 m × 0.25 mm × 0.25 μm; J&W Scientific). The temperature was fixed at 50°C for 0.5 min. This was followed by a rise of 25°C per minute to 175°C, then 8°C per minute up to 325°C. The temperature was then held at 325°C for 20 min. Ultrahigh-purity-grade helium was used as the carrier gas with a constant flow rate of 2 mL/min. Eluted products were ionised in the mass spectrometer by electron ionisation, and the ion intensities of m/z 44, 45 and 46 were recorded for automatic computation of the 13C/12C ratio of each peak in the extracts (Heron et al. 2015). Isodat software (version 3.0; Thermo Fisher) was used for the computation, based on comparison with a standard reference gas (CO2) of known isotopic composition that was repeatedly measured. The results of the analyses were recorded in ‰ relative to an international standard, Vienna Pee Dee Belemnite (VPDB). N-alkanoic acid ester standards of known isotopic composition (Indiana standard F8-3) were used to determine the instrument accuracy. The mean ± standard deviation (SD) values of these n-alkanoic acid ester standards were −29.60 ± 0.21‰ and −23.02 ± 0.29‰ for the methyl esters of C16:0 (reported mean value vs. VPDB −29.90 ± 0.03‰) and C18:0 (reported mean value vs. VPDB −23.24 ± 0.01‰), respectively. Precision was determined on a laboratory standard mixture injected regularly between samples (28 measurements). The mean ± SD values of the n-alkanoic acid esters were −31.65 ± 0.27‰ for the methyl ester of C16:0 and −26.01 ± 0.26‰ for the methyl ester of C18:0. Each sample was measured in replicate (average SD is 0.07‰ for C16:0 and 0.13‰ for C18:0). Values were also corrected subsequent to analysis to account for the methylation of the carboxyl group that occurs during acid extraction. Corrections were based on comparisons with a standard mixture of C16:0 and C18:0 fatty acids of known isotopic composition processed in each batch under identical conditions.

Results of molecular analysis (GC-MS)

Based on the molecular analysis of the samples, 98% of the samples yielded sufficient lipids for interpretation (i.e. > 5 μg g⁻¹) (Evershed 2008; Craig et al. 2013), with an average of 243 μg g⁻¹ (ranging from 3 to 6186 μg g⁻¹). Variation in the degree of lipid preservation exists at all three sites. Samples with lipid yields lower than 5 μg g⁻¹ were not analysed by GC-C-IRMS. In general, the molecular analysis results indicate a high abundance of saturated palmitic (C16:0) and stearic (C18:0) acids in all the samples, together with a carbon range from C12 to C28. The palmitic/stearic acid ratios (P/S ratios) of all the samples are listed in Online Resource 1. Although palmitic (C16:0) and stearic (C18:0) acids are present in both animal and plant sources, stearic acid is generally found in higher concentration in terrestrial animals than in aquatic and plant food sources (Craig et al. 2007; Papakosta et al. 2015).
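As an aside, the P/S screening described above is simple to express in code. The sketch below is not part of the authors' workflow: it computes P/S ratios from integrated peak areas, and the peak-area values attached to the sample names are hypothetical.

```python
# Hedged sketch: compute palmitic/stearic (P/S) ratios from integrated GC-MS
# peak areas and flag residues that lean away from terrestrial animal fat.
def ps_ratio(area_c16_0: float, area_c18_0: float) -> float:
    """P/S ratio = palmitic (C16:0) peak area over stearic (C18:0) peak area."""
    return area_c16_0 / area_c18_0

samples = {"S305": (5200.0, 3100.0), "S328": (4800.0, 4900.0)}  # hypothetical areas
for name, (c16, c18) in samples.items():
    r = ps_ratio(c16, c18)
    # P/S > 1 is consistent with aquatic or plant inputs rather than terrestrial
    # animal fat (cf. Craig et al. 2007; Papakosta et al. 2015)
    print(name, round(r, 2), "aquatic/plant-leaning" if r > 1 else "terrestrial-leaning")
```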
Higher relative amounts of palmitic acid (P/S ratios > 1) in almost all the Swifterbant samples suggest that these vessels were used for processing aquatic food resources or plant products rather than terrestrial animal products. Forty-five of the samples yielded unsaturated fatty acids ranging between C16:1 and C22:1. Only five samples indicated the presence of dicarboxylic acids, all with a carbon chain length of nine. Experimental study has shown that dicarboxylic acids ranging between C8 and C11 are formed during the heating of aquatic oils. A total of eleven samples contained cholesterol, indicating the presence of animal fats (Evershed 1993). Although cholesterol may be derived from vessel use, it may also be a contaminant arising during handling of the sherds.

Thirty-one of 62 samples contained ω-(o-alkylphenyl)alkanoic acids (APAAs), with carbon atoms ranging from 18 to 22, and isoprenoid fatty acids, including TMTD (4,8,12-trimethyltridecanoic acid), pristanic acid (2,6,10,14-tetramethylpentadecanoic acid) and phytanic acid (3,7,11,15-tetramethylhexadecanoic acid). These data meet the established criteria for identifying aquatic lipids in ancient pottery (Hansel et al. 2004; Craig et al. 2007; Heron et al. 2015). In addition, APAAs are formed by heating of the polyunsaturated fatty acids found in aquatic organisms and must therefore have been derived from the primary use of the vessels (Hansel et al. 2004; Craig et al. 2007). Two samples yielded only C18, C20 and/or C22 APAAs with no isoprenoid acids. These are also considered evidence of aquatic products, because C20 and C22 APAAs are formed from long-chain polyunsaturated fatty acids (C20 and C22) which are not present in terrestrial animal fats (Hansel et al. 2004). Another four samples yielded partial aquatic biomarkers containing C18 APAA and isoprenoid acids (Online Resource 1).

None of the samples yielded plant-derived lipids (e.g. phytosterols) (Online Resource 1). Interestingly, scanning electron microscope (SEM) analysis of the carbonised surface deposits (foodcrusts) collected from pottery from the S3 site has indicated the processing of plant material (Raemaekers et al. 2013), albeit relating to different sherds than those analysed here. SEM analysis of S3 vessels identified plant fragments, such as chaff and leaf tissues of emmer (Triticum dicoccum), that survived the food processing and cooking stages. The SEM results indicated that plant products were cooked with other food sources, as one vessel also contained fish scale remains (Raemaekers et al. 2013). Given the evidence of the use of emmer in the foodcrusts from S3 vessels, the absence of plant biomarkers in our results may come as a surprise. As plant foods have low lipid content, they may be overprinted by animal fats and may therefore be very difficult to detect through lipid residue analysis (Colonese et al. 2017; Hammann and Cramp 2018). This opens up a new discussion on whether Swifterbant vessels were used for mixing freshwater fish and plant food sources. Resolving this requires further combined lipid residue and SEM analyses.

Isotopic identification of individual fatty acids (GC-C-IRMS)

Forty-two samples with sufficient fatty acid yields (> 5 μg g⁻¹) were analysed by GC-C-IRMS in order to determine the carbon stable isotope values of their C16:0 and C18:0 fatty acids. The data from the samples are listed in Dataset-1 (Online Resource 1) and plotted in Fig. 3a
against reference ranges of authentic modern animal fats collected from the Western Baltic. In Fig. 3b, the δ13C values of the C16:0 acid are plotted against Δ13C values (the difference between δ13C18:0 and δ13C16:0), which allows discrimination of ruminant adipose, non-ruminant and dairy fats (Craig et al. 2012). In general, the carbon isotope values from all three sites provided δ13C values of the C16:0 and C18:0 fatty acids consistent with freshwater organisms (Fig. 3a), confirming the results of the molecular analysis. The majority of the samples which plot in this area (21 out of 35) have fully aquatic biomarkers (Online Resource 1), verifying that they were used for processing aquatic products, mainly freshwater fish.

Two samples (S305 and S328) from S3 plot within the range of modern porcine and marine fats (Fig. 3a). Wild and possibly domesticated pig (S. scrofa/Sus domesticus) are the most abundant terrestrial species at S3 (Zeiler 1997a, p. 99). There is no evidence for marine mammals at the Swifterbant sites, and there are only two marine fish species, thin-lipped grey mullet (Mugil capito Cuvier) and flounder (Platichthys flesus L.), representing a very small percentage (1% of in situ material, n = 611; 0.4% of sieved material, n = 3825) of the total faunal material found at S3 (Brinkhuizen 1976; Clason 1978). In addition, both of these marine species are known to swim far upstream into freshwater environments (Brinkhuizen 1976; Clason 1978; Zeiler 1997a). Sturgeon (Acipenser sturio L.), an anadromous fish that migrates from the sea to the rivers in springtime to spawn and would be expected to have a marine carbon isotope signature, is also present at the Swifterbant sites (Brinkhuizen 1976; Clason and Brinkhuizen 1978), but again at a very small percentage (< 1%) (Zeiler 1997a). Based on this, it is clear that marine species were not a major part of the diet at Swifterbant S2, S3 and S4, and that there was no deliberate exploitation of the coastal areas for fishing or sea mammal hunting. Thus, it is unlikely that these ceramic vessels were used to process marine resources. As only one of these two samples contained fully aquatic biomarkers (S328) (Online Resource 1), a more plausible hypothesis is that this residue contains a mixture of freshwater and porcine-derived lipids.

None of the samples had Δ13C values lower than −1‰, the value that is an indicator of ruminant fat (Evershed et al. 2002; Copley et al. 2003; Craig et al. 2012) (Fig. 3b; Online Resource 1). It is known that ruminant animals, especially domesticated cattle, were present at all three Swifterbant sites (Raemaekers 1999), and they must have been part of the diet. However, based on the molecular and isotopic results of the samples, it is likely that ruminant products were processed and cooked in ways other than using pottery. Finally, the isotope values clearly indicate that there are no dairy products in any of the Swifterbant pots analysed, as the Δ13C values of the samples are all higher than −3.3‰ (Fig. 3b). It should be noted that even a minor contribution of ruminant fat would be expected to be detected, given that there is a strong bias against aquatic oils when mixed with ruminant fats due to the differences in fatty acid concentration between these products (Cramp et al. 2019).
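To summarise how the Δ13C thresholds cited above separate fat sources, here is a minimal sketch assuming δ13C values (‰ vs. VPDB) already corrected for methylation; the thresholds follow the text (Δ13C below −3.3‰ for dairy, below −1‰ for ruminant adipose), and the example values are invented.

```python
# Sketch of the Δ13C screening used to read Fig. 3b (invented example values).
def classify_fat(d13c_16_0: float, d13c_18_0: float) -> str:
    delta = d13c_18_0 - d13c_16_0   # Δ13C = δ13C(18:0) - δ13C(16:0)
    if delta < -3.3:
        return "ruminant dairy"
    if delta < -1.0:
        return "ruminant adipose"
    return "non-ruminant (e.g. porcine or aquatic)"

print(classify_fat(-28.0, -27.8))   # Δ13C = +0.2‰ -> non-ruminant
print(classify_fat(-30.0, -34.0))   # Δ13C = -4.0‰ -> ruminant dairy
```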
Relationship between form and function

The starting point for this analysis was the pilot study carried out on 32 vessels from Swifterbant S3 (Raemaekers et al. 2013). The combination of scanning electron microscopy (SEM) and organic residue analysis using direct temperature-resolved mass spectrometry (DTMS), a form of in-source pyrolysis mass spectrometry, distinguished two functional groups. The first group, of grit-tempered, thin-walled and relatively well-made pots, contained emmer wheat based on the SEM analysis, whereas the second group, of plant-tempered, thick-walled and relatively poorly made pots, showed no such evidence (Raemaekers et al. 2013). The lipid residue data presented here seemingly contradict this previous study. Although we tested different pots, we see no variation in vessel function by typological or technological features (Online Resource 1). According to the lipid residue evidence, Swifterbant pottery was used for processing freshwater fish regardless of vessel form, size, decoration or temper. In reconciling these studies, we need to take into account that the functional differences proposed in the Raemaekers et al. (2013) pilot study were revealed only by SEM analysis rather than by DTMS, and that processing of fish and cereals, either together or sequentially, could provide an explanation. Our current study underlines the relevance of combining lipid residue analysis and SEM analysis for the functional interpretation of ceramics, and it clearly outlines an avenue for future research.

Comparison between pottery use and other evidence for subsistence strategies

Based on analysis of the zooarchaeological and archaeobotanical remains, the subsistence economy at all three sites appears to have relied on a mixture of aquatic and terrestrial animal and plant resources, pointing to an economic pattern based on hunting-fishing-gathering, horticultural-scale cereal cultivation and small-scale animal husbandry (Cappers and Raemaekers 2008; Huisman et al. 2009; Huisman and Raemaekers 2014). Other dietary evidence, such as stable isotope analysis of human bones from two of the Swifterbant sites (6 human teeth from S2 and 4 human teeth from S3), indicates a high intake of aquatic foodstuffs together with a definite terrestrial input (Smits and van der Plicht 2009; Smits et al. 2010: Table 1). Evidence of butchery found on S3 pig/wild boar and cattle bones also supports this evidence (Zeiler 1997b). We conclude that while there is a bias against the identification of plant foods through lipid residue analysis, carcass fats from pigs and cattle should be readily identifiable, and therefore pigs and cattle must have been processed and cooked in other ways. Significantly, we found no evidence for dairy products, which are readily identifiable in prehistoric pottery from other sites in Northern and other areas of Europe (Craig et al. 2011; Heron et al. 2015; Cramp et al. 2019). The use of pottery vessels was instead focused on processing freshwater fish, which were selected from a much wider range of animal resources available.

Inter-site variation

There are important differences between the three sites. Most striking is the difference in the presence of burials. S2 has nine burials, whereas S3 has no burials and S4 has only a single inhumation. Another difference is the presence of postholes. Site S3 yielded many postholes, which are interpreted to be the remnants of a rebuilt house (c. 4.5 × 8 m).
Site S4 yielded only a few postholes, and these could not be attributed to a structure (Geuverink 2020). Site S2 produced only one line of postholes, and these did not correspond to a house plan (De Roever 2004). In addition, Devriendt proposes that S3 and S4 had a domestic or residential function on the basis of the dominance of scrapers in the flint tool assemblage, whereas S2 has many more retouched blades (Devriendt 2014). Some of these blades must have been imported as finished products, because they are larger than the flint cores found. One hypothesis was that S3 and S4 were domestic sites, where one might expect a full range of foods to have been cooked in the ceramic vessels, whereas S2 was a special-function site, where vessel use was primarily related to the burial ritual. It was not possible to support this hypothesis on the basis of our analysis. The lipid residue analysis does not indicate any inter-site functional variation in the Swifterbant pottery.

Interregional perspective: Swifterbant vs Ertebølle

While the Swifterbant (5000-4000 cal BC) and Ertebølle (4800-4000 cal BC) cultures were contemporary, the relationship between these groups is the subject of ongoing discussion, notably based on similarities and differences in ceramic vessels (De Roever 1979; Raemaekers 1997; De Roever 2004; Andersen 2010; Louwe Kooijmans 2010; ten Anscher 2012; Rowley-Conwy 2013). Along with the pointed-based pottery present in both cultural groups, the Ertebølle pottery repertoire also includes elongated bowls (blubber lamps) used for illumination (Heron et al. 2013), which are completely absent in Swifterbant assemblages. Later comparisons have focused on other material culture, such as lithic tools, as well as subsistence practices, which have highlighted greater differences between these two cultures (Deckers 1982; Raemaekers 1997; Raemaekers 1998; Stilborg 1999; Andersen 2010; Ballin 2014). An important difference is that, compared with the Swifterbant, there is very little evidence for domesticated plants and animals at any Ertebølle site, and the occasional find is interpreted to be the result of contact with nearby farmers (Krause-Kyora et al. 2013). With the new data we generated from the lipid residue analysis of the Swifterbant S2, S3 and S4 pottery assemblages, we can now contribute to the discussion from the perspective of pottery use.

Lipid residue analyses indicate that Late Mesolithic Ertebølle pottery (ca. 4600-3950 BC) from both coastal and inland sites had a broad range of functions, including processing of aquatic resources, both marine and freshwater (Craig et al. 2007), but also terrestrial animal fats, particularly ruminant fats (Craig et al. 2007; Philippsen et al. 2010; Heron et al. 2013; Philippsen and Meadows 2014; Papakosta et al. 2019). A recent study (Papakosta et al. 2019) shows mixing of aquatic and terrestrial food products in the Ertebølle pots based on their isotope values. Stable isotope analysis of carbonised surface deposits (foodcrusts) from inland Ertebølle sites also suggests a mixture of freshwater and terrestrial ingredients and is not able to rule out the presence of terrestrial plants (Philippsen et al. 2010; Philippsen and Meadows 2014). Moreover, phytoliths from garlic mustard seed were also found in Ertebølle pottery at Neustadt and Stenø, although no evidence for cereals in Ertebølle pottery has so far been recorded. The residue analysis undertaken on Ertebølle pottery contrasts with our results from the three Swifterbant sites.
Swifterbant pottery, at least based on the evidence from these three sites, had a more specialised function associated with freshwater fish. We therefore conclude that these different cultures did not share the same kind of approach towards the use of pottery, even when sites located in similar wetland environments are compared, e.g. the Store Åmose basin and Ringkloster in Denmark (Craig et al. 2011), although most of the Ertebølle lipid residue data are from coastal settlement sites. Unfortunately, comparable Swifterbant coastal settlements are absent due to erosion of the coastal zone, preventing a more detailed comparison.

Conclusion

The first combined molecular and isotopic analysis of lipids provides clear evidence for the processing of freshwater fish at all three studied Swifterbant sites. The homogeneity of the results is striking and shows that variation in size, decoration and temper is not mirrored in the use history of the vessels. Currently we have no evidence for different uses of vessels across the three sites, i.e. between the 'domestic sites' (S3 and S4) and the 'ritual site' (S2). The absence of ruminant fats and dairy products in the Swifterbant pottery is quite clear and in sharp contrast to European Neolithic pottery, where these products are readily detected (e.g. Cramp et al. 2019). While it may be that any differences are only manifest in the use of plant foods, which are difficult to detect through lipid analysis, it may also be a true reflection of homogeneity in Swifterbant pottery use. This possibility opens up other avenues of research, rethinking the production, exchange and use of pottery and the role pottery played in the expression of social identities and cultural preferences, as has been debated previously (Taché and Craig 2015; Robson et al. 2018). Additional analysis of Swifterbant pottery from different sites is clearly needed to contribute to the debate regarding the function of hunter-gatherer pottery in Northern Europe; nevertheless, the data presented here provide a significant advance in our knowledge of this period and region and point to culinary practices different from those of contemporary hunter-gatherers in adjacent regions.

Funding information: This project is part of a Marie Sklodowska-Curie European Joint Doctoral Training Program, funded by the European Union's EU Framework program for Research and Innovation Horizon 2020 under Grant Agreement No. 676154 (ArchSci2020 program).

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The value of middle cerebral artery to umbilical artery ratio by Doppler velocimetry in low risk postdate pregnancies

* Department of Obstetrics/Gynecology, College of Medicine, University of Sulaimani, Sulaimani, Iraq.
** Department of Health System, Kirkuk, Iraq.

Background and objective: Placental insufficiency is the primary cause of intrauterine growth restriction in normally formed fetuses. It can be identified using Doppler velocimetry of the middle cerebral artery to umbilical artery ratio, which provides an estimate of downstream placental vascular resistance and placental blood flow. There is a strong association between reduced end-diastolic umbilical artery blood flow velocity and increased vascular resistance in the umbilical placental microcirculation. Doppler ultrasound can assess the uteroplacental blood flow just before labor. This study aimed to investigate the use of the fetal cerebroumbilical ratio to predict intrapartum fetal compromise in appropriately grown fetuses.

Methods: A comparative cross-sectional study set at Sulaimania Maternity Teaching Hospital, Sulaimania, Iraq, from January to June 2015. The study recruited 121 cases; fetal biometry and Doppler indices were measured before established labor. The intrapartum and neonatal outcome details were recorded.

Results: Infants delivered by cesarean section for fetal compromise had a significantly lower cerebroumbilical ratio than those born by spontaneous normal (non-assisted) vaginal delivery or by cesarean section for other intrapartum causes. Infants with a cerebroumbilical ratio below the 10th percentile were more likely to be delivered by cesarean section for fetal compromise than those with a cerebroumbilical ratio above the 10th percentile. A cerebroumbilical ratio above the 90th percentile appears protective against cesarean section for fetal compromise. An amniotic fluid index of < 5 was associated with an increased cesarean section rate for fetal indication.

Conclusion: The cerebroumbilical ratio can identify fetuses at high risk of intrapartum fetal compromise. As a confounding variable, the amniotic fluid index was a useful tool for surveillance in prolonged pregnancy.

Introduction

Postdate pregnancy is a common problem. Its incidence has been reported to be between 4-14%, with an average of 10.5%.1
A safe limit for the continuation of pregnancy beyond the expected date of delivery cannot be established, and there is also controversy on whether the risk of fetal hypoxia can be accurately predicted in these pregnancies. Rayburn and Chang suggested that the risk of postmaturity starts at 40 weeks.2 Postdate pregnancies have been associated with increased perinatal morbidity and mortality up to 42 weeks, and these risks increase after 42 weeks (post-term pregnancy).3 Increased incidence of induction of labor, instrumental delivery, cesarean section, shoulder dystocia, lower Apgar score, congenital malformation, meconium aspiration, and fetal asphyxia have been associated with these pregnancies.3,4 These problems can be decreased by routine antepartum fetal surveillance prior to the onset of spontaneous labor.5,6 The current methods of fetal surveillance, like the nonstress test (NST), amniotic fluid index (AFI), biophysical score (BPS), umbilical artery (UA) S/D ratio and middle cerebral artery (MCA) pulsatility index (PI), cannot accurately predict the fetus at risk of adverse perinatal outcome.3,6 Various studies have investigated the MCA to UA (cerebroumbilical, CU) ratio in post-term pregnancies with high risk complicating factors like chronic hypertension, pregnancy induced hypertension (PIH) and diabetes, and found it to accurately predict fetal compromise.7,8 These conditions, however, are known to affect the vascular bed, the placental circulation, and the blood flow to the fetus. Very few studies have been done on the value of the middle cerebral artery to umbilical artery ratio (CU ratio) in determining the perinatal outcome in low risk postdate pregnancies.8

The incidence of postdate pregnancy is higher in first pregnancies and in women who have had a previous postdate pregnancy. Genetic factors may also play a role. One study showed an increased risk of postdate pregnancy in women who were, themselves, born postdate.9 An ultrasound examination performed in the first half of pregnancy is the most reliable method of calculating the date the baby is due, especially in women with long or irregular menstrual cycles.10 The American College of Obstetricians and Gynecologists has stated that it is only necessary to start antenatal fetal monitoring after 42 weeks (294 days) of gestation, although many obstetric care providers will start fetal testing at 41 weeks. Many experts recommend twice weekly testing, including measurement of amniotic fluid volume. Testing may include observing the fetus' heart rate using CTG, or observing the baby's activity by biophysical profile.11 Several studies have suggested that oligohydramnios detected by ultrasound is a useful test for detecting placental insufficiency,12,13 although AFI is of limited value in identifying fetuses at risk of compromise.14 A good liquor volume is a reassuring sign that the fetus has not been subjected to chronic hypoxia in the antenatal period,15 and this was reflected in our study.

Doppler velocimetry: The presence of forward flow during diastole is seen in arteries supplying low resistance vascular beds. Diastolic components disappear or reverse as the peripheral impedance increases.16 Different indices have been proposed to define the properties of the Doppler spectrum, the most common in obstetric application being the pulsatility index, the resistance index and the systolic/diastolic (S/D) ratio.16
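For readers unfamiliar with these indices, here is a minimal sketch of how the PI, RI, S/D ratio and the derived cerebroumbilical (CU) ratio are computed from peak systolic (S), end-diastolic (D) and time-averaged mean (M) velocities; the velocity values in the example are hypothetical.

```python
# Sketch of the standard Doppler indices (hypothetical velocity values).
def sd_ratio(s: float, d: float) -> float:
    return s / d                 # systolic/diastolic (S/D) ratio

def resistance_index(s: float, d: float) -> float:
    return (s - d) / s           # RI (Pourcelot index)

def pulsatility_index(s: float, d: float, m: float) -> float:
    return (s - d) / m           # PI, using the time-averaged mean velocity

# Cerebroumbilical (CU) ratio: the MCA index divided by the UA index.
mca_pi = pulsatility_index(45.0, 12.0, 22.0)   # hypothetical MCA velocities, cm/s
ua_pi = pulsatility_index(60.0, 25.0, 38.0)    # hypothetical UA velocities, cm/s
print(f"CU (PI) ratio = {mca_pi / ua_pi:.2f}")  # low values flag possible compromise
```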
16The closer the measurement site is to the placenta, the less is the wave reflection and the greater the end -diastolic flow.Consequently, the Doppler waveform that represents arterial flow velocity demonstrates progressively declining pulsatility and the indices of pulsatility from the fetal to the placental end of the cord. 17eta-analysis of the previously published randomized controlled studies clearly indicates that UA velocimetry assessment decreases the perinatal mortality from intrauterine growth retardation (IUGR) without any increase in the rate of unnecessary obstetric interventions in high -risk pregnancies.Knowledge of Doppler flow velocimetry of the fetal Middle cerebral artery (MCA) may assist in perinatal diagnosis and management of complicated pregnancies and high risk pregnancy.A low index of pulsatility in the middle cerebral artery associated with fetal compromise has been described. 18ecause the fetal Middle cerebral artery to umbilical artery (MCA/UA) ratio incorporates data not only on placental status but also on the fetal response, it is potentially more advantageous in predicting the perinatal outcome.Doppler data combining both umbilical and cerebral velocimetry provide additional information on fetal consequences of the placental abnormality. 19The current data, however, question the benefits of using CU Arterial Doppler velocimetry ratio as a routine screening test for fetal hypoxia or acidosis in low-risk pregnancies. 20This study was designed to study the Doppler waveforms in UA and MCA, and CU ratio in uncomplicated postdate pregnancies, and to correlate these findings with the perinatal outcome.It also aimed to determine the cutoff value of CU ratio for predicting adverse perinatal outcome in these pregnancies. 166 A comparative cross-sectional study of 121 intrapartum women was done after considering the inclusion and exclusion criteria.RI and PI index of the umbilical, middle cerebral arteries and the ratio of cerebro-umbilical arteries were obtained from all the 121cases.Data were analyzed.The mean + SD of maternal age, and BMI of the studied group seen in Table 1. Results elevated at about 45 degrees.Fetal biometry and AFI and FHR were recorded.In addition, the resistant index (RI) and pulsatility index (PI) of the umbilical artery (UA) and middle cerebral artery (MCA) were recorded using a manual trace of one proper waveforms.Each parameter was recorded twice and a mean of these values used for data analysis.Umbilical artery Doppler studies were obtained from a mid-segment of the umbilical cord.All US examinations were undertaken by using (SIEMENS -SONOLINE G50) Machine.Labour was then managed according to local protocols and guidelines.Outcome measures for this study were collected including intrapartum data were diagnosis of fetal compromise (based on abnormal FHR during electronic fetal heart monitoring or CTG abnormalities), and presence of meconium stained liquor and mode of delivery, and neonatal outcome were assessed by examining birth weight, Apgar score at the 1 st and 5 th minutes, admission to neonatal care unit.Data analysis was performed using the statistical package for the social sciences (version 20), and Microsoft Word and Excel have been used to generate graphs and tables. 
Results: A comparative cross-sectional study of 121 intrapartum women was done after considering the inclusion and exclusion criteria. The RI and PI of the umbilical and middle cerebral arteries and the cerebro-umbilical ratios were obtained from all 121 cases, and the data were analyzed. The mean ± SD of maternal age and BMI of the studied group are seen in Table 1. A significant positive correlation was found in this study between the RI ratio and the 1st minute and 5th minute Apgar scores (Table 4). A significant positive correlation was also found between the PI ratio and the 1st minute and 5th minute Apgar scores (Table 6). Highly significant differences were found in the PI ratio in women who had a vaginal delivery and in those with meconium-stained liquor.

Discussion: Another study, by Farrell in 1999, found that the UA PI is known to be elevated in cases of fetal growth restriction; however, its use in early labor does not appear to be a good predictor of adverse perinatal outcome.22 In a 1999 study by Bahado-Singh on 203 fetuses at risk of intrauterine growth restriction, there was a statistically significant increase in perinatal morbidity and mortality in cases with an abnormal cerebro-umbilical ratio.23 In addition, some studies were performed in high-risk and some in low-risk populations. In the mentioned study, two groups of mothers were followed: one group had normal pregnancies without risk factors and acted as a control group, and the other had IUGR. The middle cerebral artery PI did not differ significantly between the two groups (P = 0.3), but the umbilical artery PI did (P = 0.03). Similarly, the CU ratio differed significantly between the two groups (P = 0.01). This shows the importance of the C/U ratio as a predictor of fetal compromise.24 In this study, we observed a strong relation of meconium-stained liquor with the C/U ratio. This finding agrees with the study done by Tomas Prior in 2013,25 which revealed a higher rate of meconium-stained liquor in infants with the lowest C/U ratio (0.76). Lam,25 in 2005, who evaluated the use of the AFI, MCA PI, UA PI, and C/U ratio in surveillance of postdate pregnancy, found that the MCA PI was the only parameter that had a statistically significant correlation with grade three or thick meconium-stained liquor.
Another study, performed by Mishra in 2013,26 showed that Doppler flow velocimetry of the fetal MCA may assist in the perinatal diagnosis and management of complicated pregnancies, a low pulsatility index in the middle cerebral artery being associated with fetal compromise. Because the MCA/UA ratio incorporates data not only on placental status but also on the fetal response, it is potentially more advantageous in predicting the perinatal outcome, and Doppler data combining both umbilical and cerebral velocimetry provide additional information on the fetal consequences of placental abnormality.26 Another study, performed in 2013 by Rajesh M., found that the cerebral/umbilical pulsatility ratio (C/U ratio) has been recognized as the more sensitive and specific indicator of the likelihood of IUGR and adverse perinatal outcome.27 The results of this study suggest that the fetal C/U ratio, measured before established labor, can predict the diagnosis of intrapartum fetal compromise, and is a better predictor of adverse perinatal outcome than an abnormal MCA PI or UA PI alone. The mean values of the CU RI and CU PI ratios were 1.2 and 1.4, respectively. Placio et al.28 observed a mean CU ratio of 1.36 at 41 weeks and 1.27 at 42 weeks. In the present study, a cutoff value of the CU ratio of 1 was obtained and used for correlating perinatal outcome in these postdate pregnancies. The CU ratio of 1 for predicting adverse perinatal outcome in the present study was similar to the value established by Devine et al.,29 who observed that the CU ratio in their study had high specificity and positive predictive value for determining adverse perinatal outcome. This difference appears to be because they included high-risk postdate pregnancies with complicating factors like diabetes, chronic hypertension, and PIH, unlike the present study, in which only uncomplicated postdate pregnancies were enrolled.

Conclusion: These parameters are a good predictor of adverse perinatal outcome during intrapartum care, can be used as a tool to guide the mode of delivery in low-risk pregnancy, and can be considered a useful technique for fetal surveillance in prolonged pregnancy.

Table 1: Mean age, birth weight, and BMI of the study population.
Table 2: Mean ± SD of specific parameters in the studied group.
Table 3: RI ratios in the parameters of the studied group.
Table 4: Correlation coefficient between the RI ratio and other parameters in the studied group.
Table 5: PI ratios in the parameters of the studied group.
Table 6: Correlation coefficient between the PI ratio and other parameters in the studied group.
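Since the discussion evaluates the cutoff in terms of specificity and positive predictive value, the following sketch shows how those quantities follow from a standard 2x2 classification table. It is our own illustration; the counts are placeholders, not the study's data.

    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard test-performance measures from a 2x2 table:
        rows = CU ratio <= cutoff (test +/-), columns = adverse outcome (yes/no)."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)   # positive predictive value
        npv = tn / (tn + fn)   # negative predictive value
        return sensitivity, specificity, ppv, npv

    # Placeholder counts (NOT the study's results), summing to 121 for flavor:
    se, sp, ppv, npv = diagnostic_metrics(tp=18, fp=6, fn=9, tn=88)
    print(f"Se={se:.2f} Sp={sp:.2f} PPV={ppv:.2f} NPV={npv:.2f}")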
2019-01-25T21:13:57.702Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "8c27226102cbf3aa9778a5ea73717686bc5fc8cf", "oa_license": "CCBYNCSA", "oa_url": "https://zjms.hmu.edu.krd/index.php/zjms/article/download/211/183", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "8c27226102cbf3aa9778a5ea73717686bc5fc8cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248216328
pes2o/s2orc
v3-fos-license
On the smallest open Diophantine equations

This paper reports on the current status of the project in which we order all polynomial Diophantine equations by an appropriate version of "size", and then solve the equations in that order. We list the "smallest" equations that are currently open, both unrestricted and in various families, like the smallest open symmetric, 2-variable or 3-monomial equations. All the equations we discuss are amazingly simple to write down, but some of them seem to be very difficult to solve.

In knot theory, researchers order all knots by a natural parameter, the number of crossings, and then try to answer questions of interest systematically for all knots with the given number of crossings. A similar approach is taken in many other areas of mathematics: if a problem is difficult or undecidable in general, researchers order the instances of the problem in some natural way, and then try to "solve" at least "small" instances. For example, the Halting problem has been solved for all 2-symbol Turing machines with 2, 3 and 4 states, so the next open case is 5-state machines. In contrast, Diophantine equations are not studied in order, and often attract interest just because some time ago a famous mathematician wrote down an equation and asked to solve it. One reason for this is that it is not obvious how to order all Diophantine equations in a natural way. It is easy to arrange in order some specific infinite families of equations. For example, Fermat's Last Theorem can be treated as one exponential equation or as an infinite family of polynomial Diophantine equations ordered by the value of the parameter n. Similarly, the equations

x^3 + y^3 + z^3 = k (1)

can be naturally ordered by the parameter k. However, it is less easy to order all polynomial Diophantine equations, that is, all equations of the form

P(x_1, . . . , x_n) = 0, (2)

where P is a polynomial with integer coefficients. For this, we need to assign to every Diophantine equation a "size" parameter, such that for any bound B there is only a finite number of equations of size at most B. Many natural notions of "size" or "height" do not satisfy this condition. For example, if we define the height of the equation (2) as the maximum (or sum) of absolute values of the coefficients of P, then there are infinitely many equations of height 1, e.g. x^n = 0, n = 1, 2, . . . . It is natural to define the "size" of equation (2) as the sum of the sizes of the monomials of P, so it is left to define the size of a monomial a x_1^{k_1} · · · x_n^{k_n}. If we consider an equation as an input to a computer program which then tries to solve it, then the standard way to measure the size of the input is the number of bits needed to describe it. For simplicity, let us assume that we do not use the power symbol, and write x_1^{k_1} as x_1 x_1 . . . x_1 (k_1 times) and so on. Then we need d = k_1 + · · · + k_n symbols to write x_1^{k_1} · · · x_n^{k_n}, where d is the degree of the monomial. We also need about log_2 |a| symbols to write the coefficient a in binary, so we can define the length of the monomial as l = log_2 |a| + d. This is not an integer, but ordering the monomials by l is equivalent to ordering them by H = 2^l = |a| · 2^d, which is an integer. Hence, let us define the size of equation (2) as

H(P) = |a_1| · 2^{d_1} + · · · + |a_k| · 2^{d_k}, (3)

provided that the polynomial P consists of k monomials with integer coefficients a_1, . . . , a_k and degrees d_1, . . . , d_k, respectively. For example, for the equation (1) with k = 3 we have H = 2^3 + 2^3 + 2^3 + 3 = 27.
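As a quick check of the definition, formula (3) is easy to compute mechanically. The sketch below is our own illustration, with the polynomial encoded as a list of monomials; it reproduces H = 27 for the sum-of-three-cubes equation with k = 3.

    def size_H(monomials):
        """Size of a Diophantine equation per formula (3).
        Each monomial is (coefficient, degree); H = sum of |a| * 2^d."""
        return sum(abs(a) * 2 ** d for a, d in monomials)

    # x^3 + y^3 + z^3 - 3 = 0: three cubic monomials and the constant -3.
    print(size_H([(1, 3), (1, 3), (1, 3), (-3, 0)]))  # -> 27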
This notion of size has been suggested by anonymous mathoverflow user Zidane, who asked in 2018 what is the smallest open Diophantine equation, see [36]. Note that H is always a non-negative integer, and, for any B, there is a finite number of equations with H ≤ B. Hence, we may list all equations with sizes H = 0, 1, 2, . . . , try to solve them in this order, and report the smallest equations we cannot solve. This is exactly the purpose of this paper. In [16], this project is implemented for the Hilbert 10th problem, that is, the problem of determining whether an equation has any integer solution. In this paper, we consider the more general problem of determining all integer solutions, but also re-iterate the open questions posted in [16]. Section 2 considers the most general problem of solving the equations completely, including the description of the solution set if it is infinite. Section 3 considers this problem for the 2-variable equations only. In Section 4, we consider an easier problem of determining whether the solution set is finite, and if so, listing all the solutions. In Section 5, we study an even easier problem of determining whether a given equation has any integer solution or not. For each of these problems, we have identified the smallest equations for which the problem is non-trivial, and challenge the readers to try to solve these equations.

Describing all solutions: polynomial families

We have exactly one equation 0 = 0 of size H = 0, and exactly two equations ±1 = 0 of size H = 1. More generally, for every H > 0, we have two equations ±H = 0 with no variables and no solutions. If we do not count these as "equations" and insist that every equation should have at least one variable, then the smallest equations are ±x = 0 of size H = 2 and the next smallest are ±x ± 1 = 0 of size H = 3. All these equations are trivial to solve. More generally, for any equation in one variable

a_m x^m + · · · + a_1 x + a_0 = 0 (4)

with integer coefficients a_m, . . . , a_0, any non-zero integer solution must be a divisor of a_k, where k is the smallest integer such that a_k ≠ 0. This gives an algorithm to list all integer solutions of (4). Moreover, this can be done in polynomial time, see [9]. Further, we will consider only equations in at least two variables. The smallest such equations are ±x ± y = 0 and ±xy = 0 of size H = 4. To avoid solving essentially the same equations multiple times, we call two equations equivalent if one can be transformed into another after multiplication by −1 and/or substitutions in the form x_i → −x_i. Obviously, it suffices to consider only one equation from each equivalence class. With this convention, the only equations of size H = 4 we need to consider are

x + y = 0 (5)

and

xy = 0. (6)

Both these equations have infinitely many integer solutions, which can be presented in a parametric form. For equation (5), the solutions are (x, y) = (u, −u), where u is an integer parameter. This is a special case of a polynomial family, as defined below.

Definition 2.1. We say that a subset S ⊂ Z^n is a polynomial family if there exist polynomials P_1, . . . , P_n in k variables u_1, . . . , u_k such that (x_1, . . . , x_n) ∈ S if and only if there exist integers u_1, . . . , u_k such that x_i = P_i(u_1, . . . , u_k), i = 1, . . . , n.

In this terminology, the solution set of the equation (5) is a polynomial family with k = 1 parameter u, P_1(u) = u and P_2(u) = −u. More generally, the solution set of any equation in the form

x_n + Q(x_1, . . . , x_{n−1}) = 0 (7)

can be represented as the polynomial family x_i = u_i, i = 1, . . . , n − 1, x_n = −Q(u_1, . . . , u_{n−1}).
The solution set to the equation (6) is (x, y) = (0, u) or (x, y) = (u, 0) for any integer u. Note that it is not a polynomial family but a union of two polynomial families. Obviously, if we can represent the solution set as a union of any finite number of polynomial families, then we can classify the equation as "solved". Note that if the solution sets of the equations P_1 = 0 and P_2 = 0 are finite unions of polynomial families, then the same is true for the equation

P_1 · P_2 = 0. (8)

From now on, we will exclude the equations of the forms (7) and (8) from further analysis. For H = 5, we would like to mention the equation

2x + 1 = 0. (9)

It is a one-variable equation (4), and we have already discussed and excluded all such equations. However, it is interesting as the smallest equation for which the left-hand side is always odd and therefore cannot be equal to 0. More generally, if there exists an integer m ≥ 2 such that P(x_1, . . . , x_n) is never divisible by m, then equation (2) has no integer solutions. Hence, we may exclude such equations. The only non-excluded equation of size H = 5 is

xy + 1 = 0 (10)

whose solutions are (x, y) = (1, −1) and (x, y) = (−1, 1). There are no other integer solutions because in every potential solution (x, y) both x and y must be a divisor of 1. More generally, any equation of the form

P_1 · P_2 = c (11)

for some integer c can be solved by enumerating the divisors d of c, and, for every divisor, solving the system of equations P_1 = d, P_2 = c/d. From now on, we will exclude the equations of the form (11) solvable by this method. Families (4), (7) and (11) cover all the equations of size H ≤ 7. For H = 8, there are some equations not considered so far, for example, the equation

x^2 + y^2 = 0, (12)

whose only integer solution is x = y = 0 because this is the only real solution. More generally, we can exclude all equations (2) in n variables whose set of real solutions is a bounded region in R^n, because any bounded region has at most a finite number of integer points, and we may use direct substitution to check which of these points are solutions to (2). Another equation not excluded so far is

xy + 2z = 0. (13)

With z' = 2z, (13) reduces to the equation xy + z' = 0 with the extra condition that z' is even. The solution to the latter is the polynomial family (x, y, z') = (u, v, −uv). By considering all possible parities of u, v, we can represent it as the union of 4 polynomial families: (i) (x, y, z') = (2u, 2v, −4uv); (ii) (2u, 2v + 1, −2u(2v + 1)); (iii) (2u + 1, 2v, −2v(2u + 1)); (iv) (2u + 1, 2v + 1, −(2u + 1)(2v + 1)). In each case, the parity of z' is determined: z' is even in cases (i)-(iii) but odd in case (iv). By substituting z = z'/2 in formulas (i)-(iii), we obtain parametric solutions to (13). Exactly the same argument implies, more generally, the following observation.

Proposition 2.2. If the solution set of the equation P(x_1, . . . , x_n) = 0 is a finite union of polynomial families, then the same is true for the equation P(a_1 x_1 + b_1, . . . , a_n x_n + b_n) = 0, where a_1, . . . , a_n are non-zero integers and b_1, . . . , b_n are arbitrary integers.

Proposition 2.2 allows us to do arbitrary linear substitutions in the form a x_i + b_i → x'_i. For example, any equation of the form

a x_n + Q(x_1, . . . , x_{n−1}) = 0, (14)

where a ≠ 0 is an integer, reduces to (7) by the substitution a x_n → x'_n and can therefore be excluded from further analysis. All equations we have considered so far are trivial. The smallest equation that deserves to be given to students as an exercise is the equation

x^2 = yz. (15)

The question is, of course, how to represent all solutions as a polynomial family. To solve this exercise, the students should note that any integers y and z can be represented as y = uv^2 and z = u'w^2 with u, u' squarefree.
Now, for yz to be a perfect square we must have u = u', hence z = uw^2, and then x = uvw. Conversely, for any u, v, w (not necessarily square-free), the triple (x, y, z) = (uvw, uv^2, uw^2) is a solution to (15). Another easy exercise is to write as a polynomial family the set of all solutions to the equation

xy = zt. (16)

To solve it, denote by u_1 the greatest common divisor of x and z (assuming that x and z are not both 0). Then x = u_1 u_2 and z = u_1 u_3, where u_2 and u_3 are co-prime integers. Then u_2 y = u_3 t implies that t is divisible by u_2 and can be written as t = u_2 u_4 for an integer u_4. Then y = u_3 u_4, and we obtain a polynomial family of solutions (x, y, z, t) = (u_1 u_2, u_3 u_4, u_1 u_3, u_2 u_4). It is left to note that the remaining case x = z = 0 is also covered by this parameterization. This finishes the analysis of all equations of size H ≤ 8. With H = 9, the problem suddenly jumps from student-exercise level to research level. The question whether the set of integer solutions of the equation

xy − zt = 1 (17)

is a polynomial family has been first asked by Skolem [30] in the 1930's, remained open for over 70 years, and has been answered by Vaserstein [32] in 2010, who proved that it is indeed a polynomial family with 46 parameters. As a corollary of this result, Vaserstein also showed that, for any integer c, the solution set of the equation

x^2 − yz = c (18)

is the union of a finite number of polynomial families. In particular, this covers the equations x^2 − yz = ±1, and finishes the analysis of all equations of size H ≤ 9. Vaserstein also proved that, for any c, the solution set of the equation

xy + zt + Q(x_1, . . . , x_m) = c, (19)

where Q is a quadratic form, is the union of a finite number of polynomial families. For example, with Q = 0 and c = 2, this covers the equation xy + zt = 2 of size H = 10. Vaserstein's solution to (17)-(19) immediately allows us to solve many other equations. For example, let x, y, z be a solution to the equation

x^2 + x = yz (20)

of size H = 10. Because x^2 + x = x(x + 1), the prime factors of y are distributed between x and x + 1, hence y can be written as y = ab, where a and b are divisors of x and x + 1, respectively. With c = x/a and d = (x + 1)/b, we get db − ac = (x + 1) − x = 1, which is exactly (17), hence the set of such (a, b, c, d) is a polynomial family. Then (x, y, z) = (ac, ab, cd) is also a polynomial family. More generally, any equation of the form

a x^2 + bx + c = d yz (21)

can be rewritten as 4ad·yz = 4a^2x^2 + 4abx + 4ac = (2ax + b)^2 − (b^2 − 4ac). This allows us to conclude that the solution set to (21) is a finite union of polynomial families by Proposition 2.2, and exclude such equations from further analysis. More generally, we may exclude all equations that, after linear substitutions as in Proposition 2.2, can be reduced to the already solved ones.
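The divisor-splitting argument for equation (20) is easy to test numerically. The sketch below is our own illustration; it enumerates, for a given x, the divisor pairs described above and confirms that every produced triple solves x^2 + x = yz.

    def solutions_from_divisors(x):
        """Triples (x, y, z) with y*z = x^2 + x obtained by splitting the
        factors of x and x + 1, following the argument for equation (20)."""
        def divisors(n):
            n = abs(n)
            return [d for d in range(1, n + 1) if n % d == 0] or [1]
        triples = set()
        for a in divisors(x):
            for b in divisors(x + 1):
                c, d = x // a, (x + 1) // b
                triples.add((x, a * b, c * d))  # y = ab, z = cd
        return triples

    for x, y, z in sorted(solutions_from_divisors(6)):
        assert x * x + x == y * z  # 42 = y*z for every produced triple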
After this, the only remaining equations of size H ≤ 11 are trivial two-variable quadratic equations like y^2 = x^2 + x ± 1. For such equations, (|x| − 1)^2 < x^2 + x ± 1 < (|x| + 1)^2 for |x| > 2, and also x^2 + x ± 1 ≠ |x|^2, hence x^2 + x ± 1 cannot be a perfect square. By checking the cases |x| ≤ 2, we may easily list all the solutions. For H = 12, we meet similar equations y^2 = x^2 + x ± 2 and y^2 = x^2 + 2x, that can be solved by exactly the same method. A (trivial but) notable equation is

2x^2 − y^2 = 0, (22)

whose only integer solution is (x, y) = (0, 0). If it had a non-zero integer solution (x, y), then the rational number t = y/x would satisfy t^2 = 2, hence the non-existence of non-zero integer solutions to (22) is equivalent to the irrationality of √2, a famous old problem with a rich history. Similarly, the equation x^2 + xy − y^2 = 0 reduces to the non-existence of rational solutions to t^2 + t − 1 = 0. Next, the equation

2xy + x + y = 0 (23)

reduces to the question for which x the ratio x/(2x + 1) can be an integer. It is easy to see that this is possible only for x = −1 and x = 0, leading to the solutions (x, y) = (−1, −1) and (x, y) = (0, 0). The more general equation a·xy + bx + cy + d = 0 reduces to the question when (bx + d)/(ax + c) is an integer, which can be easily answered in full generality, see [16]. Finally, all integer solutions to the equation y^2 = x^3 can be written as (x, y) = (u^2, u^3) for an integer u. This finishes the analysis of 2-variable equations of size H = 12. Among the 3-variable ones, the most famous is the equation

x^2 + y^2 = z^2, (24)

whose integer solutions are known as Pythagorean triples. A standard approach to solving this equation is noting that if z = 0 then x = y = 0, and otherwise the equation can be written as (x/z)^2 + (y/z)^2 = 1 and reduces to finding rational points on the circle x^2 + y^2 = 1. To find them, we can choose any one rational point, say (x, y) = (1, 0), and draw all possible lines through this point with rational slope k. Any such line intersects the circle at (1, 0) and in another point, which can be easily seen to be a rational point. This way we get a parameterization of all rational points with rational parameter k. From this, we can easily write down all integer solutions to (24). The answer is the union of two polynomial families (x, y, z) = ((u^2 − v^2)w, 2uvw, (u^2 + v^2)w) and (x, y, z) = (2uvw, (u^2 − v^2)w, (u^2 + v^2)w). We continue our analysis of equations of size H = 12 by discussing equations with two monomials, which are quite easy. For example, if (x, y, z) satisfy the equation x^2 y − z^2 = 0, then y must be a perfect square, hence the solution is (x, y, z) = (u, v^2, uv). As another example, consider the equation x^3 = yz. We can write y = u_1^3 u_2^2 u_3 and z = u_4^3 u_5^2 u_6 with u_2, u_3, u_5, u_6 co-prime and square-free. Then yz is a perfect cube only if u_5 = u_3 and u_6 = u_2, resulting in the answer (x, y, z) = (u_1 u_2 u_3 u_4, u_1^3 u_2^2 u_3, u_4^3 u_3^2 u_2). A bit more difficult is the equation x^2 y = zt. The cases y = 0 and z = 0 can be considered separately, so we assume that yz ≠ 0. Let u_1 = gcd(y, z), so that y = u_1 u_2 and z = u_1 v_2 with u_2, v_2 coprime. Then x^2 u_2 = t v_2. Hence t is divisible by u_2, let us write t = u_2 v_1. Then x^2 = v_1 v_2, but this is the equation (15). Its solution is x = uvw, v_1 = uv^2, v_2 = uw^2, which leads to (x, y, z, t) = (uvw, u_1 u_2, u_1 u w^2, u_2 u v^2). By a similar analysis, we may solve the remaining two-monomial equations. There are three equations left of size H = 12 that we have not discussed so far. The equation

xyz + x + y = 0

has a family of solutions (x, y, z) = (0, 0, t); otherwise xy ≠ 0 and −z = (x + y)/(xy). The equation

x^2 − y^2 = zt

can be reduced, after the substitutions x − y = x' and x + y = y', to the equation x'y' = zt, which is equation (16) with the constraint that either (i) x' and y' are both even, or (ii) x' and y' are both odd. In each case, the solution is a finite union of polynomial families by Proposition 2.2. Finally, all integer solutions to the equation

x^2 + y^2 = zt

can be parametrized as x = k(uv − wr), y = k(ur + vw), z = k(u^2 + w^2), t = k(v^2 + r^2) for integers u, v, w, r, k. The proof uses the fact that the set of Gaussian integers Z[i] = {a + bi : a, b ∈ Z}, where i = √−1, is a unique factorization domain. We then rewrite the equation as (x + iy)(x − iy) = zt, and analyse it as a two-monomial equation. See [3, p. 159] for details. The discussion above can be summarised in the following proposition.

Proposition 2.3. For every equation of size H ≤ 12, the set of integer solutions is a finite union of polynomial families, and such a representation can be written down explicitly.

Moving to H = 13 raises two problems. The first problem is that for some equations it becomes difficult to determine whether the set of integer solutions is a finite union of polynomial families. The second problem is that there are equations of size H = 13 whose solution sets are known to be not finite unions of polynomial families.
These are the equations

x^2 − 2y^2 = 1 (25)

and

x^2 − 2y^2 = −1, (26)

which are known as the Pell equation and the negative Pell equation, respectively. These equations are interesting because if a pair (x, y) of positive integers solves either of them, then the ratio x/y is an approximation to √2 with a good trade-off between the quality of the approximation and the size of the denominator. For this reason, these equations have been studied since ancient times. It is easy to check that both equations have infinitely many integer solutions. For example, equation (25) has a solution (x_0, y_0) = (1, 0), and direct substitution shows that if (x_n, y_n) is a solution, then so is

(x_{n+1}, y_{n+1}) = (3x_n + 4y_n, 2x_n + 3y_n).

This gives an infinite sequence of solutions defined by recurrence relations. It is a bit more difficult to show that this sequence gives all solutions with x ≥ 0 and y ≥ 0 (and then all integer solutions can be obtained by changing signs), but this is also well-known, see, for example, Theorem 3.2.1 in [2]. Similarly, all the non-negative solutions to (26) can be obtained starting from (x_0, y_0) = (1, 1) and applying the same recurrence relations. As noted in [32], these solution sets are too sparse to be described by polynomial families.
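The sparseness is easy to observe numerically. The sketch below is our own illustration and assumes the recurrence stated above; it generates the first few non-negative solutions of the Pell equation (25).

    def pell_solutions(count, x=1, y=0):
        """First `count` non-negative solutions of x^2 - 2y^2 = 1, generated
        by the map (x, y) -> (3x + 4y, 2x + 3y) from the trivial solution."""
        out = []
        for _ in range(count):
            assert x * x - 2 * y * y == 1
            out.append((x, y))
            x, y = 3 * x + 4 * y, 2 * x + 3 * y
        return out

    print(pell_solutions(5))  # [(1, 0), (3, 2), (17, 12), (99, 70), (577, 408)]
    # x/y quickly approaches sqrt(2): 577/408 = 1.41421568...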
This raises the fundamental philosophical issue of what it means to "solve" a general polynomial Diophantine equation. What if the solution set is infinite, but cannot be described by polynomial families, or recurrence relations, or in any other structured way? To avoid this issue, we may either consider restricted families of equations for which the meaning of "solve" is well-defined, or consider general equations but not aim to find all solutions. The first route is explored in the next section, in which we study equations in 2 variables.

Describing all solutions: equations in 2 variables

In this section, we will restrict our attention to 2-variable equations, for which many powerful results and techniques are available. We first remark that there is a general algorithm that, given integers a, b, c, d, e, f as an input, solves the general 2-variable quadratic equation

a x^2 + b xy + c y^2 + d x + e y + f = 0. (27)

The algorithm is implemented online at [1]. For an equation (27), it lists all the solutions if there are finitely many of them, and otherwise describes all solutions as a union of polynomial families or in the form of linear recurrence relations. Based on this, we can exclude all quadratic equations (27) from further analysis. Together with the previously excluded families, this eliminates all the equations of size H ≤ 13 with two exceptions:

y^2 = x^3 + 1 (28)

and

y^2 = x^3 − 1. (29)

Equations of the form y^2 = x^3 + k are known as Mordell's equations and are well studied. It is known that there is a finite number of integer solutions for each k ≠ 0, and there is an algorithm that, given k, outputs all the solutions. See [15] for the description of the algorithm and for the explicit list of all solutions in the range |k| ≤ 10,000. More generally, there are known practical algorithms for finding integer solutions to equations in the form

y^2 + a xy + b y = x^3 + c x^2 + d x + e (30)

under some minor conditions on the integer coefficients a, b, c, d, e that guarantee that the solution set is finite. One such algorithm is implemented in the open-source and free-to-use computer algebra system SageMath [37], which can be run online at https://sagecell.sagemath.org/. To solve (28), we may run the command

sage: EllipticCurve([0, 0, 0, 0, 1]).integral_points(),

which means that the only integer solutions to (28) are (x, y) = (−1, 0), (0, ±1), and (2, ±3). In a similar way, we find that the only integer solution to (29) is (x, y) = (1, 0). This finishes the analysis of the equations of size H ≤ 13. Starting with H ≥ 14, we will exclude all the equations in the form (30). After this, the only remaining equation of size H = 14 is

y^2 + x^2 y + x = 0. (32)

This equation is not directly in the form (30), but can be easily reduced to it. Indeed, after multiplication by 4y and adding 1 to both sides, we can rewrite the equation as 4y^3 + (4x^2y^2 + 4xy + 1) = 1, or (2xy + 1)^2 = −4y^3 + 1. With the new variable z = 2xy + 1, this simplifies to z^2 = −4y^3 + 1. Now multiply both sides by 16 to get (4z)^2 = (−4y)^3 + 16, or

Y^2 = X^3 + 16 (33)

with the new variables Y = 4z = 4(2xy + 1) and X = −4y. Note that if x, y are integers then so are X, Y. Now, the command

sage: EllipticCurve([0, 0, 0, 0, 16]).integral_points()

shows that the only integer solutions to (33) are (X, Y) = (0, ±4). For these solutions, y = −X/4 = 0 happens to be an integer, and substituting y = 0 in (32) returns x = 0. Hence, (x, y) = (0, 0) is the only integer solution to (32). The computer algebra system Maple has a command Weierstrassform that helps to transform a broad range of equations to the form (30). In this example, the command Weierstrassform(x + x^2*y + y^2, x, y, X, Y) shows that the equation can be transformed to X^3 − 1/4 + Y^2 = 0 after the substitutions X = y, Y = xy + 1/2. The remaining steps can be easily done by hand. A combination of the Weierstrassform and EllipticCurve commands allows us to solve all 2-variable equations of size H ≤ 15, and many equations of higher size.
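As a sanity check of the reduction of (32) to (33), note that the substitution X = −4y is what makes the identity balance, since multiplying z^2 = −4y^3 + 1 by 16 gives 16z^2 = (−4y)^3 + 16. The small sketch below is our own verification of this change of variables.

    # Check (ours): if (x, y) solves y^2 + x^2*y + x = 0, then
    # (X, Y) = (-4y, 4*(2xy + 1)) satisfies Y^2 = X^3 + 16.
    def lifts_to_curve(x, y):
        assert y * y + x * x * y + x == 0
        X, Y = -4 * y, 4 * (2 * x * y + 1)
        return Y * Y == X ** 3 + 16

    print(lifts_to_curve(0, 0))  # True: (0, 0) maps to (X, Y) = (0, 4)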
The algorithms we have discussed so far are special cases of much more general algorithms applicable to a much broader class of 2-variable equations

P(x, y) = 0. (34)

To introduce them, we need a few definitions. A polynomial P with integer coefficients is called absolutely irreducible if it cannot be written as a product P = P_1 · P_2 of non-constant polynomials, even if we allow complex coefficients. It is known that if P is irreducible over Q but not absolutely irreducible, then all integer solutions to (34) can be determined easily, see e.g. [16]. On the other hand, if P is reducible over Q, then equation (34) reduces to equations of the same form for each of the factors. Hence, we may assume that P in (34) is absolutely irreducible. In this case, the set of all complex solutions to (34) forms a connected surface. The genus g of such a surface is the maximum number of cuttings that can be made along non-intersecting closed simple curves on the surface without making it disconnected. The genus-degree formula

g ≤ (d − 1)(d − 2)/2, (35)

where d is the degree of P, implies that all quadratic polynomials have genus 0, while all cubic polynomials have genus at most 1. Poulakis [26, 27] developed a practical algorithm to solve all 2-variable equations of genus g = 0. The algorithm can decide whether a given equation has a finite or infinite number of solutions, list all solutions in the former case, and describe them in parametric form and/or using recurrence relations in the latter case. Hence, it suffices to consider equations with g ≥ 1. In this case, the fundamental theorem of Siegel [29] states that there is always a finite number of integer solutions. The combination of the Poulakis and Siegel theorems makes the problem of solving (34) completely well-defined. In 1970, Baker [5] developed an effective upper bound for the absolute value of all possible solutions to (34) as an explicit function of the coefficients of P, provided that g = 1. This gives an algorithm to list all the solutions of an arbitrary genus 1 equation. In particular, by the genus-degree formula (35), this result covers all 2-variable cubic equations. While Baker's bounds are enormous and the corresponding algorithm is impractical, a practical method for finding all integer solutions to genus 1 equations was later developed by Stroeker and Tzanakis [31]. Further, Baker [4] developed in 1969 a general method for solving equations in the form

y^2 = P(x), (36)

where P(x) is a polynomial of arbitrary degree that has at least three simple (possibly complex) zeros. As proved in [16], this implies a method to determine all integer solutions to the equation

a(x) y^2 + b(x) y + c(x) = 0, (37)

where a(x), b(x) and c(x) are arbitrary polynomials with integer coefficients. Indeed, if (37) has an integer solution, then b^2(x) − 4a(x)c(x) must be a perfect square, and we can apply Baker's algorithm to determine all such x, see [16] for details. This allows us to focus on the equations of degree at least 4 that are at least cubic in each of the variables. The simplest examples of such equations are, say, y^3 = x^4 + 1 or y^3 = x^4 + x + 1. However, such equations are covered by another theorem of Baker [4], who developed an algorithm for listing all the solutions of the equation

y^m = P(x), (38)

provided that m ≥ 3, and P(x) is a polynomial with integer coefficients of degree at least 3 with at least two simple zeros. In 1984, Brindza [7] showed that the conditions on P can be significantly relaxed. It is easy to see [16] that this result also implies an algorithm for solving equations of the form (39). Baker's and Brindza's methods for solving equations (36) and (38) are impractical even for equations with small coefficients. However, there are practical methods for which we do not have a proof that they work in general, but which seem to work for any individual equation in this form. For example, Bruin and Stoll [8] decided the solvability in rationals of all the equations (36) where P is square-free, has degree at most 6, and has integral coefficients of absolute value at most 3. More recently, Hashimoto and Morrison [20] determined the set of all rational solutions for a large family of the equations in the form (38). Based on this, we eliminate equations of the form (39) from further analysis. After this, the smallest non-eliminated equations are of size H = 26, and the next-smallest are the equations (40) and (41). In 1887, Runge [28] proved that if P belongs to a certain family F of polynomials, described by two conditions, (C1) and (C2), on the coefficients a_{ij} of P written as a sum of monomials a_{ij} x^i y^j (with m and n denoting the degrees of P in the two variables), then the equation P = 0 has at most finitely many integer solutions. In 1992, Walsh [33] developed an effective upper bound for the size of possible solutions, which implies the existence of an algorithm for listing all the solutions. See [34] for a practical implementation of this algorithm. Note that equations (40) and (41) satisfy (C1), because in this case n = m = 3, there is a non-zero coefficient a_{31} = 1, and, for i = 3 and j = 1, we have ni + mj = 3 · 3 + 3 · 1 > 3 · 3 = mn. Walsh's theorem allows us to exclude all equations that satisfy either (C1) or (C2) from further analysis. This excludes all equations of size H ≤ 27, and all equations of size H = 28 with three exceptions: the equation (42) and the equations

y^3 + y = x^4 + x (43)

and

y^3 − y = x^4 − x. (44)

The listed equations do not satisfy (C1) because for them we have m = 4, n = 3, and there is no non-zero coefficient a_{ij} with 3i + 4j > 12. Equality 3i + 4j = 12 holds for the coefficients a_{40} and a_{03}, and the polynomials x^4 ± y^3 are irreducible, hence (C2) also fails. Equation (42) has genus 2. The computer algebra system Magma has a built-in method (called Chabauty) for finding all rational solutions of some genus 2 equations, and the method happens to work for this particular equation, returning that the only rational solution is x = y = 0.
Equations (43) and (44) have genus 3, and this Magma function is not applicable to them. By Siegel's theorem [29], the sets of their integer solutions are finite. A direct search returns the solutions (x, y) = (−1, 0), (0, 0), and (1, 1) for (43) and (x, y) = (0, −1), (0, 0), (0, 1), (1, −1), (1, 0) and (1, 1) for (44), but the problem is to prove that no other solutions exist. We leave this to the reader as open questions.

Open Question 3.1. Find all integer solutions to (43).

Open Question 3.2. Find all integer solutions to (44).

Finding the solution set if it is finite

Now let us return to Diophantine equations in 3 or more variables. In this case the problem of "solving" the equation is, in general, not well posed: if the solution set is infinite but not a finite union of polynomial families and cannot be described by recurrence relations, then what counts as an "acceptable description" of this solution set? For sets with no obvious "structure" this problem is more philosophical than mathematical, and we will not discuss it further. Instead, we will focus on the following problem, which is completely well-defined.

Problem 4.1. Given a polynomial Diophantine equation, decide whether its solution set is finite, and if so, list all the solutions.

Note that proving that the solution set is infinite completely solves Problem 4.1, and no further analysis is required. In Section 2, we have completed the analysis of all equations of size H ≤ 12, so we may move to H = 13. We first remark that there are many equations like

x^2 + y^2 = z^2 + 1 (45)

that have infinitely many integer solutions for some fixed value of one of the variables (for (45), take y = 1 and x = z). For such equations, Problem 4.1 is trivial, hence they may be excluded. After this, the only non-excluded equation of size H ≤ 13 is

x^2 + y^2 = z^2 − 1. (46)

This equation has at most a finite number of integer solutions for any fixed x, y, or z, but still has infinitely many integer solutions. Indeed, for any integer t, we have a solution x = 2t^2, y = 2t, z = 2t^2 + 1. Integer solutions to (45) and (46) are known as "almost Pythagorean triples" and "nearly Pythagorean triples", respectively. See [11] for the complete description of the solution sets of these equations. More generally, for any integer a, the equation

x^2 + z^2 − y^2 = a (47)

has infinitely many integer solutions. Indeed, we can rewrite the equation as x^2 − a = y^2 − z^2 = (y − z)(y + z). For simplicity, assume that y − z = 1, so that x^2 − a = y + z = 2z + 1, from which we can find z = (x^2 − a − 1)/2. Now, if a = 2k − 1 is odd, take x = 2t, z = 2t^2 − k, and y = z + 1 = 2t^2 − k + 1, while if a = 2k is even, take x = 2t + 1, z = 2t^2 + 2t − k, and y = z + 1 = 2t^2 + 2t − k + 1. As a side note, we remark that a complete description of the solution set to (47) with a = 3 had been an open question until Vaserstein [32, Example 15] proved that it is the union of two polynomial families. More generally, if there exist polynomials Q_1(t), . . . , Q_n(t) with integer coefficients, not all constant, such that

P(Q_1(t), . . . , Q_n(t)) ≡ 0, (48)

then the equation P(x_1, . . . , x_n) = 0 has infinitely many integer solutions. In general, deciding the existence of such polynomials is a quite non-trivial problem. However, we can at least verify (48) for polynomials with small degrees and coefficients, and exclude the equations for which we managed to find the corresponding Q_i. This method allowed us to solve all the remaining equations of size H ≤ 16. The first equation of size H = 17 we discuss is

y^2 + z^2 = x^3 − 1. (49)

We will prove that it has infinitely many integer solutions.
Equivalently, there exist infinitely many integers x such that x^3 − 1 is the sum of two squares. The identity

(a^2 + b^2)(c^2 + d^2) = (ac − bd)^2 + (ad + bc)^2

shows that if two integers can be represented as sums of two squares, then so is their product. Because x^3 − 1 = (x − 1)(x^2 + x + 1), the problem reduces, via this identity, to equation (47) with a = −2, which has infinitely many integer solutions. In fact, this leads to the explicit parametric family x = 3 + 8t + 12t^2 + 8t^3 + 4t^4, y = 5 + 20t + 38t^2 + 40t^3 + 24t^4 + 8t^5 and z = −1 − 8t − 28t^2 − 44t^3 − 44t^4 − 24t^5 − 8t^6 of solutions to (49), but the coefficients in these polynomials are too large for the direct search. This is the reason why (49) has not been excluded automatically and required an explicit argument. Another equation of size H = 17 that requires attention is

y(x^2 − y) = z^2 + 1.

We will prove that it has no integer solutions. For this, we will need the well-known fact [16] that all odd prime factors of a sum of squares z^2 + 1 must be congruent to 1 modulo 4. Hence the same is true for the odd prime factors of the positive integers y and x^2 − y. Because the product of any number of such primes is again 1 modulo 4, this implies that if z^2 + 1 is odd, then both y and x^2 − y are congruent to 1 modulo 4, but then x^2 is congruent to 2 modulo 4, a contradiction. If z^2 + 1 is even, its prime factorization contains exactly one factor of 2, which goes to either y or x^2 − y, resulting in x^2 being 3 modulo 4, again a contradiction. This finishes the analysis of equations of size H = 17. For H = 18, we start with the equation similar to (49),

y^2 + z^2 = x^3 − 2.

Unlike x^3 − 1, x^3 − 2 does not factorise, so a different technique is required. We will present a solution given by Max Alekseyev in a comment to a mathoverflow question. Let x = t^2 + 2 for some integer t. Then x^3 − 2 = (t^2 + 2)^3 − 2 = (t^3 + 3t)^2 + (3t^2 + 6). It is left to select y = t^3 + 3t and note that the equation z^2 = 3t^2 + 6 has infinitely many integer solutions. This can be checked directly or using the Gauss theorem [24, p. 57], which states that the general quadratic equation (27) with integer coefficients, such that D = b^2 − 4ac > 0 is not a perfect square and ∆ = 4acf + bde − ae^2 − cd^2 − fb^2 ≠ 0, has either no integer solutions or infinitely many of them. In our case, D = −4 · 3 · (−1) = 12 > 0, ∆ = −72 ≠ 0, and there is an integer solution, say z = 3, t = 1. This finishes the proof. Another equation of size H = 18 is

x^2 + y^2 + xyz = 2. (52)

This equation requires a completely different technique called Vieta jumping. The idea is that if (x, y, z) is any solution to (52), then t = x is a solution to the quadratic equation

t^2 + yzt + (y^2 − 2) = 0,

and this equation has another solution t' = −yz − x = (y^2 − 2)/x, which is also an integer. Hence, any solution (x, y, z) to (52) produces another solution (−yz − x, y, z), and, by a similar argument, one more solution (x, −xz − y, z). The technique suggests to consider a solution with |x| + |y| + |z| minimal and either prove that there is no such solution, or find such a minimal solution and then produce infinitely many other solutions by the transformations above. To apply this technique to a general equation P(x_1, . . . , x_n) = 0, let us denote by S the set of all variables x_i for which the equation can be written as a_i x_i^2 + Q_i x_i + R_i = 0, where |a_i| = 1 and Q_i and R_i are polynomials in the other variables. We can then use any computer algebra system to solve the optimization problem of maximizing t over (x_1, . . . , x_n, t) ∈ R^{n+1}, subject to the constraints that (x_1, . . . , x_n) is a real solution of the equation on which the Vieta jumps do not decrease any |x_i|, x_i ∈ S, and that t ≤ |x_i| for all x_i ∈ S. If the optimal value t* of this optimization problem is infinite, then the method does not work for this equation.
But if t* < ∞, then we have min{|x|, |y|, |z|} ≤ t* < ∞ for any solution (x, y, z) with |x| + |y| + |z| minimal. So, we next check, for each integer t such that |t| ≤ t* and each i = 1, . . . , n, whether the equation has any integer solutions with x_i = t. If there are no such solutions, then the equation has no integer solutions at all. If there are such solutions, we next check whether any of them produces an infinite chain of solutions via Vieta jumping. In the rest of the paper, we will exclude the equations solvable by this method. A more interesting equation that requires a new idea is

y^2 + z^2 = x^3 + 3.

To solve it, recall that a positive integer is the sum of two squares if and only if all its prime factors congruent to 3 modulo 4 enter its prime factorization an even number of times, see e.g. [16]. In particular, this implies that if a and b do not share prime factors congruent to 3 modulo 4, and ab is the sum of two squares, then so are both a and b. Now note that x(x^3 + 3) = (x^2 − 1)^2 + (2x^2 + 3x − 1). Let x be any integer such that 2x^2 + 3x − 1 is a perfect square (there are infinitely many such integers); then x(x^3 + 3) is the sum of two squares. Then x is not divisible by 3 (otherwise 2x^2 + 3x − 1 would be 2 mod 3 and could not be a perfect square), hence x and x^3 + 3 are co-prime. But their product x(x^3 + 3) is the sum of two squares, hence x^3 + 3 is a sum of two squares as well. The same method allows us to solve many other similar equations, such as y^2 + z^2 = x^3 + x + 1, y^2 + z^2 = x^3 − x − 1, y + y^2 + z^2 = x^3 − 1, etc. (The last equation after multiplication by 4 can be rewritten as (2y + 1)^2 + (2z)^2 = 4x^3 − 3, so it suffices to prove that 4x^3 − 3 is the sum of two squares infinitely often, and then the same method applies.) We will exclude any further equations solvable by this method. This finishes the analysis for H ≤ 19. The only new equations of size H = 20 are homogeneous quadratic equations like

x^2 + y^2 = 3z^2.

The only integer solution to this equation is (x, y, z) = (0, 0, 0). Indeed, if there is any other solution then we can divide it by any common factor and obtain a new solution for which (x, y, z) are co-prime. However, the sum of squares x^2 + y^2 is divisible by 3 only if both x and y are divisible by 3. But in this case x^2 + y^2 is divisible by 9, hence z is divisible by 3, a contradiction with the co-primality assumption. The famous Hasse-Minkowski theorem (Hasse principle) states that if a homogeneous quadratic equation has non-zero real solutions but no non-zero integer solutions, then this can always be proved by divisibility analysis modulo some p as above. This allows us to exclude such equations as well. For H = 21, the only equation of a different type is

y(x^2 + 2) = 2z^2 − 1. (54)

So far we have used only information about the prime factors of sums of two squares, while this equation requires the analysis of prime factors of other quadratic polynomials, in this case x^2 + 2 and 2z^2 − 1. As shown in [16], all odd prime factors of x^2 + 2 must be 1 or 3 modulo 8, while all prime factors of 2z^2 − 1 must be 1 or 7 modulo 8. A combination of these facts implies that if (x, y, z) solves (54), then all prime factors of x^2 + 2 are congruent to 1 modulo 8. But then x^2 + 2 must itself be congruent to 1 modulo 8, which is a contradiction. We refer to [16] for how to apply this method in general, but here we will not list further equations solvable in this way. For H = 22, we start to meet equations like

y^2 + yz + z^2 = x^3 − x

that require the analysis of which integers can be represented in the form y^2 + yz + z^2. Let S be the set of all such integers.
It is known that S is also the set of integers representable as 3y^2 + z^2, and also the set of integers n such that every prime p of the form p = 3k + 2 enters the prime factorization of n in an even power. We need to prove that x^3 − x belongs to S for infinitely many x. Choose any odd x such that 2x^2 − 2x − 4 = 3t^2 for some integer t (there are infinitely many such x). Then t is even, say t = 2s, so that x^2 − x − 2 = 6s^2, and (x^3 − x)(x − 2) = (x^2 − x)(x^2 − x − 2) = (6s^2 + 2) · 6s^2 = (2s)^2 · 3(3s^2 + 1) belongs to S, because 3s^2 + 1^2 ∈ S and S is closed under multiplication. Because x^3 − x and (x − 2) do not share any prime factors of the form p = 3k + 2, this implies that x^3 − x ∈ S. The same method allows us to solve other equations of this type, such as y^2 + yz + z^2 = x^3 + x and y^2 + yz + z^2 = x^3 − 2. The next equation we discuss is

y(z^2 − y) = x^3 + 2.

We present a solution given by Mathoverflow user Tomita. By considering the equation as a quadratic in y, we conclude that it has infinitely many integer solutions if and only if the discriminant D = (−z^2)^2 − 4(x^3 + 2) = z^4 − 4x^3 − 8 is a perfect square infinitely often. Now assume that x = −3t^2 − 2t − 2 and z = 3t + 1 for some integer t. Then D = (3t + 1)^4 − 4(−3t^2 − 2t − 2)^3 − 8 = (12t^2 + 8t + 25)(3t^2 + 2t + 1)^2. It is left to remark that 12t^2 + 8t + 25 is a perfect square for infinitely many integers t. The same method solves another equation,

y(xz − y) = x^3 − 2.

Here we need D = x^2z^2 − 4x^3 + 8 to be a perfect square. Select x = 6t^2 + 1 and z = 6t; then D = 4(6t^2 − 1)^2(3t^2 + 1). It is left to note that there are infinitely many integers t such that 3t^2 + 1 is a perfect square. However, we currently do not see how to use these (or other) methods to solve two further similar equations, which are the only remaining open equations of size H ≤ 22. A computer search for polynomials x = Q(t) and z = R(t) with small degrees and coefficients returns no polynomials for which the same method works. Hence, we need either a deeper search for polynomials with large coefficients, or a new idea. We will leave these equations to the readers as open questions. Another nice class of equations we may consider are symmetric equations, ones that are invariant under a cyclic shift of variables. The smallest symmetric equation not directly solvable by the methods described above is also left as an open question. Finally, we may also restrict the number of monomials. It is easy to solve all 2-monomial equations [16], hence the first interesting case is the 3-monomial ones. The smallest 3-monomial equation which seems to be not solvable by the described methods is the equation

x^3 y^2 = z^3 + 2 (59)

of size H = 42. This equation has the obvious solutions (x, y, z) = (1, ±1, −1). Note that an integer n can be represented in the form x^3 y^2 if and only if, for every prime number p dividing n, p^2 also divides n. Such integers are called powerful numbers. So, the question is to find all integers z such that z^3 + 2 is a powerful number.

Open Question 4.5. Find all integer solutions to the equation (59).

Existence of solutions: Hilbert 10th problem

In addition to Problem 4.1, one may consider the following problem with a Yes/No answer.

Problem 5.1. Given a polynomial Diophantine equation, determine whether it has at least one integer solution.

Hilbert's 10th problem asks for a general method for solving Problem 5.1 for all Diophantine equations. Building on the work of Davis, Putnam and Robinson [10], Matiyasevich [21] proved in 1970 that no such general algorithm exists. See the excellent recent surveys of Gasarch [14, 12, 13] for a detailed discussion of the Diophantine equations for which Problem 5.1 can be solved, and the cases in which it is known to be undecidable. For all the equations we left open in the previous sections, Problem 5.1 is trivial because these equations have some obvious small solutions.
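For equations with obvious small solutions, the triviality of Problem 5.1 can be seen by a direct search. Here is a tiny brute-force sketch (our own illustration) that finds the solutions of (59) noted above.

    # Brute-force search (illustration only) for small solutions of
    # equation (59): x^3 * y^2 = z^3 + 2.
    R = 10  # search box half-width; enlarge at will
    hits = [(x, y, z)
            for x in range(-R, R + 1)
            for y in range(-R, R + 1)
            for z in range(-R, R + 1)
            if x ** 3 * y ** 2 == z ** 3 + 2]
    print(hits)  # contains (1, -1, -1) and (1, 1, -1)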
In [16], we found the smallest Diophantine equation for which Problem 5.1 is currently open. This is the equation

y(x^3 − y) = z^3 + 3

of size H = 31. The same question can also be asked for restricted families of equations. Among the 2-variable equations, the smallest open ones are of size H = 32; one of them is

y^3 + xy + x^4 + 4 = 0.

The smallest open symmetric equation is of size H = 39. Finally, the smallest open 3-monomial equation is of size H = 46. In addition, we may consider the smallest open equations with respect to alternative measures of size. As noted in the introduction, a natural measure of the "length" of a monomial M of degree d with coefficient a is l(M) = log_2 |a| + d. Then we can define the length l(P) of a polynomial P consisting of k monomials with coefficients a_1, . . . , a_k and degrees d_1, . . . , d_k, respectively, as

l(P) = (log_2 |a_1| + d_1) + · · · + (log_2 |a_k| + d_k).

Then, instead of ordering the equations by H, we may order them by the length l, or, equivalently, by the integer

L(P) = 2^{l(P)} = |a_1| · 2^{d_1} · . . . · |a_k| · 2^{d_k}.

Note that the formula for L(P) is the same as the formula (3) for H(P), except that the summation is replaced by a product. As established in [16], the shortest equations for which Problem 5.1 is open are the equations y(x^3 − y) = z^4 + 1, 2y^3 + xy + x^4 + 1 = 0 and x^3 y^2 = z^4 + 2 (69) that have length l = 10.

Conclusions

We have ordered all polynomial Diophantine equations by the parameter H defined in (3) and tried to solve the equations in that order. We have considered the following problems, in decreasing level of difficulty.
• Completely solve the equation: list all solutions if there are finitely many, and describe all solutions (for example, as a union of polynomial families) if the solution set is infinite.
• Determine whether the solution set is finite, and if yes, list all the solutions.
• Check whether an equation has any integer solution.
For each of these problems, we have identified the smallest equations for which the problem is open. In some cases, we also identified the smallest open equations in certain families, such as the smallest open 2-variable, symmetric, or 3-monomial equations. The list of the current smallest open equations can also be found on Mathoverflow [17, 18, 19], where the plan is to always keep the list up to date. We suspect that some of the open equations listed in this paper are relatively easy, and are suitable for the first research project of a graduate or even undergraduate student. On the other hand, we are confident that some of our equations are quite difficult and may stimulate the development of new methods and techniques in number theory.
2022-04-16T14:03:42.935Z
2022-04-18T00:00:00.000
{ "year": 2022, "sha1": "0bb0e738bd1e76ca2270d68c3de2986b67c8ae54", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e914e9e8c72fbb720d89117132d701def3027258", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
252093984
pes2o/s2orc
v3-fos-license
Enhanced Multiscale Attention Network for Single Image Dehazing

Under severe weather conditions, the quality of images taken outside is directly affected by floating atmospheric particles. To preserve image quality, haze removal methods play a critical role. The most difficult part of haze removal is removing the haze that spreads over the entire image. Many CNN-based methods have been proposed to remove the haze, and they can be divided into two types. One is to use a multi-scale structure and the other is to stack layers. The former causes image degradation due to the loss of some of the original information in an image, and the latter increases computational complexity due to not reducing the resolution. In addition, a large number of parameters is required to secure the expressive power of the model, which leads to a huge amount of memory. To tackle these problems, we tried to 1) downsample the image while saving parameters and maintaining the quality of the generated image, and 2) consider the information in the entire image to remove the haze. For the first problem, we tried to solve this by using a feature extractor that has been used in other tasks, learning to optimize the output image in low resolution, and preparing kernels with various dilation rates to expand the receptive fields. For the second problem, we use the attention structure to determine which part of the image features should be focused on from the entire feature map. By incorporating such modules, our method achieves better results on both synthetic and real-world images when compared with state-of-the-art methods.

I. INTRODUCTION

In recent years, the demand for high quality images has been increasing, but under severe weather conditions floating atmospheric particles directly degrade the contrast and color of the image. The physical haze model [5], [6], [7] is expressed by the following equation:

I(x) = t(x)J(x) + (1 − t(x))A, (1)

where I, J, t, and A are the input haze image, the clean image, the transmission coefficient, and the ambient light, respectively. Recently, due to the development of hardware, deep learning methods are widely used for single image dehazing, and there are mainly two types of methods. One is to directly output a clean image in an end-to-end manner [8], [9], [10], [11]. The other is to output a haze-free image by estimating the transmission coefficient and ambient light through a network and substituting these values into Eq. (1). Unlike other degradation factors, haze is greatly affected by distance, and it is very difficult to train the network without considering the distance information. However, depth information is required to obtain the correct label of the transmission map, which is difficult to acquire in practice. Moreover, stacking many layers without reducing the resolution leads to memory overload and processing speed reduction. To address these issues, we propose the Enhanced Multi-Scale Attention Network (EMSAN), while reducing the parameters. Furthermore, various modules, such as attention, are added to achieve more global and advanced feature extraction and processing. To expand the receptive field, it is necessary to reduce the size of the image and process it, but information included in the image is lost due to downsampling during encoding. To resolve this trade-off, we propose a Mixed Encoder (ME). ME is an encoder consisting of a pre-trained VGG16 and DenseNet.
By using these networks as the encoder, the ME not only avoids missing information during encoding, but also performs high-level feature extraction and prevents color distortion. Additionally, we propose the Multi Output Branch (MOB) structure to improve the accuracy of low-scale branches in a multi-scale network, which can reduce the deterioration. MOB generates low-resolution output images from the low-scale features through a Refine Block [8]. By taking the loss between those output images and the downsampled ground truth, the feature extraction at lower scales is optimized. In addition, we have incorporated a new block structure, the Enhanced Feature Attention module, inspired by the FA module proposed in FFA-Net [10]. This module uses dilated convolution, pixel attention, and channel attention to consider the entire feature map. The previous vertical pixel and channel attention structures are likely to cause a bottleneck in one of them, so we address this problem by parallelizing them. Processing at the original resolution has a significant impact on the final output. Therefore, instead of using a structure based on convolutional layers, which consider local regions, we develop a structure based on attention so that the information of the entire image can be considered. The attention part is structured around the Multi-head Self Channel Attention (MH-SCA) block. The conversion of the fully connected layer of channel attention to multi-head self attention allows processing according to the semantic features of the image. In summary, the contributions of our work are the Mixed Encoder, the Multi Output Branch, the Enhanced Feature Attention module, and the MH-SCA block described above.

Scaled Dot-Product Attention is computed as

Attention(Q, K, V) = softmax(QK^T / √d) V, (2)

where d denotes the number of dimensions. In Eq. (2), QK^T represents the inner products of the queries Q and keys K, and the value is calculated based on the similarity between the query and key. Self Attention [27] is a method to obtain Q, K, V from the same vector in Scaled Dot-Product Attention. This makes it possible to understand the relationship between words in a sentence or between pixels in an image. When Q, K, V are calculated from the same vector as in Self Attention, the similarity to oneself inevitably increases, making it difficult to see the relationship between the elements. Therefore, our method compresses the head to change the viewing position.

The overall structure of the proposed method is shown in Fig. 2. The feature map generated using the Mixed Encoder (ME) is processed and upsampled in a branch composed of Enhanced Feature Attention (EFA) blocks. The feature map is added to the extracted features one scale higher, with the number of blocks chosen as a trade-off between the number of parameters and the performance improvement. The structure of the Enhanced Feature Attention (EFA) module is an enhancement of the Feature Attention (FA) module proposed in FFA, and is shown in Fig. 3. EFA is divided into two parts: the dilation part and the attention part. The dilation part is structured using dilated convolutions to expand the receptive field of the model. To enable feature extraction over a wide range of feature sizes, dilated convolutions with dilation rates of 1, 2, 4, 8, 16 are connected in parallel. In the attention part, channel attention and pixel attention are processed in parallel, and they are concatenated and propagated to the next block by convolution.
Self Channel Attention (SCA) introduces the concept of Self Attention into conventional Channel Attention. In conventional channel attention, a feature map is processed by Global Average Pooling (GAP), and the resulting vector is passed through fully connected layers to generate a new vector used as weights for each channel. If all channels are considered when determining the attention weights, unrelated channels can be negatively affected, because some channels are correlated with each other while others are not. By applying Self Attention to the feature map vectorized by Global Average Pooling, as shown in Fig. 6, it is possible to consider the relationships between channels. The feature map is then reweighted channel-wise by the resulting attention vector.

The overall loss function is expressed as a weighted combination of L_p, L_s and L_ms, where L_p and L_s represent the perceptual loss and the smooth L1 loss [...]. The perceptual loss is expressed as follows:

L_p = Σ_j (1 / (C_j H_j W_j)) ||φ_j(Ĵ) − φ_j(J)||₂²

where φ_j(Ĵ) represents the VGG16 feature maps from the j-th layer and C_j, H_j, W_j are their dimensions. The smooth L1 loss L_s uses the L1 norm, which is less sensitive to outliers than the MSE loss. In addition, the gradient is made smoother, making it less prone to gradient explosion, and it is expressed as follows [29]:

L_s = (1/N) Σ_i f(Ĵ_i − J_i),  with f(x) = 0.5x² if |x| < 1, and |x| − 0.5 otherwise

where α, β_j and γ_j are the default weighting parameters determined by subjective experiments.

In this section, we conduct an ablation study to demonstrate the effectiveness of each of the proposed components by constructing models with and without these modules. We also compare our method quantitatively and qualitatively with conventional dehazing methods.

In order to investigate the effectiveness of the various architectures proposed in this study, we carried out an ablation study using the NH-HAZE dataset. In addition to the base model (Fig. 7a) and the proposed network (Fig. 2), three other models were developed and tested, as shown in Fig. 7, to better demonstrate the effects of each component. The base model is shown in Fig. 7a. The encoder is a pre-[...] to MH-SCA block. The results of experiments using these models are shown in [...].

We also show visual comparisons with SOTA methods in Fig. 8, Fig. 9 and Fig. 10. Although there is not much difference in appearance on the synthetic image dataset SOTS, as indicated by Fig. 8, the proposed method is able to recover objects in distant areas that are covered by dense haze and similar in color to the haze. Fig. 9 shows the results on the real image dataset Dense-HAZE. As can be seen from Fig. 9, most of the methods are unable to produce high quality images due to the dense haze, but EMSAN is the closest to the ground truth in terms of object structure. This shows that ME successfully extracts the background features. [...] Multi-Head Self Channel Attention block. By devising this new network structure, the proposed EMSAN is able to achieve higher quantitative and qualitative evaluations than previous methods. Future work is to explore more lightweight dehazing networks while further improving performance.
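As a concrete reading of the Self Channel Attention block described in this paper, the sketch below pools the feature map to one value per channel, treats every channel as a token, replaces the fully connected layers of conventional channel attention with multi-head self attention, and rescales the channels with the resulting weights. This is an interpretation for illustration only; the token embedding width, head count and sigmoid gating are assumptions.

```python
# Sketch of Self Channel Attention: GAP vectorizes the feature map, a
# multi-head self-attention layer models relationships between channels,
# and the output reweights each channel of the original feature map.
import torch
import torch.nn as nn

class SelfChannelAttention(nn.Module):
    def __init__(self, channels, dim=16, heads=4):
        super().__init__()
        self.embed = nn.Linear(1, dim)        # one pooled value -> token
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, 1)         # token -> per-channel weight

    def forward(self, x):                     # x: (B, C, H, W)
        v = x.mean(dim=(2, 3)).unsqueeze(-1)  # GAP -> (B, C, 1)
        tokens = self.embed(v)                # (B, C, dim): channels as tokens
        tokens, _ = self.attn(tokens, tokens, tokens)
        w = torch.sigmoid(self.proj(tokens))  # (B, C, 1)
        return x * w.unsqueeze(-1)            # channel-wise rescaling

x = torch.randn(2, 64, 32, 32)
print(SelfChannelAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```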
Metabolic reprograming of LPS-stimulated human lung macrophages involves tryptophan metabolism and the aspartate-arginosuccinate shunt

Lung macrophages (LM) are in the first line of defense against inhaled pathogens and can undergo phenotypic polarization to the proinflammatory M1 state after stimulation with Toll-like receptor agonists. The objective of the present work was to characterize the metabolic alterations occurring during experimental M1 LM polarization. Human LM were obtained from resected lungs and cultured for 24 hrs in medium alone or with 10 ng.mL-1 lipopolysaccharide. Cells and culture supernatants were subjected to extraction for metabolomic analysis with high-resolution LC-MS (HILIC and reverse phase (RP) chromatography, in both negative and positive ionization modes) and GC-MS. The data were analyzed with R and the Workflow4Metabolomics and MetaboAnalyst online infrastructures. A total of 8,741 and 4,356 features were detected in the intracellular and extracellular content, respectively, after the filtering steps. Pathway analysis showed involvement of arachidonic acid metabolism, tryptophan metabolism and the Krebs cycle in the response of LM to LPS, which was confirmed by specific quantitation of selected compounds. This refined analysis highlighted a regulation of the kynurenine pathway as well as the serotonin biosynthesis pathway, and an involvement of the aspartate-arginosuccinate shunt in malate production. Macrophage M1 polarization is accompanied by changes in the cell metabolome, with the differential expression of metabolites involved in the promotion and regulation of inflammation and antimicrobial activity. The analysis of this macrophage immunometabolome may be of interest for the understanding of the pathophysiology of lung inflammatory diseases.

Introduction
Lung macrophages are in the first line of defense against inhaled pathogens. They are major effectors of the immune response, as they express membrane receptors, including Toll-like receptors (TLRs), able to recognize conserved microbial ligands [1]. Once activated, TLRs induce the production of a pattern of cytokines, chemokines and mediators, such as metabolites from the arachidonic acid pathway, involved in the inflammatory response and characteristic of macrophage engagement towards an M1 polarization state [1,2]. Lipopolysaccharide (LPS) is the archetypal TLR ligand for the induction of M1 macrophage polarization [1]. Like allergens (e.g. ragweed pollen, house dust extract, and cat dander) or air pollutants, LPS binds to and activates TLR4, the first subtype among the TLR family to be identified in humans. Beyond this role in cell signaling, TLRs also play a role in the primary function of macrophages, consisting of the phagocytosis and killing of pathogens [3]. For phagocytosis, the production of hydrolytic enzymes and reactive oxygen species (ROS) is required, the latter also resulting from oxidase or lipoxygenase enzyme metabolism [4]. The enzymatic machinery is therefore modulated by the presence of microorganisms, and more generally by the external environment; consequently, the resulting production of metabolites varies and is likely to adapt to the different stimuli to which macrophages can be exposed [5]. Cell metabolism and macrophage functions are tightly linked, as already shown in murine bone-marrow-derived macrophages.
Hence, the presence of PGE2, one of the main arachidonic acid derivatives, is necessary for the LPS-induced production of the precursor of the pro-inflammatory cytokine IL-1β [6]; LPS also induces increased glycolysis and succinate production, which are necessary for the increase in IL-1β expression and also drive the production of ROS [7,8]; finally, the TLR4 agonist MPLA induces mouse resistance to systemic infection with Staphylococcus aureus and Candida albicans by reprogramming macrophage metabolism, with increased glycolysis and oxidative phosphorylation and rewiring of malate/NADH shuttling [9]. Therefore, the understanding of macrophage metabolic reprogramming has become a key focus in fields such as infection [10], inflammation [11], cancer [12] and immune disorders [13].

With respect to lung pathogenesis, macrophages also play a role in infections and in inflammatory diseases such as asthma and chronic obstructive pulmonary disease (COPD), where they can undergo phenotypic differentiation [14-17]. In these cases, TLRs are of prime importance for their role in the recognition of pathogens during infections and in microbe-induced acute exacerbations of asthma and COPD [18,19]. However, the only reports of macrophage metabolic reprograming in lung diseases until now were in a mouse model of fibrosis [20], in alveolar macrophages from LPS-treated mice [21] and in smokers' and/or Mycobacterium tuberculosis-infected alveolar macrophages [22]. Since the metabolic changes associated with the stimulation of TLRs in human lung macrophages had not yet been described, the objective of the present study was to perform an extensive intra- and extracellular metabolomic characterization of LPS-induced alterations in human lung macrophages using a combined untargeted liquid chromatography high-resolution mass spectrometry and gas chromatography mass spectrometry approach [23].

Patient population
The use of resected lung tissue was approved by the regional investigational review board (Comité de Protection des Personnes Ile de France VIII, Boulogne-Billancourt, France) and the patients undergoing surgical lung resection gave their written informed consent. Lung tissue was obtained from 10 patients with the following demographic characteristics: median [...]; current tobacco smokers/ex-tobacco smokers/pipe smoker: 4/6/1; pack-years: 30 [...]; and % FEV1 predicted: 109% [81-121]. One patient was suffering from COPD (as defined by a post-bronchodilator FEV1/FVC ratio <0.7; GOLD 2 stage) and none had undergone chemotherapy or radiotherapy prior to surgical lung resection.

Cell culture
Human lung macrophages were isolated and cultured as previously described [1]. Two million cells were cultured for 24 hrs in medium alone or with 10 ng.mL-1 LPS. Supernatants were collected and centrifuged at 2000 rpm for 5 min at 4°C, then frozen at -80°C. Adherent macrophages were washed twice with sterile cold PBS. For LC-MS analysis, the macrophages were collected with 500 μL of methanol/water (1:1); for GC-MS analysis, 1 mL of acetonitrile/isopropanol/water (3:3:2) was used. The plates were left at -80°C for 20 min, then scraped with a pipette tip to recover the cells, and the samples were kept at -80°C until analysis.

Metabolomic analysis
Liquid chromatography-high resolution mass spectrometry. Sample preparation was adapted from the method described by Bligh and Dyer [24]. For supernatants, 500 μL of methanol/water (1:1) were added to 100 μL of culture medium. Then, for both supernatants and cells, 500 μL of chloroform were added.
The mixture was sonicated for 10 min, stirred for 10 min and centrifuged at 10,000 rpm for 5 min to achieve a biphasic separation, with the upper phase containing polar compounds and the lower fraction nonpolar compounds. 2 × 200 μL of each phase were collected in Eppendorf tubes and dried under vacuum. Each of the 4 dried extracts was reconstituted with 75 μL of one of the following mixtures: formate/acetonitrile (20:80 v/v) and carbonate/acetonitrile (20:80 v/v) for extracts from the upper phase, subsequently analysed with hydrophilic interaction liquid chromatography (HILIC); formate/acetonitrile (80:20 v/v) and carbonate/acetonitrile (80:20 v/v) for extracts from the lower phase, subsequently analysed with reverse phase (RP) chromatography. Quality control samples were prepared by pooling 5 μL of each extracted sample.

LC-HRMS analysis was adapted from a previously described method [23] and each sample was injected four times, with HILIC and RP chromatography, both in the negative and positive ionization modes. Chromatography was performed with an UltiMate 3000 Quaternary Rapid Separation Pump (Thermo Scientific Dionex, Les Ulis, France), and the separation was performed under gradient elution using 4 different mobile phase systems consisting of mixtures of acetonitrile with either solvent A (10 mM pH 3.8 ammonium formate) for the positive ionization mode or solvent B (20 mM pH 9.2 ammonium carbonate) for the negative ionization mode. A SeQuant 4.6 mm × 150 mm, 5 μm i.d. ZIC-pHILIC column (AIT France, Houilles, France) was used for HILIC chromatography. For the positive ionization mode, the gradient started at 5% solvent A until 3 min, increased to reach 95% at 25 min, was maintained for 5 more min, then returned to 5% and equilibrated for 10 min. For the negative ionization mode, the gradient started at 8% solvent B until 3 min, increased to reach 92% at 25 min, was maintained for 5 more min, then returned to 8% and equilibrated for 10 min. The flow rate was 0.3 mL/min, the curve gradient parameter was set at 5, the oven temperature was 40°C and the total run time was 40 min. A Hypersil Gold C18 column, 2.1 mm × 100 mm, 1.9 μm i.d. (Thermo Scientific Dionex), was used for RP chromatography. For the positive ionization mode, the gradient started at 90% solvent A until 3 min, decreased to reach 5% at 25 min, was maintained for 8 more min, then returned to 90% A for 5 min. For the negative ionization mode, the gradient started at 90% solvent B until 3 min, decreased to reach 5% at 25 min, was maintained for 8 more min, then returned to 90% B for 5 min. The flow rate was 0.3 mL/min, the oven temperature 40°C and the total run time 40 min.

Mass spectrometry was performed with a hybrid quadrupole-orbitrap Q-Exactive mass spectrometer (Thermofisher) equipped with a heated electrospray ionization (ESI) source operating in positive (ESI+) or negative (ESI-) ionization modes. The ESI and acquisition parameters for the different modes are shown in S1 Table. The auxiliary gas heater temperature was set at 100°C, the resolution at 70,000 and the AGC target at 10^6. The acquisition scan-range was split into 3 segments as previously described [23]: m/z 60-300; 300-600; 600-900. Xcalibur software (Thermofisher) was used for system control and data acquisition.

Gas chromatography-mass spectrometry. Samples were prepared according to the method described by Fiehn [25]. For supernatants, 1 mL of acetonitrile/isopropanol/water (3:3:2) was added to 30 μL of medium.
The supernatant and cell samples were then vortexed for 10 s, shaken for 5 min and centrifuged at 14,000 rpm for 2 min. 450 μL were recovered and dried under vacuum. The residue was reconstituted with 450 μL of acetonitrile/water (1:1) degassed with nitrogen and centrifuged for 2 min at 14,000 rpm. The residue was then reconstituted with 10 μL of methoxyamine (MEOX) (20 mg/mL in pyridine), vortexed for 30 s, and kept at 30°C for 90 min. Finally, 75 μL of N-methyl-N-trimethylsilyl-trifluoroacetamide (MSTFA) spiked with a mixture of FAME internal standards (10 μL of FAMEs for 1 mL of MSTFA) were added and the extract was kept at 37°C for 30 min. After derivatization, the samples were immediately transferred into injection vials.

Gas chromatography was performed on a Trace 1300 system (Thermofisher) with an Uptibond 5 Premium column, 30 m × 0.25 mm × 0.25 μm (Interchim, Montluçon, France), and helium as the carrier gas at a flow rate of 1 mL/min. The injection volume was 1 μL. The temperature ramp was as follows: 60°C for 1 min, then an increase up to 325°C at 10°C/min. This temperature was maintained for 10 min and the total acquisition time was 37.5 min. Detection was performed with a TSQ8000 mass spectrometer (Thermofisher). The electron impact (EI) ion source was held at 230°C and an energy of 70 eV was used. Acquisition was performed in full scan mode (m/z 50-600) with an acquisition rate of 20 spectra/s. Xcalibur software (Thermofisher) was used for system control and data acquisition.

Data processing. LC-MS raw files were first converted to mzML and centroided with msConvert [26], then processed using the IPO [27] and XCMS (v1.50.1) [28] packages running under R. The CentWave algorithm [29] was used for automatic peak picking, with parameters optimized with IPO. For GC-MS, the raw files were converted to CDF format. The data were analyzed with Workflow4Metabolomics using the metaMS, XCMS and CAMERA packages [28,30,31], which extract the peaks, align them, correct the analytical drift and perform annotation of adducts and isotopes. Features detected in biological samples with a mean intensity less than 3-fold the intensity observed in blank samples, or features detected in blank samples only, were filtered out to limit the number of false positive peaks. Features with a CV greater than 30% in the quality control samples were also filtered out. Batch correction, quality control checks and statistics were performed with Workflow4Metabolomics [32,33]. Statistical analysis was also performed with Workflow4Metabolomics and MetaboAnalyst 4.0 using uni- or multivariate analysis and pathway analysis based on Mummichog [34]. Levels of metabolite annotation were defined as follows: level 0, the strongest level of annotation, includes stereochemistry discrimination; level 1 requires the use of a chemical standard and at least two orthogonal techniques (e.g., accurate mass and retention time); level 2 is confirmation by a class-specific standard; level 3 by one parameter (e.g., accurate mass); level 4 is the feature level without annotation [35]. HMDB, the Golm Metabolome Database and NIST [36,37] were used for database queries.

Untargeted metabolomic analysis
For the intracellular metabolome, the combination of the different LC-MS and GC-MS methods allowed the detection of 42,573 features in control and LPS-treated samples (Table 1). HILIC was the most contributive, followed by RP chromatography and GC-MS, and a total of 8,741 features remained after the filtering steps.
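The two feature filters described in the data-processing paragraph (the 3-fold blank ratio and the 30% QC coefficient of variation) reduce to a few lines of table arithmetic. The sketch below is a schematic illustration with invented column names and toy intensities, not the actual Workflow4Metabolomics pipeline.

```python
# Keep a feature only if (i) its mean intensity in biological samples is
# at least 3-fold its mean intensity in blanks and (ii) its coefficient
# of variation (CV) across quality-control injections is <= 30%.
import pandas as pd

df = pd.DataFrame({
    "feat": ["F1", "F2", "F3"],
    "blank1": [300, 10, 0], "blank2": [320, 12, 0],
    "s1": [500, 20, 900], "s2": [550, 25, 950],
    "qc1": [510, 22, 920], "qc2": [530, 23, 900],
}).set_index("feat")

blank = df[["blank1", "blank2"]].mean(axis=1)   # mean blank intensity
bio = df[["s1", "s2"]].mean(axis=1)             # mean biological intensity
qc = df[["qc1", "qc2"]]
cv = qc.std(axis=1) / qc.mean(axis=1)           # QC coefficient of variation

keep = (bio >= 3 * blank) & (cv <= 0.30)
print(df[keep])  # only F3 survives: F1 and F2 fail the 3-fold blank filter
```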
For the extracellular metabolome, 45,258 features were first detected, with 4,356 remaining after filtering (Table 2). RP chromatography allowed the detection of the highest number of features, followed by HILIC and GC-MS.

Multivariate analysis
Models were built with Partial Least-Squares Discriminant Analysis (PLS-DA) for each condition and are shown in Fig 1A and 1B for the intracellular and extracellular metabolomes, respectively. The models for the intracellular metabolome allowed the distinction between control and treated cells taking into account 2 components for the four analytical conditions. Each of these 2 components explained between 7.1 and 26.1% of the variability. On the other hand, with respect to the extracellular metabolome, 2 components were enough to discriminate control and treated cells for HILIC in the positive ionization mode, but 3 components were suggested by cross-validation to discriminate sample categories in the other analytical conditions. Although all models fitted the data well (R2Y > 0.9), their prediction capacity was poor, with Q2Y < 0.3, except for the analysis of the extracellular content with HILIC in the positive ionization mode (Q2Y = 0.5). The features contributing the most to each model (assessed with the Variable Importance in Projection (VIP) score) were then used for hierarchical clustering and for pathway analysis. Hierarchical clustering for the intracellular and extracellular metabolomes is depicted in Fig 2A and 2B, showing that an excellent categorization was achieved between control and LPS-treated macrophages for the majority of the analytical conditions. Two categories of features, i.e. up- or down-regulated in the control or LPS groups, are clearly distinguishable for the analysis of the intracellular content with HILIC in the positive and negative ionization modes.
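For readers who want to reproduce this kind of analysis, a PLS-DA model with VIP scores can be sketched as follows. scikit-learn has no dedicated PLS-DA estimator, so the usual workaround of fitting a PLS regression against a dummy-coded class label is shown; the toy data, component number and library choice are assumptions, not the Workflow4Metabolomics implementation used in this study.

```python
# PLS-DA via PLS regression on a 0/1 class label, plus the standard
# Variable Importance in Projection (VIP) statistic used to rank features.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p, _ = w.shape
    ssy = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)  # Y variance per component
    w2 = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (w2 @ ssy) / ssy.sum())

rng = np.random.default_rng(0)
y = np.repeat([0.0, 1.0], 10)        # control vs LPS, dummy coded
X = rng.normal(size=(20, 50))        # 20 samples x 50 features
X[:, 0] += 2 * y                     # make feature 0 discriminant

pls = PLSRegression(n_components=2).fit(X, y)
print(vip_scores(pls).argmax())      # feature 0 is expected to rank first
```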
Pathway analysis
Pathway analysis for the LC-MS analysis of the intracellular content revealed differential expression of arachidonic acid metabolites in LPS-treated macrophages (Fig 3). Furthermore, GC-MS analysis suggested modulations of metabolites from tryptophan metabolism and the Krebs cycle. Metabolites identified by the statistical analysis as contributing most to the different models are shown in Table 3, with most features corresponding to arachidonic acid metabolites. Since the production of arachidonic acid derivatives after stimulation of lung macrophages with LPS is already well documented [38-42], we then focused on the exploration of tryptophan metabolism and the Krebs cycle pathway.

Targeted metabolic profiling
Specific targeted LC-MS methods were developed for the quantitative analysis of selected compounds from tryptophan metabolism and the Krebs cycle, and the results are depicted in Fig 4. For tryptophan metabolism, an LPS-induced decrease in the tryptophan concentration and an increase in the concentrations of compounds from the kynurenine pathway (kynurenine and quinolinic acid) were observed in both the intracellular and extracellular compartments (the quinolinic acid increase was statistically significant in the extracellular content only). Accordingly, the concentration of hydroxytryptophan, a metabolite of the other tryptophan degradation pathway leading to serotonin synthesis, was also increased in the intracellular content. For Krebs cycle metabolites, an increase in the concentration of malate was observed (statistically significant for the intracellular content only), whereas succinate and fumarate were found unaltered. In a similar model with murine bone-marrow-derived macrophages [43], the increased malate production was explained by the induction of the arginosuccinate shunt, involving arginosuccinate, fumarate and malate. In line with this, intracellular arginosuccinate was measured in human lung macrophages and a 336% increase in production was observed (S1 Fig).

Discussion
This study reports for the first time the metabolomic analysis of human primary lung macrophages, to assess the effect of LPS, the archetypal TLR agonist, on the cell metabolome. The methodological approach combined high-resolution LC-MS, with a split scan-range acquisition method providing improved feature detection when compared with classical methods [23], and GC-MS, to provide the most extensive metabolome coverage. Metabolomics studies were previously performed with macrophages to assess the effects of drugs, plasticizers, pollutants and nanomaterials in mouse macrophage cell lines [44-47], to study the interactome in IFN-γ- and/or LPS-primed murine macrophages [48], or to study the cellular metabolism in HIV-infected human monocyte-derived macrophages [49]. Since the availability of human primary lung macrophages is limited and prolonged culture cannot be readily performed, alternative cellular models are commonly used as surrogates to study macrophage biology. The most frequently used surrogates are blood monocytes, monocyte-derived macrophages (MDMs) or phorbol ester-differentiated cell lines (e.g. U937, THP-1, HL60). However, important phenotypic differences have already been reported between such surrogates and primary macrophages. For example, distinct patterns and expression levels of G protein-coupled receptors (including cytokine receptors) and ion channels were found between monocytes, cell lines and human alveolar macrophages [50]; peripheral blood monocytes exhibit no suppression by LPS of 5-lipoxygenase metabolism and no induction of iNOS compared with alveolar macrophages [41]; LPS-stimulated human alveolar macrophages produce more PGE2 than do blood monocytes [42]; and LPS-induced cytokine production is also higher in primary lung macrophages than in MDMs [51], highlighting the interest of confirming with primary cells the results obtained with available surrogates.

In addition to molecules from arachidonic acid metabolism, pathway analysis revealed a role for tryptophan metabolism and the Krebs cycle during macrophage M1 polarisation. The role of eicosanoids in macrophage inflammatory regulation and its resolution is already well established, with increased LPS-induced prostaglandin and leukotriene production [38-40,52], the inhibitory role of PGE2 and the regulating role of 15-lipoxygenases in cytokine production [2,53-55]. Hydroxytryptophan, kynurenine and quinolinic acid are downstream metabolites of tryptophan metabolism and were quantified in the present study. Hydroxytryptophan is the precursor of serotonin, whereas kynurenine and quinolinic acid belong to the kynurenine pathway. Hence, the depletion of tryptophan is the consequence both of the LPS-induced increase in indoleamine-2,3-dioxygenase (IDO), leading to the formation of kynurenine metabolites, as described in human pulmonary macrophages [56], and of the increase in tryptophan hydroxylase (TPH) activity, which also occurs in macrophages under inflammatory conditions [57].
The effectors of the kynurenine pathway, expressed in epithelial cells and alveolar macrophages, were previously shown to be critical regulators of acute pulmonary inflammation in a murine model of lung transplantation [58]. In infectious disease models, IDO expression is increased by respiratory syncytial virus in human monocyte-derived dendritic cells and by IFN-γ and HIV in human monocyte-derived macrophages [59,60], and quinolinate is increased by HIV in monocyte-derived macrophages [49]. In the clinic, increased IDO and TPH activities were strongly associated with 30-day death and/or intensive care unit admission and/or 18-month mortality in patients with COPD exacerbations [61]; an increase in serum kynurenine and a decrease in tryptophan were observed in patients with pneumonia, with correlations between IDO activity / kynurenine levels and severity or mortality [62], while kynurenine levels were associated with 28-day mortality in critically ill adult patients [63].

Evidence showing the involvement of tryptophan in the regulation of macrophage polarization is also available. For instance, the role of the kynurenine pathway in inducing changes in macrophage phenotypes was previously investigated in the murine macrophage cell line RAW 264.7 and the murine fibrosarcoma cell line MC57, showing a role for IDO in cell adhesion, metalloproteinase expression and in the expression and activity of the cyclooxygenase enzymes [64]. Culture of RAW264.7 macrophages in a tryptophan-deficient medium induced a 54% reduction in cell proliferation compared with cells cultured in RPMI, which was restored by tryptophan supplementation. In these cells, tryptophan deficiency was also responsible for an increase in cell death and apoptosis, which was also reversible by tryptophan supplementation [65]. With respect to the production of signaling molecules, macrophages from indoleamine 2,3-dioxygenase 2 knockout mice produced higher amounts of IL-1α, IL-6, IL-10, MCP-1, MIP-1α, MIP-1β and RANTES after LPS stimulation than macrophages from wild-type mice. In line with this, preincubation of IFN-γ-primed induced pluripotent stem cell-derived human macrophages with INCB024360, an IDO1 inhibitor, significantly impaired bacterial killing, a key feature of M1-polarized macrophages [66]. RAW264.7 macrophage cells and primary murine alveolar macrophages also showed increased IL-6, TNF-α, IFN-β and/or IL-1β production in the presence of 1-methyltryptophan, another pharmacological competitive inhibitor of IDO, following influenza infection [67]. On the other hand, overexpression of the IDO enzyme in the murine macrophage cell line RAW264.7 suppressed IL-6, G-CSF, MCP-1 and MIP-1β production [68], while treatment of these cells with the metabolites indole-3-acetate and tryptamine significantly attenuated the expression of TNF-α, IL-1β and MCP-1 [69]. The production of molecules involved in microbe killing was also affected, since LPS- and IFN-γ-stimulated RAW264.7 cells cultured in tryptophan-deficient medium demonstrated a significant reduction in iNOS expression compared with control cells, while cells cultured in the presence of tryptophan expressed significantly higher amounts of iNOS, leading to markedly increased amounts of released NO [65]. Our results also strongly support a major role for tryptophan metabolism in lung macrophages during inflammation, suggesting the interest of a pharmacological approach to modulate this pathway in inflammatory lung diseases.
A large increase in the production of malate, one metabolite of the Krebs cycle, was then observed in response to LPS, as previously reported in bone-marrow-derived mouse macrophages [43,48], whereas succinate and fumarate levels were unchanged. Other groups reported either very modest or larger increases in succinate after stimulation of murine bone-marrow-derived macrophages with LPS, with key roles for succinate in the induction of IL-1β through HIF-1α [7,43]. In their M1 polarisation model of murine bone-marrow-derived macrophages, Jha and colleagues reported that malate accumulation was related to the induction of the arginosuccinate shunt, a pathway connecting the Krebs cycle with the urea cycle, involving aspartate, arginosuccinate, malate and fumarate. In their model, inhibiting this pathway with aminooxyacetic acid induced a concentration-dependent inhibition of the production of nitric oxide and IL-6, important effectors of antimicrobial activity and the inflammatory reaction [43]. In line with these findings, we observed a 336% increase in the production of arginosuccinate in LPS-stimulated human lung macrophages, also suggesting a potential link between metabolism and immune functions in primary human cells. All the changes in metabolite concentrations measured during the targeted analysis were consistent between the intracellular and extracellular compartments, suggesting that intracellularly produced metabolites may play an autocrine/paracrine role by being released into the cell environment in response to an inflammatory stimulus.

The main limitation of the study relates to the patient population and sample size, which was too limited to assess the effect of covariates such as age, smoking status or COPD. These covariates are known to affect the response of lung macrophages to TLR agonists; however, the reported changes vary greatly from one study to another. For example, some studies report increases in the LPS-induced cytokine production of alveolar macrophages from smoking or COPD patients [70,71], whereas opposite findings were reported by other groups [72,73], and in other cases current smoking status had no effect on the production of cytokines in response to LPS [74,75]. Altogether, these findings support the concept whereby the LPS-induced production of cytokines by lung macrophages obtained from COPD patients, smokers and healthy adults is similar, although this cannot be directly extrapolated to metabolomic analysis.

In conclusion, we described the use of an extensive combined GC- and LC-MS strategy for the metabolomic profiling of LPS-induced M1 human lung macrophage polarization. The non-targeted analysis revealed the involvement of the arachidonic acid pathway, tryptophan metabolism and the Krebs cycle during M1 polarisation. Targeted analysis of selected compounds confirmed these findings, allowed the quantification of the identified metabolites and clarified a role for the aspartate-arginosuccinate shunt. Knowing the role of macrophages in inflammatory lung diseases, further detailed investigation of the alterations occurring in these pathways in cells from patients with asthma or COPD should be of particular interest.

Supporting information
S1 Table. Mass spectrometry parameters for HILIC and reverse phase
Sphingosine-1-phosphate derived from PRP-Exos promotes angiogenesis in diabetic wound healing via the S1PR1/AKT/FN1 signalling pathway

Abstract
Background: Sphingosine-1-phosphate (S1P), a key regulator of vascular homeostasis and angiogenesis, is enriched in exosomes derived from platelet-rich plasma (PRP-Exos). However, the potential role of PRP-Exos-S1P in diabetic wound healing remains unclear. In this study, we investigated the underlying mechanism of PRP-Exos-S1P in diabetic angiogenesis and wound repair.
Methods: Exosomes were isolated from PRP by ultracentrifugation and analysed by transmission electron microscopy, nanoparticle tracking analysis and western blotting. The concentration of S1P derived from PRP-Exos was measured by enzyme-linked immunosorbent assay. The expression levels of S1P receptors 1-3 (S1PR1-3) in diabetic skin were analysed by Q-PCR. Bioinformatics analysis and proteomic sequencing were conducted to explore the possible signalling pathway mediated by PRP-Exos-S1P. A diabetic mouse model was used to evaluate the effect of PRP-Exos on wound healing. Immunofluorescence for cluster of differentiation 31 (CD31) was used to assess angiogenesis in the diabetic wound model.
Results: In vitro, PRP-Exos significantly promoted cell proliferation, migration and tube formation. Furthermore, PRP-Exos accelerated the process of diabetic angiogenesis and wound closure in vivo. S1P derived from PRP-Exos was present at a high level, and S1PR1 expression was significantly elevated compared with S1PR2 and S1PR3 in the skin of diabetic patients and animals. However, cell migration and tube formation were not promoted by PRP-Exos-S1P in human umbilical vein endothelial cells treated with shS1PR1. In the diabetic mouse model, inhibition of S1PR1 expression at wound sites decreased the formation of new blood vessels and delayed the process of wound closure. Bioinformatics analysis and proteomics indicated that fibronectin 1 (FN1) is closely related to S1PR1 due to their colocalization in the endothelial cells of human skin. Further study supported that FN1 plays an important role in the PRP-Exos-S1P-mediated S1PR1/protein kinase B signalling pathway.
Conclusions: PRP-Exos-S1P promotes angiogenesis in diabetic wound healing via the S1PR1/protein kinase B/FN1 signalling pathway. Our findings provide a preliminary theoretical foundation for the treatment of diabetic foot ulcers using PRP-Exos in the future.

Background
The impaired healing capacity of diabetic wounds leads to chronic wounds, amputation and even death, resulting in severe public and family burdens [1,2]. Therefore, prompt promotion of diabetic wound healing is vital to reduce amputation and mortality events [2]. Wound healing requires complicated and integrated biological processes involving cell proliferation and migration, collagen deposition, extracellular matrix (ECM) remodelling and vascularization [3,4]. Insufficient oxygen supply and blood flow occur in the lower extremities of diabetic patients, contributing to microcirculation disorders, angiogenesis dysfunction and tissue formation failure [5-7]. Thus, the regeneration and remodelling of blood vessels are the key to promoting diabetic wound healing.
Both the International Working Group on the Diabetic Foot (IWGDF) and our previous research indicate that platelet-rich plasma (PRP) therapy is a promising treatment for diabetic foot ulcers due to its neovascularization, anti-infection and anti-inflammatory effects [8-11]. However, the application of PRP therapy for patients with diabetic foot is limited by the scarcity of autologous platelets and the possible immune-related adverse reactions to allogeneic platelets [12]. Recently, studies involving exosomes have attracted much attention in the regeneration field. The efficacy and mechanism of exosomes derived from PRP (PRP-Exos) in diabetic wound healing are therefore worth exploring because of their low immunogenicity and good stability. As an ideal delivery system, exosomes bring the cargoes packaged by the phospholipid bilayer to targeted sites and protect the contents from degradation and turnover [8,13]. Moreover, accumulating evidence suggests that PRP-Exos, as the condensed product of PRP, carry similar functional substances but at a higher concentration [14,15]. Based on our previous study, the isolation and identification of PRP-Exos have been successfully achieved [16]. It is well known that exosomes contain abundant proteins, microRNAs (miRNAs) and growth factors, but the contribution of phospholipids is less well defined. Recently, some researchers demonstrated that sphingosine-1-phosphate (S1P) can be separated and identified from PRP-Exos through rapid-resolution liquid chromatography and tandem mass spectrometry, and S1P was considered a key regulator maintaining the integrity of endothelial cells [17]. To investigate the role of S1P in PRP-Exos, we tested the level of S1P by enzyme-linked immunosorbent assay (ELISA) in our preliminary study. To date, some evidence suggests that vascular endothelial growth factor (VEGF), platelet-derived growth factor-BB and transforming growth factor-β released from PRP are more enriched in PRP-Exos than in the supernatants of activated PRP (PRP-AS) after ultracentrifugation [18-20]. Interestingly, we found that S1P was also present at a significantly higher level in PRP-Exos than in PRP-AS.

S1P is a bioactive lipid synthesized by sphingosine kinases 1/2 and is thought to be an intracellular second messenger that transmits small-molecule signals to targeted sites by binding to G protein-coupled receptors on the cell surface, the S1P receptors (S1PR1-5) [21-23]. Although S1P is considered a vital phospholipid in the regulation of vascularization and the regeneration of blood vessels [23,24], the role and possible mechanism of S1P derived from PRP-Exos (PRP-Exos-S1P) in diabetic wound healing remain unknown. It is well known that S1PR4 and S1PR5 are expressed mainly in the nervous and immune systems, whereas S1PR1-3 are highly expressed in skin tissues. Therefore, the binding of S1P to S1PR1-3 is essential for the regulation of angiogenesis and homeostasis. In our preliminary study, we found that S1PR1 was significantly elevated compared with S1PR2 and S1PR3 in the skin of diabetic lower extremities.
Fibronectin 1 (FN1) is an abundant protein of the basement membrane ECM that is involved in a variety of biological processes ranging from tissue survival to skin re-epithelialization and angiogenesis [25], and was suggested to be a key regulator in diabetic wound healing by our proteomic sequencing analysis. In addition, FN1 is also an essential intracellular component, establishing cell adhesion, cell migration and cytoskeletal organization [26]. Bioinformatic data from the Human Protein Atlas (www.proteinatlas.org) demonstrate that the endothelial cells of human skin are rich in FN1 [27], which is present at the same expression site as S1PR1-3. Moreover, accumulating evidence suggests that S1P binding to S1PR1 is a key mediator activating the PI3K/protein kinase B (AKT) signalling pathway to regulate angiogenesis and vascularization [28,29]. Some researchers have noted that FN1 is closely associated with the PI3K/AKT pathway [28]. In light of the colocalization of S1PR1 and FN1, together with the data from GeneCards (www.genecards.org) in Figure S3 (see online supplementary material), we hypothesize that PRP-Exos-S1P, by binding to S1PR1, mediates the regulation of FN1 via the PI3K/AKT signalling pathway, which in turn improves the ECM of blood vessels. Accordingly, this research employed a diabetic model both in vivo and in vitro, with or without PRP-Exos intervention, to explore the effects on angiogenesis and wound healing and the potential mechanisms. In the present study, we demonstrate that PRP-Exos-S1P promotes angiogenesis and wound healing in a diabetic model via the S1PR1/AKT/FN1 signalling pathway. Our findings provide a preliminary theoretical foundation for the clinical application of PRP-Exos for diabetic foot ulcers.

Activation of platelets
Whole blood (1500 ml in total) was donated by 15 healthy volunteers (age: 25 ± 5 years; gender: 8 males and 7 females). All volunteers provided written informed consent following approval by the Ethical Committee Board of Chongqing Emergency Medical Center (number: AF/04/02.0, date: 9 June 2021). PRP was isolated with a fully automatic blood separator (CS-3000 Plus; Baxter International Inc, Deerfield, IL, USA), with an average concentration of 1048 × 10^9 platelets/l. Fresh PRP was activated with a prepared mixture (1000 IU of thrombin dry powder and 1 ml of 10% calcium gluconate solution) at a ratio of 10:1. The mixed liquid was shaken thoroughly and incubated at 37°C for 1 h.

Separation of PRP-Exos
The detailed preparation protocol for PRP-Exos was described in our previous study [16]. After PRP activation was finished, a series of gradient centrifugations at 500, 1500 and 2500 g for 10 min each at 4°C was conducted. The supernatants were then collected and passed through a 0.45 μm filter to remove large residues. Exosome isolation was conducted by high-speed centrifugation (Optima L-100XP, Beckman) twice for 70 min at 100,000 g. After the first phase was finished, the supernatants were collected and named PRP-AS. The precipitates were then washed with sterile phosphate-buffered saline (PBS, pH = 7.2-7.4) and spun for another 70 min at 100,000 g. Finally, the precipitated exosomes, named PRP-Exos, were resuspended in 100 μl of sterile PBS and stored at -80°C.
Identification of PRP-Exos
Transmission electron microscopy was used to observe PRP-Exo morphology. Nanoparticle tracking analysis was used to measure the size distribution of PRP-Exos. Exosome-specific surface biomarkers, such as cluster of differentiation 63 (CD63), flotillin and TSG101, and the platelet-specific biomarkers CD41 and calnexin were examined by western blotting.

Inhibitor and agonist
The AKT phosphorylation inhibitor LY294002 (S1005, Selleck) and agonist SC79 (S7863, Selleck) were used to study the PI3K/AKT signalling pathway. After HUVECs were seeded onto 6-well plates and grown to 90% confluence, the previous medium was substituted with medium containing LY294002 or SC79. The effects of LY294002 and SC79 were analysed by western blotting.

ELISA
ELISA kits were used to measure the concentration of S1P (JM-0653H, JingMei) in the PRP-Exos, PRP-AS and normal control (NC) groups, and the concentration of VEGF-A (EK0539, Boster) in the supernatants of HUVECs transfected with FN1 siRNA (si-FN1) or negative control small RNA sequences. Before the measurement of S1P in PRP-Exos, three freeze-thaw cycles were necessary to guarantee that S1P was fully released from the PRP-Exos. All subsequent steps were performed according to the manufacturers' instructions.

HUVEC transfection
The S1PR1 shRNA (shS1PR1) plasmids and si-FN1 were synthesized by Tsingke Biotechnology Company (Beijing, China). Lipofectamine™ 3000 (L3000015, Invitrogen) was used for transfection. After HUVECs were seeded onto 6-well plates and grown to 70% confluence, shS1PR1 plasmids or si-FN1 were transfected into the HUVECs. All subsequent steps were performed according to the Lipofectamine™ 3000 instructions. To construct a stable S1PR1 knockdown in HUVECs, puromycin (ST551, Beyotime) was used to screen wild-type HUVECs. The effects of the shS1PR1 plasmids and si-FN1 were verified by western blotting. The sequences of shS1PR1 and si-FN1 are listed in Table 1.

Table 1. The sequences of shS1PR1 and si-FN1
ID | Forward (5'-3') | Reverse (5'-3')

HUVEC viability and proliferation
HUVEC viability was measured using a Cell Counting Kit-8 (CCK-8) (C6005, NCM Biotech), and all subsequent steps were performed according to the CCK-8 instructions. CCK-8 absorbance was read at 450 nm using an iMark™ microplate absorbance reader (1681135, Bio-Rad). HUVEC proliferation was measured using a BeyoClick™ 5-ethynyl-2'-deoxyuridine (EdU) cell proliferation kit with Alexa Fluor 555 (C0075S, Beyotime). After HUVECs were seeded onto 12-well plates containing clean, flat slides and grown to 70% confluence, PRP-Exos, PRP-AS or NC was added to the plates. The slides were removed and fixed with 4% paraformaldehyde for 20 min. All subsequent steps were performed according to the EdU kit instructions. Images were captured with an immunofluorescence (IF) microscope (ZEISS, Germany). The positive rate (cells labelled with Alexa Fluor 555 / cells labelled with 4',6-diamidino-2-phenylindole (DAPI)) was used to evaluate proliferative capacity.

HUVEC migration
After HUVECs were seeded onto 24-well plates and grown to 90% confluence, cell scratches were made with 200 μl pipette tips, and then PRP-Exos, PRP-AS or NC was added to the plates. The results were observed over 40 h with a Lionheart FX automated microscope (BioTek, USA).
HUVECs were seeded onto the upper chambers of transwell plates (3422, Corning) at a concentration of 4 × 10^5/ml per 100 μl of serum-free medium. The lower chambers were filled with 10% fetal bovine serum medium mixed with PRP-Exos, PRP-AS or NC. After incubation at 37°C for 16 h, the transwell plates were removed, fixed with 4% paraformaldehyde for 20 min and then stained with crystal violet solution for 15 min. The stained upper wells were wiped with clean cotton swabs before photos were taken using a microscope system.

Tube formation
Tube formation experiments based on matrix gels (356234, Corning) were conducted to evaluate the capacity of HUVECs to form blood vessels. Briefly, 70 μl of liquid gel was evenly embedded per well of a 48-well plate on ice, and the plates were then transferred to an incubator at 37°C for 1 h. After the gel had fully solidified into a thin, transparent substance, HUVECs were seeded onto the gel at a concentration of 5 × 10^5/ml per 100 μl and mixed with PRP-Exos, PRP-AS or NC. Tube formation was observed and recorded by microscopy within 6 h.

Flow cytometry
Flow cytometry was used to analyse the cell cycle of HUVECs treated with PRP-Exos, PRP-AS or NC. All steps were performed according to the instructions of the cell cycle and cell apoptosis analysis kit (C1052, Beyotime). Measurement of the cell cycle distribution was performed using a flow cytometer (Beckman, USA).

Q-PCR array
Total RNA was extracted with TRIzol™ reagent (15596018, Invitrogen), and cDNA was synthesized from 1 μg of total RNA with a high-capacity cDNA synthesis kit (4368814, Invitrogen). Next, real-time quantitative PCR (RT-qPCR) was performed using the iTaq™ Universal SYBR Green Supermix kit (1725120, Bio-Rad). The results were analysed with CFX™ 96 Manager (Bio-Rad, USA). The expression of each mRNA was then quantitated through normalization to the internal reference glyceraldehyde-3-phosphate dehydrogenase (GAPDH), and the 2^-ΔΔCT method was applied to determine the relative quantity. All forward and reverse primers are listed in Table 2.
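The 2^-ΔΔCT calculation mentioned above is a simple transformation of the raw Ct values, first normalising to GAPDH and then to the control condition. The sketch below is generic, with made-up Ct numbers for illustration only.

```python
# Relative quantification by the 2^-(delta-delta Ct) method: normalise the
# target gene to the GAPDH reference (delta Ct), then to the control
# sample (delta-delta Ct); the result is the fold change vs control.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct = ct_target - ct_ref                   # normalise to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl    # same for the control sample
    return 2.0 ** -(d_ct - d_ct_ctrl)

# e.g. a hypothetical S1PR1 measurement: high-glucose vs normal-glucose cells
print(relative_expression(24.0, 18.0, 26.0, 18.5))  # ~2.83-fold increase
```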
Western blotting analysis
Protein was extracted from HUVECs with RIPA solution (P0013B, Beyotime) combined with phenylmethanesulfonyl fluoride (PMSF) and a phosphatase inhibitor. All protein samples were then mixed with 1× SDS-PAGE protein loading buffer (P0015A, Beyotime). After the samples were denatured at 100°C for 10 min, electrophoresis was performed on a 10% SDS-PAGE gel (Bio-Rad, USA) at 80 V for 30 min and 120 V for 90 min. The proteins were then blotted onto a polyvinylidene difluoride (PVDF) membrane at 240 mA for 2 h in ice water. Next, the PVDF membrane was blocked in QuickBlock™ buffer (P0252, Beyotime) for 40 min and incubated with diluted primary antibodies (Table 3) at 4°C overnight. After washing three times with Tris-buffered saline with Tween (TBST), horseradish peroxidase-conjugated (HRP-conjugated) antibodies were added to the PVDF membrane. After three further washes, the immunoreactive bands were visualized using a ChemiDoc Touch system (Bio-Rad, USA). We used a Spectra [...]

Animals
Development of the diabetic model
Wild-type C57BL/6 male mice were fed a high-fat diet (60% fat) from 3 weeks of age. After 4 weeks of the high-fat diet, intraperitoneal injections of 20 mg/kg streptozotocin (Sigma-Aldrich) mixed with citrate buffer (pH = 3.2-3.5) were administered to the mice. Before the intraperitoneal injection regimen began, fasting for 12 h overnight (with free access to water) was required, and injections continued for five consecutive days. After the five streptozotocin injections, fasting serum glucose was tested and recorded. A blood glucose level >16.7 mM was defined as a successful diabetic mouse model [30]. The present animal study was approved by the Animal Ethics Committee of the Chongqing Emergency Medical Center.

Development of the diabetic wound model
After the diabetic model was established successfully, the high-fat diet was continued until the establishment of the diabetic wound model began. All diabetic mice were anaesthetized by isoflurane inhalation. The hair on the back was fully removed before a full-thickness wound was made on the dorsal skin using 6 mm punch biopsy needles (Figure 6a). The process of wound healing was observed and imaged on days 0, 3, 7 and 11. The mice were sacrificed after the operations on days 0, 3, 7 and 11, and the wound edges and surrounding normal skin were harvested for further histological investigation.

Histology
The wound edges with part of the surrounding normal skin were removed and fixed with 4% paraformaldehyde after the operations on days 0, 3, 7 and 11. A graded ethanol series was used to embed the samples in paraffin. Hematoxylin & eosin (H&E) staining and Masson trichrome staining were performed and imaged with a microscope system. Full-slide H&E scanning was conducted to obtain complete and clear views. In addition, IF staining for CD31 and FN1 was performed, using cyanine-3-labelled (Cy3-labelled) goat anti-mouse IgG along with Alexa Fluor 488-labelled goat anti-rabbit IgG. Images were acquired using an IF microscope (ZEISS, Germany).
Statistics
All data are presented as means ± standard deviations. We used the independent-samples t test to evaluate the difference between two groups. In the diabetic wound groups, two-factor repeated-measures analysis of variance [...]

Results
Identification of PRP-Exos
PRP-Exos exhibited typical cup- or sphere-shaped morphology (Figure 1a). Nanoparticle tracking analysis showed that PRP-Exos ranged from 50-150 nm in diameter, matching the distinctive size distribution of exosomes (Figure 1b). Western blotting was used to measure the exosome surface biomarkers CD63, flotillin and TSG101 (p < 0.001) and the platelet-specific biomarkers CD41 and calnexin (p < 0.01) (Figure 1c, d). All the experiments above confirmed that exosomes were successfully isolated from PRP.

Evaluation of wound healing treated with PRP-Exos
Digital photos and schematic pictures show the process of diabetic wound healing following treatment with PRP-Exos, PRP-AS or NC at days 0, 3, 7 and 11 (Figure 2a, b), indicating that the wound area was significantly smaller in the PRP-Exos group than in the PRP-AS or NC group at the same observation time. Quantification of wound closure rates showed that the effect of PRP-Exos was significantly greater than that of the equivalent PRP-AS or NC (p < 0.01) (Figure 2c). The horizontal black lines in the H&E staining pictures indicate the new epithelial layer (Figure 2d). Quantification of the degree of re-epithelialization demonstrated that the effect of PRP-Exos was significantly greater than that of the equivalent PRP-AS or NC (p < 0.01) (Figure 2e). Masson trichrome staining for measurement of collagen deposition volume showed no significant difference (p > 0.05) among the PRP-Exos, PRP-AS and NC groups (supplement 1, parts a and b, see online supplementary material). IF staining for CD31 showed that the formation of new blood vessels was significantly greater in the PRP-Exos group than in the PRP-AS or NC group (Figure 2f, g). All these experiments suggested that PRP-Exos exerted significant angiogenesis-inducing and re-epithelialization-inducing effects on the promotion of diabetic wound healing.

Measurement of S1P in PRP-Exos and S1PR in vivo
Quantification of S1P in PRP-Exos was performed using ELISA. The results demonstrated that the concentration of S1P in PRP-Exos was significantly higher than that in PRP-AS (p < 0.01), and the level of S1P was directly proportional to the level of PRP-Exos (Figure 3a). Measurement of S1PR1-3 mRNA levels in human and mouse skin was conducted by Q-PCR. The results showed that the S1PR1 level was significantly elevated in diabetic skin compared with nondiabetic skin (p < 0.0001), whereas S1PR2 and S1PR3 did not differ significantly in either group (Figure 3b, c). Furthermore, S1PR1 was highly expressed in the endothelial cells of human skin, as the Human Protein Atlas suggests (Figure 3d). Western blotting and quantification in wild-type HUVECs showed that S1PR1 expression was significantly higher in the high glucose group than in the normal glucose group (p < 0.001), whereas S1PR2 and S1PR3 again did not differ significantly (Figure 3e, f).
Assessment of HUVEC function
To explore the optimal conditions for HUVECs treated with PRP-Exos, endocytosis assays and CCK-8 assays were conducted. The data showed that PRP-Exos labelled with PKH26 (a fluorescent exosome label) were carried into HUVECs within 6 h (Figure 4a). The CCK-8 assay showed that HUVEC viability peaked at the 18th hour (p < 0.0001) after treatment with PRP-Exos (Figure 4b). HUVEC proliferation was analysed by EdU assay (Figure 4c), demonstrating that PRP-Exos significantly promoted HUVEC proliferation compared with the equivalent PRP-AS (p < 0.05). Flow cytometry was conducted to monitor the cell cycle (Figure 4d), indicating that the percentage of S-phase cells in the PRP-Exos group did not differ significantly from that in the PRP-AS group. Cell scratch and transwell assays were conducted to assess HUVEC migration. Continuous detection in living cells over 40 h revealed the process of HUVEC migration; the black horizontal lines represent the degree of wound closure (Figure 4e). Quantification of wound closure rates showed that HUVEC migration in the PRP-Exos group was significantly promoted compared with that in the equivalent PRP-AS group (p < 0.05) (Figure 4f). This result was consistent with the transwell assay [...]. The results showed that the tube segment length in the PRP-Exos group was significantly greater than that in the PRP-AS group (p < 0.0001) (Figure 4i, j). After the impressive facilitation of tube formation by PRP-Exos was observed, we performed an additional study (supplement Figure S2e-h). The results showed that the tube segment length was significantly shorter in HUVECs treated with shS1PR1 than in those treated with vector (p < 0.0001) (Figure 4k, l). Consistent results (p < 0.0001) were observed for vector + S1P vs shS1PR1 + S1P and vector + PRP-Exos vs shS1PR1 + PRP-Exos.

FN1 and p-AKT are involved in PRP-Exos-S1P-induced angiogenesis
According to the proteomic sequencing analysis (Figure 5a), FN1 was significantly lower in diabetic skin than in nondiabetic skin (p < 0.0001). Data from the Human Protein Atlas showed that FN1 is present at a high level in the endothelial cells of human skin (Figure 5b). In light of the colocalization of S1PR1 and FN1, along with our bioinformatic work (supplement 3, part a, see online supplementary material), we hypothesized that the PRP-Exos-S1P-mediated S1PR1/AKT signalling pathway is closely related to FN1. Interestingly, western blotting showed that FN1 (p < 0.001), p-AKT (p < 0.01) and VEGF-A (p < 0.01) levels were [...]

Evaluation of wound healing treated with adenovirus-shS1PR1, PRP-Exos and S1P
The process of diabetic wound healing after treatment with adenovirus-GFP, adenovirus-shS1PR1, adenovirus-GFP + S1P, adenovirus-shS1PR1 + S1P, adenovirus-GFP + PRP-Exos and adenovirus-shS1PR1 + PRP-Exos at days 0, 3, 7 and 11 is shown in Figure 6a, b. Quantification of wound closure rates (Figure 6c) indicated that the knockdown of S1PR1 significantly inhibited the process of wound healing compared with the GFP group at the same observation time (p < 0.0001). Similar outcomes were found in the GFP + S1P vs shS1PR1 + S1P (p < 0.01) and GFP + PRP-Exos vs shS1PR1 + PRP-Exos (p < 0.001) comparisons (Figure 6d, e). IF staining for CD31 and FN1 was conducted for further study. The results showed that CD31 (p < 0.01) and FN1 (p < 0.05) levels were significantly lower in the shS1PR1 group than in the GFP group (Figure 6f-h). Consistent outcomes were seen for shS1PR1 + S1P vs GFP + S1P and shS1PR1 + PRP-Exos vs GFP + PRP-Exos. Thus, we hypothesized that FN1 is involved in PRP-Exos-S1P-induced angiogenesis in vivo.
FN1 directly regulates VEGF-A levels
To explore the relationship between FN1 and VEGF-A, si-FN1 and si-NC were used. Western blotting and quantification (Figure 8a, b) showed that VEGF-A levels were significantly lower in the si-FN1 group than in the si-NC group (p < 0.0001). In addition, ELISA was performed to test VEGF-A levels in the supernatants of HUVECs treated with si-FN1 (Figure 8c), indicating that VEGF-A levels were also significantly lower in the si-FN1 group than in the si-NC group (p < 0.01). Moreover, IF staining and quantification of VEGF-A (Figure 8d, e) further confirmed that VEGF-A levels were significantly lower in the si-FN1 group than in the si-NC group (p < 0.01).

Discussion
Unlike normal wounds, diabetic wounds are stuck in oxygen-insufficient and blood-insufficient conditions and are prone to becoming chronic wounds [7,31]. Thus, the regeneration of blood vessels in impaired wounds and the release of angiogenesis-inducing cytokines are considered in the present study [7,32]. Currently, PRP exerts an outstanding effect on diabetic wound repair, but some limitations still exist. Our study indicates that PRP-Exos promote angiogenesis and accelerate wound closure in a diabetic mouse model. Although PRP-Exos contain abundant cargo, the S1P in PRP-Exos is not often mentioned. In this study, the role of S1P derived from PRP-Exos is investigated and reported for the first time. Furthermore, we elucidated the molecular mechanism of PRP-Exos in angiogenesis regulation by verifying the role of FN1 [22,33,34]. In the past, FN1, as a component of the ECM, was considered to have a biomechanical role in providing substrates for cell migration and adhesion via direct interactions with cell-surface receptors. Our study supports that FN1, as a downstream regulator, is involved in the PI3K/AKT pathway, exerting an angiogenesis-inducing effect and directly regulating VEGF-A expression.
One of the key findings of our study is the confirmation that S1P is mainly enriched in PRP-Exos, not in PRP-AS. Our quantitative ELISA analysis of S1P in PRP-Exos makes the results more convincing. In addition, we established the shS1PR1 model under high-glucose conditions both in vivo and in vitro. The S1PR1/AKT/FN1 signalling pathway responsible for delayed wound healing is explored for the first time (Figure 9). PRP-Exos-S1P has been shown to partly facilitate new blood vessel formation and wound closure by binding to S1PR1 in a diabetic wound model, and our findings verify that knockdown of S1PR1 attenuates these effects.

In light of the scarce resources and inconvenient preparation of autologous PRP, as well as the potential immune reaction to allogeneic PRP, we explored exosomes derived from PRP as a means of accelerating wound closure that could overcome these weaknesses. Here, PRP-Exos were successfully harvested with a standard preparation method. Although there is a long way to go before the wide application of PRP-Exos in the regeneration field, PRP-Exos remain a promising and ideal 'biological material' owing to their special structure, their capacity for cross-organ and cross-species communication, and their convenient transportation and preservation [35,36]. Accordingly, PRP-Exos combined with biomaterials have already been applied successfully. A specialized transforming growth factor-β-loaded PRP-Exos hydrogel could enhance ischaemic wound healing in vivo and in vitro [37]. In addition, PRP-Exos were reported to facilitate recovery after muscle strain injury in vivo [38]. A similar experiment showed that PRP-Exos exerted an osteoprotective action by activating the Wnt/β-catenin pathway [20].

Although the present study suggests that PRP-Exos represent a promising therapeutic approach for diabetic wounds, several important issues still have to be addressed in future research. The specific mechanism linking FN1 and VEGF-A should be further investigated. In addition, it is worth noting that streptozotocin-induced, high-glucose animal models were used in this study, which cannot fully mimic diabetes in humans. Thus, further clinical studies should be conducted to evaluate the efficacy and safety of PRP-Exos in patients with diabetic foot ulcers.

Conclusions

To our knowledge, this is the first time that S1P derived from PRP-Exos has been shown to promote angiogenesis and wound healing in a diabetic model by binding to S1PR1. PRP-Exos-S1P facilitates vascularization and wound closure through the S1PR1/AKT/FN1 signalling pathway. Our findings highlight the importance of using PRP-Exos as a biological therapy for diabetic wound healing.

Figure 3. Measurement of S1P in PRP-Exos and S1PR in vivo. (a) The concentration of S1P in the PRP-Exos, PRP-AS and NC groups was measured by ELISA. (b) Relative mRNA levels of S1PR1-3 in human skin were analysed. (c) Relative mRNA levels of S1PR1-3 in mouse skin were analysed. (d) Data from the Human Protein Atlas verified that S1PR1 is abundant in the endothelial cells of human skin. (e, f) Western blotting and quantification showing the relative levels of S1PR1-3 in HUVECs. ****p < 0.0001, **p < 0.01, ns no significant difference. S1P sphingosine-1-phosphate, S1PR sphingosine-1-phosphate receptor, NG normal glucose, HG high glucose, HP high permeation, NC normal control, PRP platelet-rich plasma, PRP-AS activated supernatants of PRP, PRP-Exos exosomes derived from PRP
Figure 4. Assessment of HUVEC function. (a) Endocytosis assay indicating that exosomes labelled with PKH26 were taken up by HUVECs within 6 h; F-actin marks the cytoskeleton. (b) CCK-8 assays were conducted to find the optimal time for HUVECs treated with PRP-Exos; H0 refers to time zero. (c) EdU assays were conducted to evaluate HUVEC proliferation. Positive rate = (cells labelled with Alexa Fluor 555/cells labelled with DAPI) x 100%. (d) The HUVEC cell cycle was measured by flow cytometry and the percentage of S phase cells was determined. (e, f) Cell scratch assays were observed in living cells at 0, 10, 20, 30 and 40 h. (g, h) Transwell assays were conducted to evaluate HUVEC migration. (i-l) Tube formation assays were conducted, and segment length was used to evaluate HUVEC tube formation capacity. ****p < 0.0001, **p < 0.01, *p < 0.05, ns no significant difference. S1P sphingosine-1-phosphate, shS1PR1 sphingosine-1-phosphate receptor 1 shRNA, NC normal control, PRP platelet-rich plasma, PRP-AS activated supernatants of PRP, PRP-Exos exosomes derived from PRP

Figure 7. PRP-Exos-S1P regulates FN1 via the AKT signalling pathway. (a-d) The relative levels of FN1, p-AKT and VEGF-A in HUVECs after different treatments were measured by western blotting in the LY294002 vs NC and LY294002 + PRP-Exos vs PRP-Exos groups. (e-h) The relative levels of FN1, p-AKT and VEGF-A in HUVECs after different treatments were measured by western blotting in the SC79 vs NC and SC79 + shS1PR1 vs shS1PR1 groups. (i, j) Immunofluorescence for FN1 was measured in the LY294002 vs NC and LY294002 + PRP-Exos vs PRP-Exos groups. (k, l) Immunofluorescence for FN1 was measured in the SC79 vs NC and SC79 + shS1PR1 vs shS1PR1 groups. ***p < 0.001, **p < 0.01, *p < 0.05. LY294002 inhibitor of AKT phosphorylation, SC79 agonist of AKT phosphorylation, NC normal control, PRP platelet-rich plasma, PRP-Exos exosomes derived from PRP, FN1 fibronectin 1, p-AKT phosphorylated protein kinase B, t-AKT total protein kinase B, VEGF-A vascular endothelial growth factor A

Figure 8. FN1 directly regulates VEGF-A levels. (a, b) The relative levels of VEGF-A were measured by western blotting in the si-FN1 vs si-NC groups. (c) ELISA assays were conducted to measure VEGF-A levels in HUVEC supernatants after different treatments. (d, e) Immunofluorescence for VEGF-A labelled with Alexa Fluor 488 was conducted. ****p < 0.0001, **p < 0.01. si-NC negative control siRNA, si-FN1 fibronectin 1 siRNA, FN1 fibronectin 1, VEGF-A vascular endothelial growth factor A

Table 2. Primers used in the study
Acetyl-CoA Carboxylases and Diseases

Acetyl-CoA carboxylases (ACCs) are enzymes that catalyze the carboxylation of acetyl-CoA to produce malonyl-CoA. In mammals, ACC1 and ACC2 are the two members of the ACC family. ACC1 localizes in the cytosol and acts as the first and rate-limiting enzyme of the de novo fatty acid synthesis pathway. ACC2 localizes on the outer membrane of mitochondria and produces malonyl-CoA that regulates the activity of carnitine palmitoyltransferase 1 (CPT1), which is involved in the β-oxidation of fatty acids. Fatty acid synthesis is central to a myriad of physiological and pathological conditions. ACC1 is the major ACC isoform in mammals, and a large body of literature documents its roles in various diseases, such as cancer, diabetes and obesity. In addition, acetyl-CoA and malonyl-CoA are cofactors in protein acetylation and malonylation, respectively, so the manipulation of acetyl-CoA and malonyl-CoA pools by ACC1 can also markedly influence the profile of protein post-translational modifications, resulting in altered biological processes in mammalian cells. In this review, we summarize our understanding of ACCs, including their structural features, regulatory mechanisms, and roles in diseases. ACC1 has emerged as a promising target for disease treatment, so specific ACC1 inhibitors for disease treatment are also discussed.

INTRODUCTION

In mammalian cells, acetyl-CoA is a global currency that mediates carbon transactions between metabolic pathways, including glycolysis, the tricarboxylic acid cycle, amino acid metabolism, gluconeogenesis, and fatty acid synthesis. Lipid (fatty acid) metabolism acts as the bank of acetyl-CoA: it can deposit surplus acetyl-CoA in the form of fatty acids and regulate the intracellular availability of acetyl-CoA to the global metabolic network by controlling the conversion of acetyl-CoA into fatty acids. As such, fatty acid synthesis is a central pathway that harnesses a myriad of metabolic pathways and related physiologies in cells.

Acetyl-CoA carboxylases (ACCs) are enzymes that catalyze the carboxylation of acetyl-CoA to produce malonyl-CoA, which in turn is utilized by fatty acid synthase (FASN) to produce long-chain saturated fatty acids (1). There are two ACC members in mammalian cells. ACC1 localizes in the cytosol and takes the major responsibility for converting cytoplasmic acetyl-CoA into malonyl-CoA for fatty acid synthesis (2). Although ACC2 also catalyzes the conversion of acetyl-CoA into malonyl-CoA, it localizes on the outer membrane of mitochondria, which gives ACC2-produced malonyl-CoA a different downstream fate from that produced by ACC1. It has been reported that ACC2-produced malonyl-CoA can allosterically influence the activity of carnitine palmitoyltransferase 1 (CPT1) in fatty acid β-oxidation (3). More functional studies on ACC2 are expected in this field.

Fatty acid synthesis controls the storage and expenditure of carbon and energy and can thereby regulate the activities of other metabolic pathways, such as amino acid and glucose metabolism. Consequently, fatty acid synthesis is frequently altered to harness the intracellular metabolic network to meet the material and energy requirements of disease progression, as in cancer and metabolic diseases (4-8). ACC1 is the first and rate-limiting enzyme of fatty acid synthesis and plays a central role in it, making ACC1 the hub of the fatty acid synthesis-related metabolic network.
Its dysregulation in disease has been intensively studied, including the roles of ACC1 in regulating tumour cell proliferation and migration and the progression of metabolic diseases (9-12). In addition, because acetyl-CoA and malonyl-CoA are cofactors in protein acetylation and malonylation, respectively, emerging non-metabolic functions of ACC1 in disease have been discussed in recent studies (11,13,14), further expanding the roles of ACC1 in physiology and pathophysiology. ACC1 is therefore considered a promising therapeutic target for diseases such as cancer and metabolic diseases. This review summarizes our current knowledge of ACCs, including the structure of ACCs, their regulatory mechanisms, and their roles in tumorigenesis and metabolic diseases. We also briefly introduce ACC inhibitors that are under investigation for cancer and metabolic disease therapy.

STRUCTURE OF ACETYL-COA CARBOXYLASES

In mammals, ACCs have two isoforms: ACC1 and ACC2. Human ACC1 (ACCα, 265 kDa) is encoded by the ACACA gene on chromosome 17q12, while ACC2 (ACCβ, 275 kDa) is encoded by the ACACB gene on chromosome 12q23 (15). ACC1 and ACC2 share 75% identity in amino acid sequence and are composed of conserved domains required for enzyme activity (16,17). ACC1 and ACC2 have similar structures and molecular bases for catalyzing the carboxylation of acetyl-CoA to produce malonyl-CoA; ACC1 is discussed here as representative of ACC structure.

ACC1 contains three major functional domains: a biotin carboxylase (BC) domain, a carboxyl transferase (CT) domain, and a biotin carboxyl carrier protein (BCCP) domain that links the BC and CT domains (Figure 1). To perform its catalytic activity, the BC domain of ACC1 first consumes ATP and bicarbonate to catalyze the carboxylation of biotin, with bicarbonate serving as the donor of the carboxyl moiety (18). The BCCP domain then transfers the carboxyl moiety from the carboxylated biotin to the CT domain (19), where it is transferred to acetyl-CoA, converting acetyl-CoA into malonyl-CoA (20) (Figure 1). Although the BC and CT domains are linked by the BCCP domain within a single ACC1 molecule, the spatial dimensions of ACC1 are so broad that the functional domains are spatially separated, so the carboxylated biotin in the BC domain can hardly reach the acetyl-CoA bound by the CT domain of the same ACC1 molecule. To link the cascade of acetyl-CoA carboxylation reactions, ACC1 molecules form homodimers, enabling the carboxylated biotin in the BC domain of one molecule to reach the acetyl-CoA in the CT domain of the other molecule of the homodimer (19-23). Therefore, regulation of ACC1 homodimer formation is considered an important mechanism controlling the acetyl-CoA carboxylation activity of ACC1 (the overall chemistry is summarized in an equation block below).

DISTRIBUTION AND FUNCTIONS OF ACETYL-COA CARBOXYLASES

ACC1 and ACC2 are widely distributed in mammalian organs and tissues. ACC1 is highly enriched in lipogenic tissues, such as liver and adipose tissue, while ACC2 is mainly expressed in oxidative tissues, such as cardiac and skeletal muscle (24), consistent with the functions of ACC1 in lipogenesis and of ACC2 in regulating fatty acid β-oxidation. In mammalian cells, ACC1 and ACC2 are differently distributed (Figure 2).
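For reference, the two-step catalytic mechanism described in the structure section above can be written as a pair of half-reactions. This is a textbook-level summary of biotin-dependent carboxylase chemistry added for clarity; it is not quoted from the cited structural studies:

```latex
% BC and CT half-reactions of ACC catalysis (standard biochemistry)
\begin{align*}
\text{BC domain:} \quad & \text{biotin--BCCP} + \mathrm{ATP} + \mathrm{HCO_3^-}
  \;\longrightarrow\; \text{carboxybiotin--BCCP} + \mathrm{ADP} + \mathrm{P_i} \\
\text{CT domain:} \quad & \text{carboxybiotin--BCCP} + \text{acetyl-CoA}
  \;\longrightarrow\; \text{biotin--BCCP} + \text{malonyl-CoA} \\[2pt]
\text{Net:} \quad & \text{acetyl-CoA} + \mathrm{ATP} + \mathrm{HCO_3^-}
  \;\longrightarrow\; \text{malonyl-CoA} + \mathrm{ADP} + \mathrm{P_i}
\end{align*}
```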
ACC1 is a cytoplasmic protein that catalyzes the conversion of acetyl-CoA into malonyl-CoA, which is utilized by fatty acid synthase (FASN) in de novo fatty acid biosynthesis (2). It controls the synthesis of mid- and long-chain fatty acids that serve as building blocks for cell biological processes. Inhibiting ACC1 with 5-tetradecyloxy-2-furoic acid (TOFA) can completely inhibit hepatic de novo lipogenesis (DNL), which is considered a new strategy for treating non-alcoholic fatty liver disease (NAFLD) (25). Soraphen A, another ACC1 inhibitor, can pharmacologically inhibit fatty acid synthesis in diet-induced obese mice and significantly suppress weight gain, shedding new light on controlling diet-induced obesity (26). Liver-specific ACC1 knockout (LACC1 KO) mice survive but show dysregulated lipid metabolism and deficient triglyceride metabolism (27). In cancer cells, inhibition of ACC1 by Soraphen A significantly reduces saturated and monounsaturated phospholipid species and increases the proportion of polyunsaturated species, rendering cells vulnerable to oxidative stress (28). Moreover, activity-impeded ACC1 reduces plasma membrane fluidity and impairs the mobility of transmembrane receptors, ultimately impairing membrane-dependent biological processes (29).

ACC2 contains a hydrophobic N-terminal region that anchors it to the outer membrane of mitochondria (25). Mitochondria-located ACC2 also catalyzes the conversion of acetyl-CoA into malonyl-CoA. However, instead of entering de novo fatty acid biosynthesis, the ACC2-generated malonyl-CoA locally interacts with carnitine palmitoyltransferase 1 (CPT1), which also localizes on the outer mitochondrial membrane. CPT1 accounts for the first step of long-chain fatty acid β-oxidation in mitochondria. The binding of malonyl-CoA allosterically inhibits the activity of CPT1 and therefore influences fatty acid β-oxidation in mitochondria (30). In animal experiments, inhibition of ACC2 increases hepatic fat oxidation, reduces hepatic lipids, and improves hepatic insulin sensitivity in mice with NAFLD (31), findings further confirmed in mice with genetic depletion of ACC2 (3,32,33). In addition, the fatty acid oxidation rate in the soleus muscle of ACC2-/- mice is 30% higher than that of wild-type mice and is not affected by the addition of insulin, leading to reduced body weight under normal food intake and slower weight gain on high-fat/high-carbohydrate diets (34).

In addition to their roles in metabolic flux, fatty acids, acetyl-CoA, and malonyl-CoA are effector molecules that participate in cellular signaling pathways (35-37). Correspondingly, ACCs, as the consumer of acetyl-CoA, the producer of malonyl-CoA and the rate-limiting enzyme of fatty acid synthesis, play intriguing roles in regulating cellular signaling networks. For example, polyunsaturated fatty acids (PUFAs) are the precursors of various signaling molecules, such as eicosanoids, that regulate the activity of sterol-regulatory-element-binding protein 1c (SREBP1c) in hepatic fatty acid metabolism (38).

FIGURE 2 | ACCs in fatty acid metabolism. ACC1 is a cytoplasmic protein that catalyzes the conversion of acetyl-CoA to malonyl-CoA in de novo fatty acid biosynthesis (2). The hydrophobic N-terminal region of ACC2, on the other hand, allows its localization to the outer membrane of the mitochondria, where it regulates CPT1, which controls fatty acid β-oxidation (25). Created with BioRender.com.
Inhibition of ACCs is considered a promising strategy for treating liver diseases (39). On the other hand, it leads to a decrease in malonyl-CoA and in the synthesis of downstream PUFAs, which in turn activates SREBP1c and upregulates the expression of glycerol-3-phosphate acyltransferase 1 (GPAT1), which catalyzes triglyceride synthesis, stimulating hepatic VLDL secretion and leading to hypertriglyceridemia (40). As such, hypertriglyceridemia tends to accompany ACC-targeting therapies. Acetyl-CoA is another instance: it can regulate gene transcription by donating the acetyl moiety in acetyltransferase-mediated histone acetylation (41). Inhibition of ACCs can increase the intracellular acetyl-CoA level and stimulate calcium influx into cells, which leads to the activation and translocation of NFAT (nuclear factor of activated T cells 1) into the nucleus to promote the transcription of adhesion- and migration-related genes, promoting the adhesion and migration of glioblastoma cells through Ca2+-NFAT signaling. Malonyl-CoA plays roles in regulating dietary behavior and appetite (42). It has been shown that mammalian neural tissue can rapidly convert administered acetate into acetyl-CoA, which subsequently enters the Krebs cycle to promote ATP production. An excessive ATP level, in turn, downregulates AMPK activity and preserves ACC2's enzymatic activity. As a result, malonyl-CoA is produced to a great extent, causing upregulation of the downstream effector pro-opiomelanocortin and downregulation of neuropeptide Y, eventually leading to loss of appetite in mice (43,44).

In addition to these metabolic functions, ACC1 and ACC2 regulate protein acetylation by manipulating the availability of acetyl-CoA in cells. In liver-specific ACC1 knockout mice, the amount of acetyl-CoA in the extra-mitochondrial space is substantially elevated; it can serve as the substrate cofactor for acetyltransferases and increase protein acetylation to regulate the functionome, including the metabolic enzymes that shape the metabolic network in ACC1 KO mice (13). Another study showed that attenuated expression of ACC1 leads to a substantial increase in histone acetylation and alters transcriptional regulation, consequently regulating biological processes in cells by influencing gene transcription (14). While the causal relationship between ACC activity and protein acetylation has been confirmed, the detailed mechanism underlying ACC1-related hyperacetylation remains elusive. Moreover, ACC1-mediated regulation of protein acetylation through the intracellular concentration of acetyl-CoA might also play a role in disease development: a supportive study reported that ACC inhibition regulates Smad2 acetylation, which consequently affects Smad2 activity and breast cancer metastasis (11). Malonyl-CoA, the product of the ACC reaction, can also serve as the substrate cofactor in enzyme-catalyzed protein malonylation, and increased intracellular malonyl-CoA can result in upregulated protein malonylation, which might affect protein functions and biological activities in cells (9). Although evidence supports the importance of the non-metabolic functions of ACC1 in regulating protein modifications and functions, it is premature to draw firm conclusions about these functions in disease development and treatment.
Altogether, ACCs regulate the physiological and pathophysiological processes of cells by executing metabolic and non-metabolic functions. They must sense alterations in cells and precisely translate the altered signals into cellular responses. As such, sophisticated regulation of ACCs is required to keep the metabolic network matched to the physiology of cells.

REGULATION OF ACETYL-COA CARBOXYLASES

The activities of ACCs in cells are transcriptionally and post-transcriptionally regulated in tight association with the metabolic status of cells. In general, the protein levels and enzymatic activities of ACCs are upregulated under nutrient- and energy-abundant conditions, to store the excess nutrients and energy in the form of fatty acids. Correspondingly, the protein levels and enzymatic activities of ACCs are suppressed under nutrient- and energy-deficient conditions, to ensure that the limited energy and nutrients are utilized for survival (45,46).

AMP-activated protein kinase (AMPK) is the most studied energy sensor of the nutrient and energy status of cells and is an important regulator of ACC1. When cells suffer metabolic stresses, such as glucose deprivation or hypoxia, AMPK is activated and phosphorylates the Ser-79 residue of ACC1 (equivalent to ACC2 Ser-212) (47). Phosphorylation of Ser-79 effectively blocks the formation of the ACC1 homodimer, leaving ACC1 molecules as monomers that are unable to catalyze acetyl-CoA carboxylation (21). The fatty acid synthesis pathway is therefore suppressed. When cells return to a nutrient- and energy-abundant environment, the phosphorylation of Ser-79 can be removed by the type 2A protein phosphatase (PP2A), allowing the reformation of the ACC1 homodimer, which is active in catalyzing acetyl-CoA carboxylation (48,49) (this phosphorylation switch is sketched as a toy model below). Beyond nutrient and energy stress, the Ser-79 residue of ACC1 can be phosphorylated and maintained in that state to prevent lipogenesis in certain pathophysiological processes. For example, the breast cancer susceptibility gene 1 (BRCA1) C-terminal (BRCT) domain binds to p-ACC1 to form a BRCA1/p-ACC1 complex (50), which prevents the dephosphorylation of p-ACC1 and constantly suppresses ACC1 activity, reprogramming the metabolic network in breast cancer. Insulin-like growth factor-1 (IGF-1) treatment can disrupt the interaction between BRCA1 and p-ACC1, leading to the dephosphorylation and reactivation of ACC1 (51).

In addition to phosphorylation, metabolites associated with changes in metabolism can allosterically regulate ACC activity. For example, citrate, an intermediate of the TCA cycle, can allosterically activate ACC1 to drive fatty acid synthesis under normal conditions (52,53). Intriguingly, opposite effects of citrate on ACC activity have also been reported (54), and the underlying mechanism remains elusive. Glutamate can allosterically activate the phosphatase that mediates the dephosphorylation and activation of ACCs in cardiomyocytes, which may contribute to the cardioprotective effects of glutamine against lipolysis (55). Fatty acyl-CoAs can induce the de-dimerization of ACC1, inhibiting ACC1 activity and fatty acid synthesis in cells (54). By interacting with metabolites from different metabolic pathways, ACCs mediate the cross-talk between fatty acid synthesis and other metabolic pathways, forming a sophisticated regulatory network that keeps the metabolic status fitted to the physiology of cells.
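The phosphorylation switch just described (AMPK writes the Ser-79 mark under energy stress, PP2A erases it, and only unphosphorylated monomers can form the active homodimer) lends itself to a deliberately simplified toy model. Everything in the sketch below, from the binary stress signal to the two-monomer threshold standing in for dimerization, is an illustrative invention rather than a quantitative model from the literature:

```python
# Toy model of the AMPK/PP2A switch on ACC1: phospho-Ser-79 blocks
# homodimer formation, and only the dimer can carboxylate acetyl-CoA.
# Purely illustrative; real regulation is graded, not binary.

from dataclasses import dataclass

@dataclass
class ACC1Monomer:
    ser79_phosphorylated: bool = False

    @property
    def dimerization_competent(self) -> bool:
        # Phospho-Ser-79 blocks homodimerization (inactive monomer).
        return not self.ser79_phosphorylated

def apply_energy_state(monomer: ACC1Monomer, energy_stress: bool) -> None:
    # AMPK phosphorylates Ser-79 under energy stress (high ADP:ATP);
    # PP2A removes the mark when nutrients and energy are abundant.
    monomer.ser79_phosphorylated = energy_stress

def lipogenesis_active(pool) -> bool:
    # An active homodimer needs at least two competent monomers here.
    return sum(m.dimerization_competent for m in pool) >= 2

pool = [ACC1Monomer(), ACC1Monomer()]
for stress in (False, True, False):  # fed -> glucose-deprived -> re-fed
    for m in pool:
        apply_energy_state(m, stress)
    print(f"energy stress={stress!s:<5}  fatty acid synthesis on={lipogenesis_active(pool)}")
```

The sketch encodes only the on/off logic of the Ser-79 switch; the transcriptional and allosteric inputs described above would layer on top of it.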
The protein levels of ACC1 and ACC2 are dynamically regulated in cells. The expression level of ACC1 can be regulated by certain transcription factors; SREBP1c is a well-studied instance. The ACACA gene has two distinct SREBP binding sites, which recruit SREBP1c to initiate RNA polymerase II-dependent transcription. Carbohydrate response element (ChRE)-binding protein (ChREBP) is another transcription factor that regulates ACC1: it binds to the promoter region and activates the transcription of ACACA in response to a high-carbohydrate diet (56,57). Beyond transcription, the stability of the ACC1 protein can also be regulated. In breast cancer, small interfering RNA-mediated knockdown of aldo-keto reductase family 1 B10 (AKR1B10) induces ACC1 degradation via the ubiquitination-proteasome pathway, resulting in a marked drop in fatty acid synthesis in RAO-3 cells (58). In prostate cancer, the expression of the prolyl isomerase Pin1 positively correlates with the protein level of ACC1; Pin1 binds to ACC1 and prevents it from entering the lysosomal degradation pathway, stabilizing the ACC1 protein and enhancing ACC1 activity in cells (59).

ACC1 activity can also be regulated by post-transcriptional and translational mechanisms. The ACACA gene contains 64 exons, giving rise to 7 alternatively spliced minor exons (1A, 1B, 1C, 3, 5A', 5A, and 5B). Exon 5B can lead to transcriptional termination of the upstream exon 5 in two different transcripts, producing a short peptide and a truncated ACC1 that affect the transcriptional efficiency and activity of ACC1. These findings indicate that ACC1 activity can be regulated by post-transcriptional and translational mechanisms, with consequent suppression of fatty acid synthesis (60). The protein level of ACC1 can also be post-transcriptionally reduced in calcium/calmodulin-dependent protein kinase kinase 2 (CAMKK2) knockout cells, suppressing the proliferation of human prostate cancer cells (61).

Taken together, ACC1 and ACC2 are sophisticatedly regulated in cells to align fatty acid synthesis, and the metabolic networks that cross-talk with it, with the physiology of cells. A myriad of factors regulate ACC1 activity, including nutrients, protein kinases, phosphatases, allosteric regulators and transcription factors, among others. Dysregulation of these regulatory factors often serves as causative signaling for the development of cancer and metabolic diseases (10,54,61-63). The dysregulation of ACCs in disease has therefore been intensively studied.

DYSREGULATION OF ACETYL-COA CARBOXYLASES IN DISEASES

Fatty acid synthesis is central to the cross-talk between multiple biological processes, including membrane biosynthesis, energy storage, and the generation of signaling molecules (64). Lipogenesis is dynamically regulated in response to the physiology of cells; correspondingly, dysregulation of fatty acid synthesis can induce or promote the development of disease. ACC is the first and rate-limiting enzyme of fatty acid synthesis; it has therefore been the focus of numerous studies and has been validated as a critical participant in diseases, especially cancer and metabolic diseases (11,65-71). Signaling regulators of lipid biosynthesis are major downstream targets of oncogenes and tumour suppressor pathways, and alterations of these pathways can manipulate de novo fatty acid synthesis.
Dysregulation of fatty acid metabolism, in turn, influences the cellular processes linked to diseases such as cancer. For example, the AMPK pathway is important in regulating cell growth, lipid and glucose metabolism, and autophagy (72). It senses the relative level of ADP to ATP and is activated when the ADP-to-ATP ratio increases. When tumour cells suffer metabolic stress, AMPK can be activated and then phosphorylates ACCs to suppress the lipid biosynthesis pathway, resulting in metabolic reprogramming that influences the survival and growth of tumour cells. Mutagenic blockage of the AMPK phosphorylation sites of ACC (ACC1 Ser79Ala and ACC2 Ser212Ala) increases liver DNL and accelerates the development of hepatocellular carcinoma (HCC). The liver-specific inhibitor ND-654, which mimics ACC phosphorylation, inhibits liver DNL and the progression of HCC, improving the prognosis of tumour-bearing mice (73). In head and neck squamous cell carcinoma (HNSCC) cells, the AMPK activators cetuximab and 5-aminoimidazole-4-carboxamide-1-β-D-ribofuranoside (AICAR) can suppress tumour cell growth (74,75). Abolishing the AMPK phosphorylation sites on ACC1 by mutagenesis protects HNSCC cells from cetuximab-induced growth inhibition. Decreased AMPK activity in hereditary leiomyomatosis and renal cell cancer (HLRCC) leads to elevated ACC1 activity, which contributes to the oncogenic growth of HLRCC (76). Metformin, an AMPK agonist that promotes the phosphorylation of ACCs, can effectively suppress lipogenesis and cancer cell proliferation (10). Because ACC1 mediates AMPK-sensed metabolic stress and the downstream metabolic reprogramming of cancer, it is considered a potential target for cancer therapy.

However, some studies offer a different viewpoint on the AMPK/ACC signaling pathway in tumour growth (77). For example, under energy stress conditions, activated AMPK can phosphorylate and inhibit ACC1, which suppresses NADPH-consuming fatty acid synthesis and maintains NADPH homeostasis in tumour cells. Similarly, ACC1 depletion can also suppress NADPH consumption by fatty acid synthesis, which in turn partially facilitates solid tumour survival under stress conditions (77). Thus, under particular conditions, the AMPK/ACC signaling pathway can alternatively promote tumour cell survival by maintaining NADPH homeostasis.

The phosphatidylinositol-3 kinase (PI3K)/Akt/mammalian target of rapamycin (mTOR) pathway is another signaling pathway that senses the physiology of cells and executes important functions by regulating ACC1 activity. In general, receptor tyrosine kinase (RTK)-mediated activation of PI3K activates Akt; hyperactivated Akt then activates mTOR, which processes the upstream signals within mTORC1 (78). The PI3K/Akt/mTOR signaling pathway regulates tumour metabolism, growth, survival, and metastasis (79,80). ACC1 is tightly associated with this pathway in cancer cells. For example, the melanoma antigen ganglioside GD3 is a downstream target of PI3K/Akt/mTOR signaling; in melanoma, GD3 can induce the activation of SREBP-1, a transcription factor that regulates ACC1 expression (81). In breast cancer, the HER2 oncogene can induce ACC1 expression through translational regulation by the mTOR signaling pathway (82). Correspondingly, inhibition of ACC1 by siRNA or chemical inhibitors can inhibit AKT-related pathways, which is detrimental to cancers such as human HCC (83).
It can therefore be concluded that ACC1 protein levels and activity are regulated by various internal alterations, which in turn affect lipid synthesis in tumors. Dysregulated lipid metabolism impacts multiple intracellular processes, such as membrane synthesis and energy metabolism, which may ultimately influence tumor development. However, the mechanisms by which lipid metabolism influences tumor progression, such as proliferation and metastasis, have not been fully elucidated, and how ACC1 cross-talks with other pathways remains open for discussion.

Metabolic diseases are also tightly associated with the dysregulation of ACCs. In mammals, the accumulation of lipid in tissues such as muscle and liver is closely related to insulin resistance, which is associated with a myriad of metabolic disorders (84,85). Likewise, dysregulated lipogenesis may lead to metabolic diseases such as obesity, diabetes, and NAFLD (6-8). As central players in lipogenesis, ACCs readily participate in the progression of metabolic disease. For example, a high-fat diet leads to increased ACC1 activity and obesity in mice, while inhibition of ACC1 antagonizes high-fat diet-induced obesity. ACC2 plays roles in controlling diet-induced diabetic nephropathy (DN). High-glucose conditions promote lipid deposition and reduce fatty acid β-oxidation in human podocytes; depletion of ACC2 attenuates the high-glucose-promoted lipid deposition and podocyte injury. The expression of glucose transporter 4 (GLUT4), whose loss hampers the insulin signaling pathway, is also restored by ACC2 depletion. Besides, the expression of SIRT1/PGC-1α, an important complex in the insulin metabolic pathway, is also restored in cells with ACC2 depletion, reducing cellular insulin resistance and ultimately alleviating DN-induced cell injury (86).

ACC knockout animal models are powerful tools for understanding the roles of ACCs in the progression of metabolic diseases. Using such models, one study demonstrated that ACC1 is necessary to maintain functional pancreatic β cells and glucose homeostasis in vivo, indicating that ACC1 might be exploited to improve insulin secretion in diabetes (71). Liver-specific ACC1-KO (LACC1 KO) mice accumulate 40%-70% less triglyceride in the liver than wild-type mice under overnutrition. Similarly, ACC2 knockout (ACC2 KO) mice do not gain weight when fed a high-fat (HF) diet (34). This might be because hepatic peroxisome proliferator-activated receptor-γ (PPAR-γ) protein is significantly reduced in ACC2 KO mice fed a high-fat and high-carbohydrate (HFHC) diet; in this case, lipid synthesis-related enzymes such as ACC1, FASN, and ATP citrate lyase (ACL) are decreased, which in turn reduces diet-induced obesity. Besides, ACC2 KO mice are protected against HFHC diet-induced insulin resistance: ACC2 KO mice on an HF diet show a reduced AKT level and increased phosphorylation of AKT, which is critical in the insulin signaling pathway and can protect the mice from diabetes (87). These studies demonstrate that ACCs are responsible for metabolic disorders caused by dietary factors (27). Moreover, hyperactivation of ACC1 can also result in abnormal physiology in metabolic disease; for instance, enhanced ACC1 activity accelerates lipogenesis and lipid accumulation under overnutrition and obesity, leading to the accumulation of triglycerides in hepatocytes and thus causing NAFLD (88).
In general, dysregulated lipogenesis contributes to tumorigenesis and metabolic diseases. The roles of ACCs in the metabolic reprogramming of cancer and metabolic diseases have been revealed in a growing number of studies, shedding light on disease treatment. As such, ACCs are becoming promising targets for the discovery of novel therapeutic strategies and for therapeutics development.

ACETYL-COA CARBOXYLASES-TARGETING SMALL MOLECULES FOR THERAPEUTIC PURPOSES

With the evidence of ACC participation in disease progression and the available structural information, numerous screens for ACC antagonists have been performed, and several promising lead compounds have been confirmed for further validation (65,83,89-94). ACC inhibitors mainly target the BC and CT domains. The BC domain accounts for biotin carboxylation and the formation of the ACC homodimer. The main mechanism of action (MOA) of BC domain-targeting inhibitors is allosteric inhibition of BC domain dimerization, maintaining ACC1 molecules as inactive monomers that cannot perform the catalytic activity (21). Soraphen A, AMPK activators, and the ND-series inhibitors of ACC1 (ND-630, ND-646, ND-654) belong to this category (73,91,95-98). These inhibitors can effectively inhibit ACC activity and affect lipid metabolism and disease development (10,26,28,29,90,99,100). It is worth noting that studies have already confirmed that inhibiting hepatic ACCs with ND-630 (GS-0976) significantly reduces liver fat by 29% and decreases hepatic steatosis and markers of liver injury in NAFLD patients (101,102), further encouraging the search for ACC inhibitors for therapeutic purposes.

The CT domain catalyzes the transfer of the carboxyl moiety from carboxylated biotin to acetyl-CoA to produce malonyl-CoA. Competitive inhibitors that target acetyl-CoA binding by the CT domain are therefore another promising strategy for inhibiting ACCs. TOFA, CP-640186, piperidinyl-derived analogs, and spiropiperidine-derived compounds are antagonists of this category (103-108). These antagonists can reduce appetite and accelerate weight loss in mice (93,109) and induce apoptosis in different cancer cell lines (92,110,111). Although no relevant clinical trials of this class of antagonists have been found, screening for new lead compounds continues.

In conclusion, a number of commercially available ACC inhibitors have exhibited strong therapeutic effects in disease models in vivo and in vitro, supporting ACCs as promising therapeutic targets for the treatment of tumours and metabolic diseases. However, no inhibitor can specifically inhibit one ACC member while leaving the other intact. This might lead to adverse effects, because ACC1 and ACC2 do differ in their physiology and pathophysiology. To this end, the development of inhibitors specifically directed against ACC1 or ACC2 might be a promising strategy for targeting ACCs in disease treatment.

LIMITATIONS AND PROSPECTS

Antagonists that target ACCs have been intensively studied in the clinic but are hampered by several side effects. For example, inhibiting lipogenesis by suppressing the expression of ACCs can reduce hepatic steatosis, but it simultaneously results in hypertriglyceridemia owing to the activation of SREBP-1c and increased VLDL secretion (40).
Another instance is PF-05175157, the first ACC inhibitor to enter human clinical trials, which reduces DNL in the treatment of T2DM but with concomitant reductions in platelet count (112). Recently, an exciting phase II clinical trial result showed that the co-administration of PF-05221304 (a new ACC1 inhibitor in clinical trials) and PF-06865571 (a DGAT2 inhibitor) has a strong effect in treating NASH without the side effect of hypertriglyceridemia (113). However, several challenges remain in addressing the side effects of ACC inhibition in clinical practice. The principal challenge is that the inhibitors can hardly distinguish ACC1 from ACC2. As described above, ACC1 and ACC2 share 75% identity in amino acid sequence and have similar structures composed of conserved catalytic domains. ACC antagonists such as Soraphen A and TOFA therefore target and influence the activity of both ACC1 and ACC2, which might lead to side effects caused by inhibition of the unwanted ACC isoform. Under nutrient-abundant conditions, fatty acid synthesis and breakdown are coordinately controlled, avoiding a wasteful metabolic cycle; in cancer cells, however, both fatty acid synthesis and breakdown are boosted to support cancer growth. To this end, coordinately antagonizing the dysregulation of ACC1 and ACC2 in cancer cells would be a promising strategy for cancer treatment. So far, no ACC inhibitor has been approved for clinical use. This might be because ACC inhibitors that are not isoform-specific only partially reverse the metabolic preferences of cancer cells. Moreover, it has been shown that selective inhibition of ACC2 may be ineffective in treating some metabolic diseases (114). A selective ACC1 inhibitor showing anti-NAFLD/NASH effects in pre-clinical models was reported in a recent study (115) and is expected to strengthen efficacy.

Accumulating studies indicate the importance of ACCs in tumour cell growth, showing the great potential of ACCs in cancer treatment. However, the roles of ACCs in cancer have largely been attributed to their functions in fatty acid synthesis, the exact mechanisms of which remain to be investigated. The role of fatty acid metabolism in cancer biology is not fully understood (116). More in-depth research on fatty acid metabolism in cancer will help examine and detail the roles of ACCs in cancer initiation, progression, and development.

AUTHOR CONTRIBUTIONS

YuW: drafted the manuscript and iconography. WY, SL, DG, and JH: proofread the manuscript and iconography. YugW: conceptualized, supervised, and finalized the manuscript. All authors contributed to the article and approved the submitted version.
Nutritional aspects in chronic non-cancer pain: A systematic review

Objectives: Chronic pain (CP) is an unpleasant emotional and sensory experience that can be accompanied by tissue damage and that persists for more than 3 months. Recent studies show that certain nutritional strategies can help to improve pain, so this study aimed to systematically review the scientific evidence in order to understand and map the effect of nutritional strategies on the presence or intensity of chronic non-cancer pain (CNCP) and the association of these nutritional aspects with the presence or intensity of CNCP.

Study design: A systematic review.

Methods: Two independent researchers searched for randomized clinical trials (RCTs) and observational studies that explored the relationship between nutrition and CNCP in adults from 2010 to 2020 in the PubMed, Web of Science, Scopus, and Cochrane Library databases. A total of 24 studies were included, of which 20 were RCTs and 4 were observational studies. They are classified into the administration of nutritional supplements, dietary modification, and incorporation of food.

Results: Among these studies, those with a significant effect on pain involve dietary modification and the use of nutritional supplements. On the other hand, the main results from the few observational studies included in this review point to an association between less pain and a ketogenic or hypocaloric diet or adherence to the Mediterranean diet.

Conclusion: Dietary modification seems to be a plausible therapeutic option for improving and relieving CNCP. However, more research is needed in this regard to reach firmer conclusions.

Systematic Review Registration: [www.crd.york.ac.uk/prospero], identifier [CRD42021226431].

Introduction

According to the International Association for the Study of Pain, chronic pain (CP) is defined as an unpleasant emotional and sensory experience that may or may not be accompanied by tissue damage and that persists for more than 3 months (1,2). When pain is not a consequence of an oncological process, it is called chronic non-cancer pain (CNCP) (3). It is estimated that one in five people in the world suffers from CP and that one in three cannot maintain an independent lifestyle because of pain (4). CP affects the performance of daily activities and the practice of physical exercise (3), impairs sleep quality (5), and makes participation in social activities difficult (6), with significant social and health costs (1,7). The main intervention for CP relief is the use of analgesic drugs, which give rise to numerous adverse effects (7). Nevertheless, there are currently other approaches to pain, such as psychosocial strategies, physical activity interventions (2), and nutritional care (8), which seem to show positive results in pain relief. Recent studies show the potential of nutritional strategies to decrease pain sensation or reduce the risk of suffering from CP, since they are cheaper than analgesic drugs and less likely to produce adverse effects. That is why some researchers have tried to shed light on the role of nutritional elements in CP. Thus, our objective was to systematically review the scientific evidence from clinical and observational studies in order to understand and map the effect of the use of different nutrients, foods, or food supplements on the presence or intensity of CNCP, and the association of these nutritional aspects with the presence or intensity of CNCP.
Materials and methods

Search strategy and data sources

Between March and April 2020, a search was carried out for documents published in the last 10 years in the PubMed, Web of Science, Scopus, and Cochrane Library databases. The search equation was as follows: (diet OR antioxidants OR micronutrients OR nutrition OR "integrative pain medicine" OR healing OR eating OR "nutritional status" OR "anti-inflammatory diet" OR food OR eating OR appetite OR "food habits" OR "food preferences" OR nutrient OR "diet therapy") AND ("chronic pain" OR "persistent pain" OR "long term pain" OR pain OR "back pain" OR neuralgia OR "trigeminal neuralgia" OR hyperalgesia OR fibromyalgia OR "phantom limb" OR "complex regional pain syndromes" OR "nociceptive pain" OR headache OR endometriosis OR migraine OR arthritis) NOT (cancer OR tumor OR oncolog*). One way to script the PubMed arm of this query is sketched at the end of this section.

Inclusion criteria

The selected documents were (1) original articles or systematic reviews that explored the relationship between nutrition and CNCP; (2) published between 2010 and 2020; (3) written in English or Spanish; (4) of experimental (randomized clinical trial; RCT) or observational epidemiological design; (5) conducted in adults (over 18 years old), men and/or women; (6) with the full text available; and (7) of sufficient methodological quality. Specifically, only observational studies of high or acceptable methodological quality according to the Scottish Intercollegiate Guidelines Network (SIGN) tool (9) and experimental studies with a score greater than 3 on the Jadad scale (10) were included in the present review.

Exclusion criteria

The exclusion criteria were (1) documents that studied pharmacological or surgical treatments with no nutritional approach to CNCP, (2) acute pain, and (3), as this systematic review focused only on nutritional interventions, pain derived from surgical interventions or oncological processes.

The search and screening of documents were carried out independently by two researchers, and discrepancies regarding the selected documents were resolved by consensus. The review was registered in the International Prospective Register of Systematic Reviews (PROSPERO) under the code CRD42021226431. A data extraction table was created for the documents included in the review (Table 1), with the following items: first author, year, type of pain, objectives, method, sample, duration, measuring instruments, intervention design, and results.

Study characteristics

A total of 17,295 documents were found. Of these, 64 articles were selected for full-text reading, and 24 documents were finally included. Figure 1 (Preferred Reporting Items for Systematic Reviews, PRISMA, flow diagram) summarizes the selection process of the studies included in this review. Regarding the epidemiological design of the included studies, 20 were experimental (RCTs), 2 were prospective cohort observational studies, 1 was a retrospective cohort study, and 1 was a case-control study. The most common etiology of pain in the studies was osteoarthritis (n = 10), followed by rheumatoid arthritis (n = 7). Table 1 describes the main characteristics of the studies included in this systematic review.
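As a practical note, the PubMed arm of the search above can be scripted. The sketch below uses Biopython's Entrez wrapper with an abbreviated version of the query; this illustrates how such a search could be reproduced, not the tooling the authors report using, and the other three databases (Web of Science, Scopus, Cochrane Library) require their own interfaces.

```python
# Sketch of reproducing the PubMed arm of the search with Biopython's
# Entrez module. The query is abbreviated from the full equation above,
# and the date window mirrors the 2010-2020 inclusion criterion.

from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address

query = (
    '(diet OR nutrition OR "nutritional status" OR "diet therapy") '
    'AND ("chronic pain" OR fibromyalgia OR "back pain" OR arthritis) '
    'NOT (cancer OR tumor)'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",   # filter on publication date
    mindate="2010",
    maxdate="2020",
    retmax=20,         # first 20 PMIDs; "Count" gives the total
)
record = Entrez.read(handle)
handle.close()

print(f'{record["Count"]} records found; first PMIDs: {record["IdList"]}')
```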
Main results from the experimental studies

The nutritional interventions evaluated for CNCP in the included studies were the administration of nutritional supplements, dietary modification, and the incorporation of food.

Administration of nutritional supplements

Regarding the studies on pain caused by chronic pancreatitis, Abbasnezhad et al. (11) reported a significant improvement in pain over 6 months with the administration of 50,000 IU of vitamin D (p < 0.007). In addition, Singh et al. reported a significant improvement in scores on the Roland Morris and Oswestry Disability Scales after 28 days of intervention with the combined administration of theramine (710 mg/day) and ibuprofen (p < 0.05). Concerning patients with CP due to rheumatoid arthritis, Ghavipour et al. (15) supplemented the diet of the participants with two daily capsules of POMx (250 mg/day with a concentration of 40% ellagic acid) for 8 weeks and observed a significant reduction in the perception of rheumatoid arthritis pain measured with the Disease Activity Score-28 (DAS28; p < 0.001) and a decrease in the number of tender joints (p = 0.001), which also reduced pain intensity (p = 0.003). Helli et al. (16) observed that when 200 mg/day of sesamin was administered for 6 weeks, the number of tender joints and the intensity of pain, evaluated with DAS28 and the VAS, were significantly reduced (p < 0.05 for both). In the case of pain caused by osteoarthritis, Fukumitsu et al. (17) performed an intervention with maslinic acid at a dose of 50 mg/day for 12 weeks and found no significant difference in pain intensity measured with the VAS when compared with the placebo group. However, Malek et al. (18), after using L-carnitine at a dose of 750 mg/day for 8 weeks, did find significantly lower pain intensity levels assessed by the VAS in the intervention group compared with the control group (p = 0.019). Analogously, Rondanelli et al. (19) found that the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) score in the group that had consumed chondroitin sulfate for 12 weeks at a dose of 600 mg/day had decreased significantly, by 8.70 points, compared with the placebo group (p = 0.001). On the other hand, Cordero et al. (20) evaluated pain in patients with fibromyalgia using the Fibromyalgia Impact Questionnaire (FIQ); after the administration of 300 mg/day of CoQ10 for 70 days, they found a significant reduction in pain intensity (p < 0.01) and a significantly lower number of tender joints (p < 0.01) in comparison with the placebo. Furthermore, Sawaddiruk et al. (21) studied the effect of CoQ10 at a dose of 300 mg/day for 40 days in fibromyalgia and observed that the VAS and FIQ values decreased significantly in the CoQ10 group compared with the placebo (p < 0.05). Regarding dysmenorrhea, Santanam et al. (22) found a significant decrease in the number of painful days of the menstrual cycle in the group of participants who had taken 1,200 IU of vitamin E and 1,000 mg of vitamin C for 8 weeks (p < 0.05); after the antioxidant intervention, chronic pelvic pain had decreased in 43% of the patients and dysmenorrhea in 37%. For their part, Zamani et al. (23) carried out a clinical trial administering synbiotic supplements (Lactobacillus acidophilus, Lactobacillus casei, Bifidobacterium bifidum, and 800 mg of inulin) for 8 weeks to patients with rheumatoid arthritis; a significant improvement was observed in the scores measured with the DAS28 and VAS scales in this group (p = 0.004 and p < 0.001, respectively).
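Several of the rheumatoid arthritis trials above report pain through the DAS28 composite. For orientation, the sketch below implements the widely published DAS28-ESR formula; the formula comes from the rheumatology literature rather than from the reviewed papers themselves, and the example values are invented.

```python
# DAS28-ESR composite as widely published in the rheumatology literature:
#   DAS28 = 0.56*sqrt(TJC28) + 0.28*sqrt(SJC28) + 0.70*ln(ESR) + 0.014*GH
# TJC28/SJC28 = tender/swollen joint counts out of 28 joints,
# ESR in mm/h, GH = patient global health on a 0-100 VAS.

import math

def das28_esr(tjc28: int, sjc28: int, esr: float, gh: float) -> float:
    return (0.56 * math.sqrt(tjc28)
            + 0.28 * math.sqrt(sjc28)
            + 0.70 * math.log(esr)
            + 0.014 * gh)

# Invented example: 6 tender and 4 swollen joints, ESR 28 mm/h, GH 50
# gives roughly 4.96, i.e. moderate disease activity (>5.1 is high,
# <2.6 is commonly read as remission).
print(round(das28_esr(6, 4, 28.0, 50.0), 2))
```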
Dietary modification

In other studies, the dietary intervention was accompanied by physical exercise. Messier et al. (24) carried out an intervention with a hypocaloric diet (low in fat and high in vegetables) combined with 1 h of physical training per day, 3 days a week, alternating aerobic and strength exercises, in patients with osteoarthritis. The results showed that knee compressive force decreased by 5% in the group that only did physical exercise (E), 10% in the group where only the diet was modified (D), and 9% in the diet-plus-exercise group (D + E) at 18 months. However, a greater decrease in pain, according to WOMAC, was found in the D + E group at 18 months when compared with E (p = 0.004) and D (p = 0.001).

Incorporation of food

The addition of foods such as mussels, chamomile tea, blueberry or cherry juice, green tea, and strawberries has been studied to evaluate the reduction of osteoarticular pain. With respect to rheumatoid arthritis, Lindqvist et al. (25) observed that the group that consumed 75 g/day of mussels showed a lower intensity of perceived pain measured with DAS28 (p = 0.017); however, this difference was not observed when compared with the group that consumed meat (p = 0.200). Likewise, no statistically significant difference was obtained for the number of tender joints or for pain intensity assessed with the VAS when comparing the intervention group with the control group (p = 0.48). For their part, Pirouzpanah et al. (26) analyzed the effect of chamomile tea (6 g/day) on rheumatoid arthritis: the number of tender joints decreased significantly (p < 0.001), although this change was not observed in the DAS28 score, the number of swollen joints, or the perception of pain. Regarding the consumption of blueberry juice (500 ml/day) for 90 days, Thimóteo et al. (27) reported a significant reduction (p = 0.048) in the perception of pain, measured with the DAS28 instrument, when compared with the control group. For osteoarthritis, Schumacher et al. (28) observed a significant improvement in the WOMAC score at 13 weeks in the group that consumed 470 ml/day of cherry juice (p = 0.002) when compared with the placebo group. The same relationship was observed by Hashempur et al. (29) in all the variables analyzed (knee pain, functional capacity, and joint stiffness), including the VAS score (p = 0.038), in the group that consumed green tea (1,500 ml/day) for 30 days; in the control group, pain intensity decreased significantly only when measured with WOMAC, not with the VAS. Regarding knee pain, Schell et al. (30) described that in the group that consumed 50 g/day of strawberries, pain intensity was significantly lower at 12 weeks (p < 0.05) for both constant and intermittent pain, measured with the Intermittent and Constant Osteoarthritis Pain (ICOAP) scale, although there were no differences in the pain VAS at the end of the 26 weeks of intervention.

Main results of observational studies

Nutritional aspects, such as the type of diet or certain supplements, have been evaluated in observational studies for their plausible relation to pain. Concerning diet modification studies, Di Lorenzo et al. (31) observed that the number of days with headache decreased in both groups, which followed a hypocaloric or a ketogenic diet (p < 0.0001). However, this improvement occurred earlier in the hypocaloric diet group, from the second month, whereas in the ketogenic diet group it occurred from the sixth month.
On the other hand, other clinical variables, such as the frequency of headache attacks and the consumption of drugs for headache, decreased equally in the two groups from the sixth month (p < 0.0001). Furthermore, Veronese et al. (32) found that patients with greater adherence to the Mediterranean diet had better WOMAC scores (p < 0.0001) and less general pain as evaluated by WOMAC (p < 0.05).

Regarding the observational studies on nutritional supplements, Shmagel et al. (33) focused on knee pain in patients with osteoarthritis and observed that a lower dietary intake of magnesium was associated with worse WOMAC and KOOS scores than a higher magnesium intake (p < 0.001). Likewise, they found a relationship between low dietary magnesium intake and greater intensity of knee pain due to osteoarthritis at 48 months of follow-up. However, Lourdudoss et al. (34) found no statistically significant association between dietary omega-3 fatty acid consumption and pain due to rheumatoid arthritis, nor between supplementation with omega-3 and omega-6, or the omega-6:omega-3 ratio, and DAS28 scores.

Discussion

The aim of this study was to review the scientific literature on the impact of nutritional strategies in people with CNCP. We found that most of the interventions with nutritional supplements included in our study show improvement and relief of CP (11,13,20,21). This is also the case when the diet is modified towards a hypocaloric, Mediterranean, or generally healthier profile (24,31,32). However, the use of stand-alone foods, such as fruit juices, yields few encouraging results (26,30). We found few studies whose intervention was the modification of the diet; it was easier to find studies whose intervention used a capsule or pill. This could be due to the easier applicability of the latter, whereas modifying the diet requires more effort from both patients and researchers. That is why we consider nutritional education especially relevant for these patients, paying particular attention to the main difficulties they may face, such as lack of knowledge, lack of interest, or rigidity in the face of change (35).

The use of nutritional interventions to relieve pain in clinical practice has numerous benefits, such as fewer adverse effects than drugs, lower cost, and increased patient autonomy (7,8,36). We observe that the intervention offering the best results is diet modification, as also confirmed by Brain et al. (8), Clinton et al. (37), and Kaartinen et al. (38). However, this modification has to be easy to follow, sustainable, and adapted to the patient to obtain the best results (35). Brain et al. (8) included four types of interventions in their review: dietary modifications, nutrient intake modifications, the use of nutritional supplements, and fasting. Comparing our systematic review with that of Brain et al. (8), their team did not include observational studies or interventions that added a specific food; in addition, they included non-RCTs, which may introduce some bias. On the other hand, Ahmed Ali et al. (39) conducted a systematic review focused specifically on clinical trials in chronic pancreatitis, whereas our team addressed a broader field.
The main limitation that we found in our study was that there are still few studies on the relationship between nutrition and pain, perhaps because it is a new topic (36). Comparison of the 24 documents included in this review revealed considerable heterogeneity between them, particularly affecting the methodology and the design of the interventions. It is for this reason that we could not perform a meta-analysis. An effort is needed to carry out future research on this topic using validated instruments to assess non-cancer CP and the nutritional variables, with thoroughly described, homogeneous interventions on large and well-characterized patient samples.

Conclusion

The results obtained show that there are nutritional interventions, especially diet modification, that can improve and alleviate CNCP. Furthermore, there is a need for future research to study CP as an independent entity and not as a symptom of the disease. If the evidence is strong, interventions could be applied in a clinical setting to improve the quality of life of patients suffering from this problem.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author(s).
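As a side note on the meta-analysis point raised in the limitations above: the usual first step in deciding whether pooling is defensible is to quantify between-study heterogeneity with Cochran's Q and the derived I² statistic. The sketch below is purely illustrative; the effect sizes and standard errors are hypothetical placeholders, not values extracted from the reviewed trials.

```python
# Minimal sketch: Cochran's Q and I^2 as a heterogeneity screen before pooling.
# All effect sizes (e.g., standardized mean differences in pain scores) and
# standard errors below are hypothetical placeholders, not data from this review.

effects = [-0.42, -0.10, -0.55, 0.05]   # hypothetical per-study effect sizes
ses = [0.15, 0.20, 0.18, 0.25]          # hypothetical standard errors

weights = [1 / se**2 for se in ses]     # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate
q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: proportion of total variation attributable to between-study heterogeneity
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled effect = {pooled:.3f}, Q = {q:.2f} (df = {df}), I^2 = {i_squared:.1f}%")
# By convention, I^2 above roughly 75% is often read as considerable
# heterogeneity, arguing against a single pooled meta-analytic estimate.
```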
Ectopic forms of schistosomiasis mansoni in the second macroregion of Alagoas: case series report and review of the literature

Introduction: Ectopic forms of schistosomiasis are those in which the parasitic element is localized outside the portal system, the natural habitat of the helminth. Although the prevalence rates of schistosomiasis are high in Brazil, clinical and epidemiological data on ectopic forms of the disease are still scarce. Methods: Cross-sectional, retrospective and descriptive epidemiological study in which cases with a confirmed histopathological diagnosis of an ectopic form of schistosomiasis were analyzed. The cases were selected from a database of the anatomic pathology files of a referral center. Results: Of the 21 cases identified, seven affected the female genital tract and five the male genital tract; four cases were identified in the peritoneum; two cases involved lymph nodes and two involved adipose tissue; and renal involvement was detected in one case. Conclusions: The lack of knowledge of the clinical presentation of ectopic forms of schistosomiasis makes the early identification and treatment of this form difficult, with direct implications in the reduction of morbidity and mortality in endemic areas.

INTRODUCTION

Schistosomiasis is an infectious granulomatous disease caused by helminths of the genus Schistosoma, with Schistosoma mansoni being the causative agent in Brazil. Originally described in 1851 in Egypt by Theodor Bilharz, schistosomiasis spread from Africa to other continents following migratory flows 1. Today, according to the World Health Organization (WHO), schistosomiasis ranks second in the world in importance and socioeconomic repercussion, after malaria. It is estimated that schistosomiasis affects 200 million people and poses a threat to more than 600 million individuals living in areas with high endemicity 2.

Endemic schistosomiasis mansoni occurs in 54 countries located mainly in Africa, the Eastern Mediterranean and the Americas. In South America, it occurs mainly in the Caribbean region, Venezuela and Brazil 2,3.

In Brazil, the disease occurs in 19 states, covering an endemic area ranging from Maranhão to Espírito Santo and Minas Gerais, with local outbreaks in other states 4. However, the magnitude of infection with Schistosoma spp. is heterogeneous. Precarious or nonexistent conditions of basic sanitation, poverty and low levels of schooling characterize the most affected areas 5. The prevalence of schistosomiasis is particularly higher in the Northeastern States and Minas Gerais 6,7.

The extensive geographic distribution of schistosomiasis mansoni in Brazil, by itself, provides a dimension of the magnitude of this problem for public health 1. It is estimated that 2.5 to 8 million Brazilians have the disease. However, knowledge about the epidemiological behavior of this parasitosis is limited, with divergent estimates of its prevalence in the country 4.

Poor socioeconomic conditions, difficulties in accessing health services, migratory movements and poor conditions of water and sewage treatment are among the main factors responsible for the transmission of schistosomiasis in endemic areas. In Northeast Brazil, the State of Alagoas has the highest prevalence 6.
According to data from the National Health Foundation (FUNASA), through the Schistosomiasis Control Program in Alagoas, there was a resurgence of the endemic disease 7, aggravated by socioeconomic conditions unfavorable to disease control 8. Sixty-nine percent of the municipalities within the state are high-endemic areas for schistosomiasis, with more than two and a half million people living at risk of the disease 8. Among 18 states in the Northeast and Southeast, the highest positivity rates of parasitological tests for schistosomiasis were found in Alagoas 9.

Currently, 72 municipalities in Alagoas are concentrated in the so-called risk zone for schistosomiasis. In addition, it is possible that an endemic expansion is occurring in the Alagoas municipalities located near the Mundaú and Paraíba river basins. The state is rich in water resources, and the main focuses of the disease are concentrated in the basins of these rivers 7,9.

The ectopic forms of schistosomiasis are those in which the parasitic elements (eggs or adult worms) are located outside the portal system, the natural habitat of the helminth 10. Rarely does schistosomiasis occur in only one isolated organ. In most cases there are lesions in other sites besides the liver and the intestine. Lesions outside these locations are seen in up to 20% of patients, isolated or in association with hepatic and intestinal involvement 11. However, ectopic forms often do not raise clinical suspicion and usually represent findings in biopsies or necropsies 10,11.

Classically, the histopathological diagnosis of S. mansoni infection is based on the presence of granulomas containing eggs or egg fragments of S. mansoni in the central portion, peripherally surrounded by a variable number of multinucleated giant cells, histiocytes, lymphocytes, and eosinophils 11.

Considering the large number of factors involved in the endemic form of schistosomiasis mansoni in Brazil and the paucity of clinical and epidemiological data on the ectopic forms of the disease 8, this study aimed to investigate this form of presentation of schistosomiasis in a particular geographic region in the State of Alagoas.

Ethical considerations

This cross-sectional, retrospective and descriptive epidemiological study was approved by the Research Ethics Committee of the Federal University of Alagoas (CAAE: 1.568.537).

The sample of the present study was selected by an active search in the database of the Center of Cancer Prevention and Diagnosis [Núcleo de Prevenção e Diagnóstico do Câncer (NPDC)], Arapiraca, Alagoas, in order to identify all anatomopathological examinations diagnosed with ectopic schistosomiasis from January 2000 to December 2015.
The state of Alagoas has two macroregions established by the Unified Health System (SUS), and the second health macroregion covers four regions (from the 7th to the 10th), which correspond to the sertão and agreste, characterized by a semiarid and harsh climate. These regions are identified as a continuous geographical space made up of groups of neighboring municipalities, delimited by the existence of cultural, economic and social identities, as well as in the areas of communication, infrastructure, transportation and health services. The second macroregion has 1,026,693 inhabitants and its main reference is the City of Arapiraca 12. NPDC is the only laboratory of anatomic pathology that serves this geographical region, and because it is the only regional destination of human specimens for anatomopathological analysis, its collection is representative of this entire region of the state.

For the purposes of inclusion in this study, we considered lesions diagnosed in samples from biopsies or in surgical specimens from organs or tissues located outside the portal system as ectopic forms of schistosomiasis. After identifying ectopic forms of the disease, the histopathological diagnoses were confirmed by two experienced pathologists who were unaware of the initial diagnoses of the lesions. Cases with and without clinical suspicion of ectopic schistosomiasis were included in the series. Patient information (age, gender, ethnicity, municipality of residence, and association of schistosomiasis with other clinical conditions) and the characteristics of the lesion (location, macroscopic description of the surgical specimen and histopathological finding) were recorded in a standardized form. Patients from geographic regions outside the perimeter of the second health macroregion of Alagoas and cases diagnosed outside the period delimited in the study were not included in the series.

The results were expressed as numbers and analyzed by the frequency distribution of the ectopic forms in relation to the demographic and clinical variables of the patients.

RESULTS

From the survey of the NPDC database, 159,474 histopathological reports were issued between January 2000 and December 2015, of which 174 cases with confirmed diagnoses of schistosomiasis were identified. From this number, five cases were excluded, corresponding to patients from a geographical region outside the perimeter of the second health macroregion of Alagoas. Of the 169 cases identified in the geographic region of interest of this study, 148 were diagnosed in an organ with venous drainage within the portal system (Figure 1).

Ectopic forms of the disease were diagnosed in 21 samples from 20 patients (two samples came from one patient), corresponding to 12.4% of cases diagnosed as schistosomiasis in the geographic region of the study. The diagnoses were established in biopsies performed due to several clinical indications, and there was no suspicion of schistosomiasis in any of the cases. Thus, in the entire sample, the diagnosis of the ectopic form was a histopathological finding. The demographic and clinical characteristics of the patients are described in Table 1.
Of the twenty patients, 11 were men; ages ranged from 24 to 103 years, and the most common ectopic form of schistosomiasis was that of the female genital tract (7/21). Of the seven cases diagnosed in this region, three were in the uterus, with two localized in the uterine cervix (one associated with infiltrating carcinoma and one associated with chronic cervicitis, Figure 2A) and one in the uterine body associated with leiomyoma. In the four cases diagnosed in the ovary, we observed association with oophoritis in one case, cystic teratoma in one case, endometriosis in one case, and follicular cyst in one case. Involvement of the male genital tract was observed in five cases: in three cases the lesions were located in the testis (in two patients orchiectomy was performed due to adenocarcinoma of the prostate, and in one patient there was association with chronic orchitis); in one case the lesion was localized in the penis in association with squamous cell carcinoma (Figure 2B); and in one it was localized in the epididymis and was associated with epididymitis.

In the abdominal cavity, the ectopic form was diagnosed in three cases in the visceral peritoneum (colonic serosa, including one case of pseudotumoral form); in the omentum, it was diagnosed in a patient who underwent radical gastrectomy due to adenocarcinoma, also presenting as a pseudotumoral lesion (Figure 2C). The ectopic form diagnosed in the adipose tissue of two patients corresponded to the fat surrounding lymph nodes isolated from the perigastric and pelvic regions, removed during surgical procedures for the treatment of gastric adenocarcinoma and carcinoma in situ of the uterine cervix, respectively.

Lymph node involvement was seen in two cases: one lymph node was removed during a surgical procedure for the treatment of gastric adenocarcinoma (Figure 2D), and the other was removed during a procedure for surgical treatment of traumatic rupture of the rectosigmoid colon. In the patient diagnosed with cancer, there was no evidence of metastatic infiltration of the lymph node by the neoplasia. Renal involvement was seen in one case in association with primary renal carcinoma (renal cell carcinoma, clear cell type).

DISCUSSION

Female genital schistosomiasis is common in endemic areas, but there are few reports in the Brazilian and global medical literature 13. Recently, the WHO has included female genital schistosomiasis in the category of gender-specific diseases that deserve priority research 13. It is estimated that 6-27% of girls and women with intestinal schistosomiasis suffer from pathological conditions induced by eggs entrapped in parts of the genital tract 13,14.

The mechanism by which S. mansoni eggs reach the female genital tract is not fully understood 15. However, it is believed that the anastomoses between the hemorrhoidal veins and the vessels that drain the internal genitalia and vulva allow the migration of worms and egg embolization into the capillary bed, where they induce the formation of granulomas 14-16.

Data on the topographic distribution of genital schistosomiasis in Brazil and in other countries show the involvement of the following organs, in descending order of importance: ovaries, cervix, uterus, uterine tubes, vulva, and vagina 13. In our series, female genital involvement showed a distribution similar to that described in the literature, with the ovary being the main organ affected (4/7), followed by the cervix (2/7) and the uterine body (1/7).
Schistosoma mansoni eggs, when transported to the genital and reproductive tract, may give rise to hypertrophic or ulcerative lesions, causing female infertility and an increased risk of ectopic pregnancy. Lesions may also represent a risk factor for infections due to sexually transmitted diseases, especially human immunodeficiency virus and human papilloma virus 13,15,17. In the ectopic form of schistosomiasis involving the uterine cervix, postcoital, intermenstrual and post-menstrual bleeding, vaginal discharge and dysmenorrhea can occur; these signs are not direct consequences of schistosomiasis but result from its association with cervicitis (Figure 2A) 18. However, the involvement of the female genital tract is usually asymptomatic and the eggs are an occasional finding evidenced in histological examinations performed for different purposes 18.

Despite the scarcity of reports on the presence of S. mansoni in the male genital tract and the sexual and reproductive impact it can cause in men, ectopic forms have been described in the penis, scrotum, testis, epididymis, spermatic funiculus, seminal vesicle, and prostate. There are also reports of the presence of eggs in human semen 19.

Although schistosomiasis is endemic in many countries, S. mansoni infection with testicular involvement is extremely rare 20. Its occurrence is attributed to the migration of eggs through the venous channels between the mesenteric veins and the internal spermatic vein, inducing an immune response and the formation of granulomas associated with fibrotic changes 20,21.

In an autopsy study performed in an endemic area of Brazil, testicular involvement was observed in 3.2% of cases of schistosomiasis 22. However, in this location, schistosomiasis may be clinically characterized by unilateral testicular edema, gonadal atrophy, libido alterations, erectile dysfunction, infertility, and testicular nodulations, mimicking a neoplastic lesion that may in certain circumstances lead to unnecessary orchiectomy 20,21,23,24.

In endemic areas it is fundamental to maintain a high degree of suspicion of testicular schistosomiasis as a differential diagnosis of testicular tumors in boys or adult males with a testicular mass 20,25. The indication of radical orchiectomy should be carefully reviewed to avoid unnecessary invasive surgeries, especially in patients of reproductive age. In this context, partial orchiectomy with intraoperative frozen section examination may be a safe and effective approach for the diagnosis and treatment of these benign lesions 20.

There is only one case described in the literature of ectopic presentation of S. mansoni eggs in the epididymis 26. However, schistosomiasis of the epididymis due to S. haematobium is not uncommon in areas endemic for the disease and is usually associated with rectal or vesical schistosomiasis. Although there may be scrotal pain, hardening or enlargement of the organ associated with epididymitis 21,26, it is usually asymptomatic even in the advanced form of the disease, which may explain the small number of cases reported in the literature 27.

Penile involvement in S. haematobium infection through sexual transmission is a possibility that was raised a few decades ago 28. The suggested mechanism would be the transference of eggs through the contact of the penis with the affected cervix, or indirectly by the release of eggs through the female urethra. In this report, the authors described that the infection of the penis by S. haematobium
results in fibrosis of the cavernous tissue, organ stiffness, urinary obstruction and extensive tissue destruction, simulating early carcinoma of the penis 28. However, we found no report in the literature associating S. mansoni, the only species found in Brazil, with ulcerative infiltrating squamous cell carcinoma of the penis (Figure 2B).

Further studies on the urogenital system are needed to assess the real significance of male genital lesions, as is a systematic study among men with schistosomiasis mansoni to establish the prevalence of urogenital localization 19.

Schistosomiasis has been associated with some forms of neoplasia 29. The most striking association occurs in areas with a high prevalence of urinary schistosomiasis due to S. haematobium, such as Africa and the Middle East. In these regions there is a predominance of squamous cell carcinoma of the bladder, a relatively rare form of cancer in non-endemic regions 29,30. In China, where colorectal cancer reaches relatively high incidence rates in endemic areas, the association between this type of neoplasia and diffuse colitis due to S. japonicum infection has also been suggested 30,31. Some isolated case reports describe the association between adenocarcinoma of the prostate and prostate infection by S. haematobium and S. mansoni. However, the epidemiological relationship between the two conditions remains limited to the few published cases 30-32.

In our series we observed the association of the ectopic form of schistosomiasis with a benign neoplasia in two cases (uterine leiomyoma and ovarian cystic teratoma). The association with malignant neoplasia was observed in nine cases. In three patients (squamous cell carcinoma of the penis, carcinoma of the uterine cervix and renal cell carcinoma) there was an intimate association between neoplastic cells and S. mansoni eggs. It is possible that local hypervascularization related to tumor growth facilitates the migration of the eggs, resulting in a simultaneous association, without causal relationship, between the two conditions.

The ectopic forms of S. mansoni affecting the peritoneum are usually associated with the hyperplastic manifestations of the disease (pseudotumoral form), which seem to be directly related to the egg of the parasite. In these circumstances the egg acts as an antigenic complex, which would provoke an exaggerated granulomatous inflammatory response in the organism 33,34. As the oviposition of worms occurs in the descending colon, sigmoid colon and rectum, the ectopic lesions are located mainly in relation to this topography 34,35. Our results confirm both the association of the peritoneal lesion with the pseudotumoral form of schistosomiasis (Figure 2C) and its topographic relation to the sigmoid and descending colon.

There is only one report describing the presence of S. mansoni eggs in the omentum of humans 36. However, in experimental studies in mice, after direct intraperitoneal inoculation of the parasite there was evidence of eggs in the omentum and peritoneum 37. These findings suggest that the parasite is able to develop without ingestion of red blood cells (i.e., outside an intravascular localization), which may explain the presence of Schistosoma eggs in the human omentum.

Only two cases of ectopic forms of Schistosoma with lymph node involvement have been reported in the literature published in the English language 38,39. In one report, eggs from S. japonicum
were found in the mesenteric lymph nodes associated with carcinoma of the sigmoid colon 38. In the other, which is the first report of lymph node involvement by Schistosoma eggs, there was an association between the granulomas and pericolic lymphadenitis 39. Until then, it was believed that lymph nodes were resistant to such involvement because the small diameter of the vessels in relation to the size of the eggs, and the spicules of the eggs, make vascular migration difficult, or because of an indirect suppressive mechanism of the immune system that would inhibit granuloma formation in the lymph nodes 39. In our study we observed lymph node involvement by S. mansoni in a patient with a malignant neoplasm of the colon. Despite the topography of the lymph node in the territory of organ drainage, the lesion in the lymph node was represented only by the granulomatous component, without neoplastic involvement (Figure 2D).

Only one case of ectopic presentation in adipose tissue is reported in the literature 40. In this report, the ectopic form was described in a lipoma located in the subcutaneous tissue, distant from the life cycle of the worm and without connection to the portal vein or any other vascular system that could be used for its migration 40. The two cases reported in our study, on the contrary, were described in the perigastric and pelvic adipose tissue and could be explained by the migration of worms and/or egg embolization through vascular anastomoses.

The prevalence of renal involvement in schistosomiasis is variable and is related to the profile of the study population 41. The pathophysiology of renal injury in schistosomiasis has features common to other parasitic diseases, such as malaria. Kidney damage is mainly caused by glomerulopathy mediated by the deposition of immune complexes formed by circulating Schistosoma antigens and antibodies against the parasite 41.

Considering all forms of the disease, the incidence of glomerular involvement is estimated at 5-6% and increases to 15% in the hepatosplenic form of the disease 41,42. Despite the established association between schistosomiasis and glomerulopathy by immune complexes, the kidneys are rare sites of granuloma formation and the contribution of ectopic schistosomiasis to renal damage remains poorly understood 43,44. In our series, the only case of ectopic schistosomiasis in the kidney occurred in association with renal cell carcinoma, with no involvement of the renal parenchyma. The inflammatory process, represented by the presence of granulomas, was identified solely in the neoplastic tissue.

Early identification of the disease and timely treatment of S. mansoni-infected patients are the main measures for reducing disease morbidity and mortality. However, ectopic forms of the disease are usually initially silent. Thus, this study reinforces the epidemiological importance of the diagnosis of the ectopic form of schistosomiasis mansoni and the need for health professionals to be aware of this form of presentation. New cases of ectopic schistosomiasis, especially in endemic areas, should be reported and investigated, as they also function as markers of critical areas that need prioritization and integrated basic care actions.

The remarkable number of cases of ectopic schistosomiasis and the variety of anatomical locations described in our study may be explained by the high endemic rates of schistosomiasis in this geographic region. Considering that the prevalence of S. mansoni
in municipalities of the northeastern states of Brazil is still high, it is believed that low notification rates and the lack of protocols tailored to the investigation of ectopic forms are responsible for the absence of diagnoses of ectopic forms of this parasitic disease 15. Also, it would be advisable to maintain a high degree of diagnostic suspicion to identify ectopic forms of the disease.

TABLE 1: Demographic data of patients with ectopic forms of schistosomiasis.
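To make the frequency-distribution analysis described in the Methods concrete, the sketch below shows how case records like those summarized in Table 1 could be tabulated. The records are hypothetical stand-ins, not the actual NPDC case data; only the tabulation logic is illustrated.

```python
# Minimal sketch: frequency distribution of ectopic sites by demographic
# variables. The records are hypothetical stand-ins, not the NPDC case data.
from collections import Counter

cases = [
    {"site": "female genital tract", "sex": "F", "age": 47},
    {"site": "female genital tract", "sex": "F", "age": 62},
    {"site": "male genital tract", "sex": "M", "age": 58},
    {"site": "peritoneum", "sex": "M", "age": 71},
    {"site": "lymph node", "sex": "F", "age": 43},
]

site_counts = Counter(c["site"] for c in cases)
total = len(cases)

# Frequency distribution of ectopic sites, as absolute counts and percentages
for site, n in site_counts.most_common():
    print(f"{site}: {n}/{total} ({100 * n / total:.1f}%)")

# Cross-tabulation of site by sex, mirroring Table 1's demographic breakdown
by_site_sex = Counter((c["site"], c["sex"]) for c in cases)
print(dict(by_site_sex))
```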
Whole Eye Transplantation: Allograft Survival with Tacrolimus Immunosuppression and Comparison to Syngeneic Transplantation

PURPOSE: Buccal fat pad excision is often offered by many practitioners as a means of obtaining a more aesthetic or congruent midface. This procedure is often performed by non-board-certified physicians and has been documented in the form of countless videos on Instagram and other social media platforms with no long-term patient follow-up. We performed a retrospective analysis of published data regarding buccal fat pad excision and sought to better elucidate pitfalls regarding this underreported plastic surgery procedure in the literature.

METHODS: A literature search was conducted in October 2017 through the PUBMED database for articles regarding the utility of buccal fat pad excision in the setting of aesthetic improvement of the midface. Reference articles were screened manually to obtain relevant studies. A total of 121 citations were identified in the original search, but after eliminating duplicate studies and abstracts and applying predefined inclusion/exclusion criteria, only 11 articles were satisfactory 1-5. None of these articles demonstrated any long-term patient follow-up.

RESULTS: Of the 121 relevant citations identified in our search, only 2 published studies describe a case series of >5 patients regarding cheek or midface sculpting with buccal fat pad excision for aesthetic purposes; the total sample size between these two studies was 53 patients. Neither of the two studies had long-term follow-up regarding patient satisfaction or related outcomes.

CONCLUSIONS/SIGNIFICANCE: Buccal fat pad resection for aesthetic improvement of the midface has traditionally been described, but follow-up regarding loss of subcutaneous fat with aging (cheek hollowing) and late secondary deformities has not been published in the literature. Further research on long-term postoperative patient follow-up, including patient satisfaction rates and the encouragement of reporting postoperative complications, is warranted.

Sunday, September 30, 2018
Affiliation: University of Pittsburgh, Pittsburgh, PA

PURPOSE: Visual impairment and blindness present significant economic, social and personal burdens for millions of patients and caregivers around the world. Whole eye transplantation (WET) is a potential solution. Our lab has established a viable rodent model with promising results in syngeneic transplants. To investigate allotransplantation, successful immunosuppression is necessary. Tacrolimus monotherapy is successful in rodent VCA and has possible neuroprotective effects in the central nervous system and injured optic nerve, but its efficacy in WET is unknown. Here, we present survival of allograft WET treated with Tacrolimus monotherapy.

METHODS: Brown-Norway to Lewis rat transplants were performed (n=6), followed by daily intraperitoneal 1 mg/kg Tacrolimus injection.
Animals were examined at weeks 1, 3, 5, and 6, and compared to syngeneic transplants. Structure and blood flow of the eye and retina were studied using Optical Coherence Tomography (OCT). A retina specialist ophthalmologist performed anterior segment examination, fundoscopy, indirect ophthalmoscopy, and tonometry for intraocular pressures. Animals were sacrificed at 6 weeks. Specimens of the transplanted globe, external ear, eyelid, bone and vessel anastomoses were stained with H&E and interpreted by an ocular pathologist.

RESULTS: Compared to syngeneic transplants, allografts demonstrated comparable corneal thickening, retinal thinning, and blood flow in the central retinal artery and vein (OCT). Intraocular pressures were normal and comparable to syngeneic transplants. On clinical examination, both groups had mild corneal anomalies, but allografts had more frequent fundus and optic nerve ischemia (moderate). Histologically, both groups showed global ocular chronic inflammation and some degree of retinal degeneration, but, in contrast to allografts, syngeneic transplants showed consistent degeneration of the optic nerve.

CONCLUSION: This is the first study of orthotopic allograft eye transplantation and immunosuppression. Compared to syngeneic transplants, allografts had increased ischemia but less optic nerve degeneration, without signs of rejection. The overall preservation of ocular structures is an exciting first step. With this, we can begin to explore innumerable new questions in eye transplantation.
Cx26 keratitis ichthyosis deafness syndrome mutations trigger alternative splicing of Cx26 to prevent expression and cause toxicity in vitro

The Cx26 mRNA has not been reported to undergo alternative splicing. In expressing a series of human keratitis ichthyosis deafness (KID) syndrome mutations of Cx26 (A88V, N14K and A40V), we found the production of a truncated mRNA product. These mutations, although not creating a cryptic splice site, appeared to activate a pre-existing cryptic splice site. The alternative splicing of the mutant Cx26 mRNA could be prevented by mutating the predicted 3′, 5′ splice sites and the branch point. The presence of a C-terminal fluorescent protein tag (mCherry or Clover) was necessary for this alternative splicing to occur. Strangely, Cx26A88V could cause the alternative splicing of co-expressed WT Cx26, suggesting a trans effect. The alternative splicing of Cx26A88V caused cell death, and this could be prevented by the 3′, 5′ and branch point mutations. Expression of the KID syndrome mutants could be rescued by combining them with removal of the 5′ splice site. We used this strategy to enable expression of Cx26A40V-5′ and demonstrate that this KID syndrome mutation removed CO2 sensitivity from the Cx26 hemichannel. This is the fourth KID syndrome mutation found to abolish the CO2 sensitivity of the Cx26 hemichannel, and suggests that the altered CO2 sensitivity could contribute to the pathology of this mutation. Future research on KID syndrome mutations should take care to avoid using a C-terminal tag to track cellular localization and expression or, if this is unavoidable, combine this mutation with removal of the 5′ splice site.

Review form: Reviewer 1

Comments to the Author(s)

Brief overview of manuscript
In the present manuscript the authors have studied several mutations in GJB2 encoding the gap junction protein connexin 26 that are associated with the inflammatory skin dysplasia Keratitis Ichthyosis Deafness syndrome (KID). They suggest that the presence of the mutation evokes an internal splice site within the mRNA that is not normally active and that this is linked with cell death. They further mutate 5' and 3' splice regions and identify a site that rescues the cytotoxicity and aberrant splicing. They also show that one of the mutations alters hemichannel sensitivity in response to CO2. The authors conclude that the aberrant splicing is due to the addition of mCherry as a marker; however, they do not in the discussion consider wider implications. The work is well presented and aspects of the results clearly explained. However, I have several major concerns about the scope of the manuscript, the interpretation of the data in line with accumulated evidence in the field, and recent literature on induction of alternative splicing linked to cell stress responses. These concerns should be reviewed and indeed further experiments warranted to relate the studies to the pathological mechanisms underlying KID syndrome.

Specific comments.

Introduction
The introduction is lacking some important information on the differences between the mutations selected for study in terms of severity of disease. Cx26A88V is established to be associated with a lethal form of KID syndrome along with the mutation Cx26G45E. Other mutations such as N14K have been reported previously and are not discussed. A similar mutation in Cx30, A88V, has also recently been reported by Kelly et al (Journal of Cell Science 2019), where the lethality of this mutation is also clearly presented. Thus some further introduction to the current understanding of the pathological nature of the KID mutations and the A88V locus should be included.

Methods
All the constructs used are fused in frame to mCherry in the expression vector, and others are in untagged vectors. HeLa Ohio cells, which do not endogenously express connexins, were used in all the studies; thus the system is essentially assessing homomeric connexin channel formation. Transfections: cells are transiently transfected with the constructs and expression analysis assessed 48-72 hours post transfection, at which time there is considerable overexpression of the introduced plasmids and build-up of protein in intracellular stores. Cell death was monitored only by trypan blue exclusion assays, while more physiological assays such as an MTT assay, WST-1 assay or apoptosis assay would seem more appropriate.

Statistical analysis is appropriate.

Results/Discussion
1. The time point at which the assays are performed following transient transfection is quite late. In the controls (WT and the A88S nonsyndromic mutation) there is a clear accumulation of protein in intracellular stores by 48 hrs. Was a time course on mRNA extraction performed? Although A88V is not highly expressed, the spatial localisation of N14K and A40V in these cell lines is not reported.
Indeed previous studies have clearly shown that N14K is trapped intracellularly in the ER 24-48 hours post transfection in HeLa cells. Further assays clarifying if the cell death phenotype/ER accumulation is associated with the alternate splicing are required.

2. Figure 2 indicates the identified internal splice sites that are activated. However, the selection of the M151L mutation that rectifies this should be indicated on this figure for clarity. The discussion should consider the implications of this on the overall folding and structure of the protein.

3. The authors clearly identify that the introduction of the M151L mutation, thereby removing the 5' splice site, can rectify splicing. However, there is little comment on the spatial localisation of the proteins. Indeed it looks as if the A88V protein still has trafficking issues, and higher quality images with co-staining with cellular markers (e.g. ER markers) would help clarify this. This could have important implications that are not discussed.

4. The cell death assays clearly show that the removal of the splice sites restores cell viability of A88V. This is not the first time an A88V mutation in a beta connexin has been reported to cause cell death in HeLa cells. A similar mutation in Cx30, associated with Clouston syndrome, was extensively studied in HeLa cells, a cochlear cell line and a mouse model. In these studies, and others, similar effects on cell death were observed with both GFP-tagged and untagged proteins. Further, in the Kelly study, mRNA levels of Cx30 in the mutant mice were significantly reduced, and they suggest that the mutation results in loss of stability of the mRNA. Indeed there are extensive studies by this group and others that are not given recognition in this manuscript for contributions to the molecular mechanisms underlying a range of connexin channelopathies.

5. The CO2 and hemichannel assay for A40V is tagged onto the end when most of the work has focused on the A88V mutations. It would seem more relevant to focus on A88V at this point.

6. Furthermore, there is increasing literature to suggest that a number of mutations in beta-connexins result in altered association with connexin43, with which the beta connexins do not normally oligomerise. It would therefore be worthwhile to take the A88V mutation and study the splicing effect in the presence of other connexins, providing a more integrated approach to the complex nature of connexin tissue expression profiles.

7. Finally, there is accumulating evidence that many of the connexin mutations induce ER stress pathways or mitotic crisis points; recent reports in the literature also suggest that such events can evoke alternative splicing pathways (e.g. Carew et al., IJMS 2019). Therefore further studies should systematically rule out whether the alternative splicing is truly caused by the addition of a tag or is a true reflection of the disease pathway.

Conclusion
While the manuscript has identified a novel pathway evoked in HeLa cells expressing several KID mutations, the study is premature and requires consolidation to consider the merits of the pathway in the context of current literature.

Review form: Reviewer 2
Is the manuscript scientifically sound in its present form? Yes
Is the language acceptable? Yes
Is it clear how to access all supporting data? Yes
Do you have any ethical concerns with this paper? No
Have you any concerns about statistical analyses in this paper? No
Recommendation?
Accept with minor revision (please list in comments)

Comments to the Author(s)
The authors present here an interesting article that details a previously unknown phenomenon of cryptic splice site activation in disease-causing Cx26 mutants in vitro, which can also act in trans to cause the splicing of co-expressed wild-type Cx26. However, this feature is only evident in cells expressing tagged Cx26 and therefore does not occur in vivo, or in cells expressing untagged Cx26/mutants. The likely pathological mechanism in KID syndrome is related to the reduction of hemichannel CO2 sensitivity, which the authors have found across several KID mutants, including now the A40V mutant in this study. The manuscript is well written, and the authors have revealed a detrimental mechanism of tagged Cx26 mutants in vitro, which should be taken into consideration for future research in the connexin field. I only have the following minor corrections and considerations for the authors to address:
1) Insert the word "is" before "unavoidable" on line 29 of the abstract.
2) What's the scale of the Y axis in Fig. 4 - is it percentage? If so, it should be written as such in the axis label.
3) Splice is spelled as "spice" on page 11, line 7.
4) Is the untagged Cx26A88V mutant toxic to cells alone? It would also be interesting to see additional data of the untagged mutant in Figure 4.
5) It's interesting that untagged input and reporter Cx26/mutants do not get spliced, whereas tagged versions do. Therefore, I think it would be useful to see the untagged Cx26A88V and untagged Cx26WT data in Fig. 6 rather than saying "(data not shown)".
6) Although untagged Cx26 mutants lack the ability to undergo aberrant splicing (i.e. it only occurs with tagged proteins in vitro), is there any evidence that the mutations could have an effect on other connexins? We know that Cx30 is often expressed in the same cell types as Cx26, and it's interesting that some mutations in Cx26 are mirrored in Cx30 in terms of their location/substitution (such as A88V) and phenotype (skin disease and/or hearing loss). A recent article by Kelly et al. (J. Cell Sci., 2019) found that the untagged Cx30 A88V mutant killed cells in vitro, but not in vivo, and reduced both mRNA and protein expression of Cx30 in the cochleae of Cx30A88V mutant mice. Have the authors looked at this mechanism with other connexin subtypes? Or could the authors comment on the possible trans effects across connexin subtypes?

Manuscript ID RSOS-190188, entitled "Cx26 KID syndrome mutations trigger alternative splicing of Cx26 to prevent expression and cause toxicity in vitro", which you submitted to Royal Society Open Science, has been reviewed. The comments from reviewers are included at the bottom of this letter.

In view of the criticisms of the reviewers, the manuscript has been rejected in its current form. However, a new manuscript may be submitted which takes into consideration these comments; the Editors have opted to give you a reject/resubmit decision to allow you the fullest opportunity to tackle their concerns. Please note that resubmitting your manuscript does not guarantee eventual acceptance, and that your resubmission will be subject to peer review before a decision is made.

You will be unable to make your revisions on the originally submitted version of your manuscript. Instead, revise your manuscript and upload the files via your author centre. Once you have revised your manuscript, go to https://mc.manuscriptcentral.com/rsos and login to your Author Center. Click on "Manuscripts with Decisions," and then click on "Create a Resubmission" located next to the manuscript number. Then, follow the steps for resubmitting your manuscript.

Your resubmitted manuscript should be submitted by 27-Oct-2019. If you are unable to submit by this date please contact the Editorial Office. We look forward to receiving your resubmission.
Kind regards,
Andrew Dunn
Royal Society Open Science Editorial Office
Royal Society Open Science
openscience@royalsociety.org
on behalf of Dr John Dalton (Associate Editor) and Catrin Pritchard (Subject Editor)
openscience@royalsociety.org

Editor comments:
The reviewers and AE agree that this is a potentially interesting manuscript but have raised a number of major flaws which need to be addressed. These could potentially be addressed in a resubmission, but the manuscript as it stands cannot be accepted.

Associate Editor Comments to Author (Dr John Dalton):
Your manuscript was deemed original and interesting by our reviewers. However, one reviewer raised substantial concerns and suggested further interpretation, attention to other studies in the field and, indeed, some further experiments to relate the studies to the pathological mechanisms underlying KID syndrome.

Reviewers' Comments to Author:

Reviewer: 1
Comments to the Author(s)

Brief overview of manuscript
In the present manuscript the authors have studied several mutations in GJB2 encoding the gap junction protein connexin 26 that are associated with the inflammatory skin dysplasia Keratitis Ichthyosis Deafness syndrome (KID). They suggest that the presence of the mutation evokes an internal splice site within the mRNA that is not normally active and that this is linked with cell death. They further mutate 5' and 3' splice regions and identify a site that rescues the cytotoxicity and aberrant splicing. They also show that one of the mutations alters hemichannel sensitivity in response to CO2. The authors conclude that the aberrant splicing is due to the addition of mCherry as a marker; however, they do not in the discussion consider wider implications. The work is well presented and aspects of the results clearly explained. However, I have several major concerns about the scope of the manuscript, the interpretation of the data in line with accumulated evidence in the field, and recent literature on induction of alternative splicing linked to cell stress responses. These concerns should be reviewed and indeed further experiments warranted to relate the studies to the pathological mechanisms underlying KID syndrome.

Specific comments.

Introduction
The introduction is lacking some important information on the differences between the mutations selected for study in terms of severity of disease. Cx26A88V is established to be associated with a lethal form of KID syndrome along with the mutation Cx26G45E. Other mutations such as N14K have been reported previously and are not discussed. A similar mutation in Cx30, A88V, has also recently been reported by Kelly et al (Journal of Cell Science 2019), where the lethality of this mutation is also clearly presented. Thus some further introduction to the current understanding of the pathological nature of the KID mutations and the A88V locus should be included.

Methods
All the constructs used are fused in frame to mCherry in the expression vector, and others are in untagged vectors. HeLa Ohio cells, which do not endogenously express connexins, were used in all the studies; thus the system is essentially assessing homomeric connexin channel formation. Transfections: cells are transiently transfected with the constructs and expression analysis assessed 48-72 hours post transfection, at which time there is considerable overexpression of the introduced plasmids and build-up of protein in intracellular stores.
Cell death was monitored only by trypan blue exclusion assays, while more physiological assays such as an MTT assay, WST-1 assay or apoptosis assay would seem more appropriate.

Statistical analysis is appropriate.

Results/Discussion
1. The time point at which the assays are performed following transient transfection is quite late. In the controls (WT and the A88S nonsyndromic mutation) there is a clear accumulation of protein in intracellular stores by 48 hrs. Was a time course on mRNA extraction performed? Although A88V is not highly expressed, the spatial localisation of N14K and A40V in these cell lines is not reported. Indeed previous studies have clearly shown that N14K is trapped intracellularly in the ER 24-48 hours post transfection in HeLa cells. Further assays clarifying if the cell death phenotype/ER accumulation is associated with the alternate splicing are required.

2. Figure 2 indicates the identified internal splice sites that are activated. However, the selection of the M151L mutation that rectifies this should be indicated on this figure for clarity. The discussion should consider the implications of this on the overall folding and structure of the protein.

3. The authors clearly identify that the introduction of the M151L mutation, thereby removing the 5' splice site, can rectify splicing. However, there is little comment on the spatial localisation of the proteins. Indeed it looks as if the A88V protein still has trafficking issues, and higher quality images with co-staining with cellular markers (e.g. ER markers) would help clarify this. This could have important implications that are not discussed.

4. The cell death assays clearly show that the removal of the splice sites restores cell viability of A88V. This is not the first time an A88V mutation in a beta connexin has been reported to cause cell death in HeLa cells. A similar mutation in Cx30, associated with Clouston syndrome, was extensively studied in HeLa cells, a cochlear cell line and a mouse model. In these studies, and others, similar effects on cell death were observed with both GFP-tagged and untagged proteins. Further, in the Kelly study, mRNA levels of Cx30 in the mutant mice were significantly reduced, and they suggest that the mutation results in loss of stability of the mRNA. Indeed there are extensive studies by this group and others that are not given recognition in this manuscript for contributions to the molecular mechanisms underlying a range of connexin channelopathies.

5. The CO2 and hemichannel assay for A40V is tagged onto the end when most of the work has focused on the A88V mutations. It would seem more relevant to focus on A88V at this point.

6. Furthermore, there is increasing literature to suggest that a number of mutations in beta-connexins result in altered association with connexin43, with which the beta connexins do not normally oligomerise. It would therefore be worthwhile to take the A88V mutation and study the splicing effect in the presence of other connexins, providing a more integrated approach to the complex nature of connexin tissue expression profiles.

7. Finally, there is accumulating evidence that many of the connexin mutations induce ER stress pathways or mitotic crisis points; recent reports in the literature also suggest that such events can evoke alternative splicing pathways (e.g. Carew et al., IJMS 2019). Therefore further studies should systematically rule out whether the alternative splicing is truly caused by the addition of a tag or is a true reflection of the disease pathway.
Conclusion
While the manuscript has identified a novel pathway evoked in HeLa cells expressing several KID mutations, the study is premature and requires consolidation to consider the merits of the pathway in the context of current literature.

Reviewer: 2
Comments to the Author(s)
The authors present here an interesting article that details a previously unknown phenomenon of cryptic splice site activation in disease-causing Cx26 mutants in vitro, which can also act in trans to cause the splicing of co-expressed wild-type Cx26. However, this feature is only evident in cells expressing tagged Cx26 and therefore does not occur in vivo, or in cells expressing untagged Cx26/mutants. The likely pathological mechanism in KID syndrome is related to the reduction of hemichannel CO2 sensitivity, which the authors have found across several KID mutants, including now the A40V mutant in this study. The manuscript is well written, and the authors have revealed a detrimental mechanism of tagged Cx26 mutants in vitro, which should be taken into consideration for future research in the connexin field. I only have the following minor corrections and considerations for the authors to address:
1) Insert the word "is" before "unavoidable" on line 29 of the abstract.
2) What's the scale of the Y axis in Fig. 4 - is it percentage? If so, it should be written as such in the axis label.
3) Splice is spelled as "spice" on page 11, line 7.
4) Is the untagged Cx26A88V mutant toxic to cells alone? It would also be interesting to see additional data of the untagged mutant in Figure 4.
5) It's interesting that untagged input and reporter Cx26/mutants do not get spliced, whereas tagged versions do. Therefore, I think it would be useful to see the untagged Cx26A88V and untagged Cx26WT data in Fig. 6 rather than saying "(data not shown)".
6) Although untagged Cx26 mutants lack the ability to undergo aberrant splicing (i.e. it only occurs with tagged proteins in vitro), is there any evidence that the mutations could have an effect on other connexins? We know that Cx30 is often expressed in the same cell types as Cx26, and it's interesting that some mutations in Cx26 are mirrored in Cx30 in terms of their location/substitution (such as A88V) and phenotype (skin disease and/or hearing loss). A recent article by Kelly et al. (J. Cell Sci., 2019) found that the untagged Cx30 A88V mutant killed cells in vitro, but not in vivo, and reduced both mRNA and protein expression of Cx30 in the cochleae of Cx30A88V mutant mice. Have the authors looked at this mechanism with other connexin subtypes? Or could the authors comment on the possible trans effects across connexin subtypes?

Author's Response to Decision Letter for (RSOS-190188.R0)
See Appendix A.

08-Jul-2019

Dear Professor Dale,

I am pleased to inform you that your manuscript entitled "Cx26 KID syndrome mutations trigger alternative splicing of Cx26 to prevent expression and cause toxicity in vitro" is now accepted for publication in Royal Society Open Science.

You can expect to receive a proof of your article in the near future. Please contact the editorial office (openscience_proofs@royalsociety.org and openscience@royalsociety.org) to let us know if you are likely to be away from e-mail contact. Due to rapid publication and an extremely tight schedule, if comments are not received, your paper may experience a delay in publication. Royal Society Open Science operates under a continuous publication model (http://bit.ly/cpFAQ).
Your article will be published straight into the next open issue and this will be the final version of the paper. As such, it can be cited immediately by other researchers. As the issue version of your paper will be the only version to be published, I would advise you to check your proofs thoroughly, as changes cannot be made once the paper is published.

You have the opportunity to archive your accepted, unbranded manuscript, but access to the full text must be embargoed until publication. Articles are normally press released. For this to be effective we set an embargo on news coverage corresponding to the publication date of the article. We request that news media and the authors do not publish stories ahead of this embargo (when the final version of the article is available).

Appendix A

The reviewers and AE agree that this is a potentially interesting manuscript but have raised a number of major flaws which need to be addressed. These could potentially be addressed in a resubmission, but the manuscript as it stands cannot be accepted.

Associate Editor Comments to Author (Dr John Dalton):
Your manuscript was deemed original and interesting by our reviewers. However, one reviewer raised substantial concerns and suggested further interpretation, attention to other studies in the field and, indeed, some further experiments to relate the studies to the pathological mechanisms underlying KID syndrome.

We agree that adding further reference to other studies in the field, and improving the Discussion, strengthens the paper. Many of the comments from Reviewer 1 point to a desire to answer questions that this paper was not designed to investigate and are not relevant to the novel phenomenon that we have discovered. We think it worth being clear up front that, while we have responded as much as we can to the Reviewers' comments and have added some new data, we are not willing to perform wholesale new experiments on trafficking or pathology in response to Reviewer 1, for the reasons given below.

Reviewer: 1
Comments to the Author(s)

Brief overview of manuscript
In the present manuscript the authors have studied several mutations in GJB2 encoding the gap junction protein connexin 26 that are associated with the inflammatory skin dysplasia Keratitis Ichthyosis Deafness syndrome (KID). They suggest that the presence of the mutation evokes an internal splice site within the mRNA that is not normally active and that this is linked with cell death. They further mutate 5' and 3' splice regions and identify a site that rescues the cytotoxicity and aberrant splicing. They also show that one of the mutations alters hemichannel sensitivity in response to CO2. The authors conclude that the aberrant splicing is due to the addition of mCherry as a marker; however, they do not in the discussion consider wider implications. The work is well presented and aspects of the results clearly explained. However, I have several major concerns about the scope of the manuscript, the interpretation of the data in line with accumulated evidence in the field, and recent literature on induction of alternative splicing linked to cell stress responses. These concerns should be reviewed and indeed further experiments warranted to relate the studies to the pathological mechanisms underlying KID syndrome.

Specific comments.

Introduction
The introduction is lacking some important information on the differences between the mutations selected for study in terms of severity of disease.
Cx26A88V is established to be associated with a lethal form of KID syndrome along with the mutation Cx26G45E. Other mutations such as N14K have been reported previously and are not discussed. A similar mutation in Cx30, A88V, has also recently been reported by Kelly et al (Journal of Cell Science 2019), where the lethality of this mutation is also clearly presented. Thus some further introduction to the current understanding of the pathological nature of the KID mutations and the A88V locus should be included.

We have added greater explanation of KID syndrome mutations and references to the introduction (pp 3).

Methods
All the constructs used are fused in frame to mCherry in the expression vector, and others are in untagged vectors. HeLa Ohio cells, which do not endogenously express connexins, were used in all the studies; thus the system is essentially assessing homomeric connexin channel formation. Transfections: cells are transiently transfected with the constructs and expression analysis assessed 48-72 hours post transfection, at which time there is considerable overexpression of the introduced plasmids and build-up of protein in intracellular stores.
M151 is an outward-facing residue of the third transmembrane domain that interacts with the membrane lipids, making it unlikely that there is a substantial change in channel conformation. WT Cx26 with the conservative M151L mutation is still CO2 sensitive, is still permeable to the fluorescent dye and can still be blocked by extracellular Ca2+. By these measures the channel still retains the major functions necessary for its role as a CO2-sensitive hemichannel.

3. The authors clearly identify that the introduction of the M151L mutation, thereby removing the 5' splice site, can rectify splicing. However, there is little comment on the spatial localisation of the proteins. Indeed, it looks as if the A88V protein still has trafficking issues, and higher quality images with co-staining with cellular markers (e.g. ER markers) would help clarify this. This could have important implications that are not discussed.

This paper is about the splicing, not about the trafficking. The M151L mutation rescues expression, minimizes cell death and allows functional hemichannels to enter the plasma membrane. This is true also for A88V, which exhibits CO2-insensitive hemichannels, an observation that we did not include in this paper as we had already documented this previously.

4. The cell death assays clearly show that the removal of the splice sites restores cell viability of A88V. This is not the first A88V mutation in a beta connexin that has been reported to cause cell death in HeLa cells. A similar mutation in Cx30, associated with Clouston syndrome, was extensively studied in HeLa cells, a cochlear cell line and a mouse model. In these studies, and others, similar effects of cell death were observed with both GFP-tagged and untagged proteins. Further, in the Kelly study, mRNA levels of Cx30 in the mutant mice were significantly reduced, and they suggest that the mutation results in loss of stability of the mRNA. Indeed, there are extensive studies by this group and others that are not given recognition in this manuscript for contributions to the molecular mechanisms underlying a range of connexin channelopathies.

This is a different connexin with a very different C-terminal tail. The relevance of this work is not immediately obvious to our work on Cx26, but we have made reference to this with brief discussion on pp 8. As mentioned earlier, we have included a more comprehensive section in the introduction on mutations and channelopathies relevant to Cx26 (pp 3).

5. The CO2 and hemichannel assay for A40V is tagged onto the end when most of the work has focused on the A88V mutations. It would seem more relevant to focus on A88V at this point.

We have already published the CO2 insensitivity of A88V; we chose A40V to extend knowledge of the effects of KID syndrome mutations.

6. Furthermore, there is increasing literature to suggest that a number of mutations in beta-connexins result in altered association with connexin43, with which the beta connexins do not normally oligomerise. It would therefore be worthy to take the A88V mutation and study the splicing effect in the presence of other connexins, providing a more integrated approach to the complex nature of connexin tissue expression profiles.

This is beyond the scope of what we are reporting in this paper. We also point out that the alternatively spliced product has an altered membrane topology (C-terminus now extracellular) so is very likely non-functional (pp 11, "The nature of the toxic moiety and the trans dominance of the alternative splicing").
7. Finally, there is accumulating evidence that many of the connexin mutations induce ER stress pathways or mitotic crisis points; recent reports in the literature also suggest that such events can evoke alternative splicing pathways (e.g. Carew et al IJMS 2019). Therefore, further studies should systematically rule out whether the alternative splicing is truly caused by the addition of a tag or whether it is a true reflection of the disease pathway.

We think there is little doubt that the alternate splicing is caused by the addition of the tag. Our additional data in Figure 6 make this very clear (see Fig 6C and D - where there is no tag there is no splicing). However, it is a very interesting question as to whether this could occur in the human disease pathway. It is entirely possible that the non-coding sequences of the mRNA (missing from our expression constructs) could cause the splicing to occur for the KID syndrome mutations. We discuss this in the paper. The best and most direct way to answer this question would be to extract mRNA from human KID syndrome patients. We think this is an important issue to be settled by others with access to relevant fresh tissue samples, and our paper may provide sufficient motivation for them to do this.

Conclusion
While the manuscript has identified a novel pathway evoked in HeLa cells expressing several KID mutations, the study is premature and requires consolidation to consider the merits of the pathway in context with current literature.

We disagree that our study is premature. We have been very open about the limits of our study, and whether this is pathophysiologically relevant. We think this is an interesting cellular phenomenon that we discovered by chance. It is worth reporting, as many investigators in the field still tag their mutant connexins with a fluorescent protein. Our paper will assist them to design their experimental approaches to achieve better cellular expression and avoid a potentially artefactual confounding factor.

Reviewer: 2
Comments to the Author(s)
The authors present here an interesting article that details a previously unknown phenomenon of cryptic splice site activation in disease-causing Cx26 mutants in vitro, which can also act in trans to cause the splicing of co-expressed wild-type Cx26. However, this feature is only evident in cells expressing tagged Cx26 and therefore does not occur in vivo, or in cells expressing untagged Cx26/mutants. The likely pathological mechanism in KID syndrome is related to the reduction of hemichannel CO2 sensitivity which the authors have found across several KID mutants, including now the A40V mutant in this study. The manuscript is well written, and the authors have revealed a detrimental mechanism of tagged Cx26 mutants in vitro, which should be taken into consideration for future research in the connexin field. I only have the following minor corrections and considerations for the authors to address:

1) Insert the word "is" before "unavoidable" on line 29 of the abstract.

This has been corrected.

2) What's the scale of the Y axis in Fig. 4 - is it percentage? If so, it should be written as such in the axis label.

This is now clarified in the axis label - it is fold increase over mock-transfected control. This has been corrected.

4) Is the untagged Cx26A88V mutant toxic to cells alone? It would also be interesting to see additional data of the untagged mutant in Figure 4.

We have added these new data to Figure 4.
5) It's interesting that untagged input and reporter Cx26/mutants do not get spliced, whereas tagged versions do. Therefore, I think it would be useful to see the untagged Cx26A88V and untagged Cx26WT data in Fig. 6 rather than saying "(data not shown)".

We now show this result in a new version of Figure 6.

6) Although untagged Cx26 mutants lack the ability to undergo aberrant splicing (i.e. it only occurs with tagged proteins in vitro), is there any evidence that the mutations could have an effect on other connexins? We know that Cx30 is often expressed in the same cell types as Cx26, and it's interesting that some mutations in Cx26 are mirrored in Cx30 in terms of their location/substitution (such as A88V) and phenotype (skin disease and/or hearing loss). A recent article by Kelly et al. (J. Cell Sci., 2019) found that the untagged Cx30 A88V mutant killed cells in vitro, but not in vivo, and reduced both mRNA and protein expression of Cx30 in the cochleae of Cx30A88V mutant mice. Have the authors looked at this mechanism with other connexin subtypes? Or could the authors comment on the possible trans effects across connexin subtypes?

We have not looked at other connexin subtypes. Given this appears to be a mechanism that depends very specifically on the mRNA sequence (A88V but not A88S triggers this), it may be unlikely that different genes with different nucleotide sequences would display this, but of course it might still occur. We have added some discussion of this on pp 12.
HIV drug resistance mutations following poor adherence in HIV-infected patient: a case report

Key Clinical Message
Acquired HIV drug resistance following poor adherence is common. We report a case of an HIV-infected patient with poor CD4 gain and self-reported poor adherence. Investigations revealed a high viral load and resistance to NRTIs and NNRTIs with sensitivity to boosted PIs. HIVDR mutations create treatment challenges in resource-limited settings.

Introduction
The World Health Organization (WHO), together with the Joint United Nations Programme on AIDS (UNAIDS) and other partners, has a stated goal of providing worldwide access to antiretroviral therapy (ART) [1]. Data worldwide show that there is increasing accessibility to HIV treatment. In 2011, approximately 8 million people in low- and middle-income countries were receiving HIV treatment, compared to only 400,000 in 2003 [2]. In Eastern and Southern Africa, it is estimated that up to 56% of those eligible for HIV treatment had access to therapy, while only 10% of patients were treated in 2009. This scale-up of HIV treatment in low- and middle-income countries has been associated with a significant decline in morbidity and mortality, [3] as well as in maternal-to-child transmission. Unfortunately, the expanding access to HIV treatment goes hand in hand with the emergence of HIV drug resistance (HIVDR), with a reported 20% risk of mutations from at least two of the three main drug classes within 6 years [4].

In Tanzania, a nationwide care and treatment program has been established and implemented for HIV-infected individuals since 2004. Despite this fact, these clinics face a number of challenges, including limited resources and a low number of available antiretroviral drugs. HIV infection is treated with standardized first- and second-line ART regimens, and no medications are proposed or available for use as third-line treatment in Care and Treatment Clinics (CTC) [5].

There are a number of mechanisms by which HIVDR occurs in treatment-experienced patients. Selection pressures in patients using ART can occur due to sub-optimal dosage, inadequate use of regimens, poor adherence to medication or other pharmacodynamic factors. Genotypic resistance testing (GRT) is, therefore, an important component of treatment and constitutes the standard of care in developed countries. In resource-limited countries, however, GRT is rarely available. In resource-limited settings, the practice is to switch patients who do not respond to therapy, based on either clinical findings or immunological criteria, from a non-nucleoside reverse transcriptase inhibitor (NNRTI) to protease inhibitor (PI) based regimens. In most cases, however, treatment failure is detected late and there is significant drug resistance, leading to limited efficacy of second-line drugs [6].

There are several reports on acquired HIVDR in Africa, with most cases being associated with poor adherence. This has been shown to limit future treatment options. We report a case of a 54-year-old HIV-infected patient who self-reported poor adherence to his ARVs, presenting with poor immune recovery and features suggestive of tuberculosis. Investigations revealed a viral load of 89,752 copies/mL and resistance to Abacavir, Didanosine, Zidovudine, Tenofovir, Stavudine, Emtricitabine, Lamivudine, Nevirapine, and Efavirenz, with sensitivity to boosted protease inhibitors.
Case Report
A 54-year-old male patient from Mwanza, Tanzania, enrolled at Hindu Hospital CTC in 2010, was referred to Bugando Medical Centre CTC in 2012 for management of poor CD4 gain. When he was received at Bugando CTC, his CD4 count was 7 cells/µL and he had been taking ARV medication for 2 years. The patient was started on Atripla in 2010 and a month later was switched to Combivir/Efavirenz because Atripla was out of stock. However, one year later he was switched back to Atripla because of anemia.

The chief complaints at his current visit were productive cough, pleuritic chest pain, and low-grade evening fevers for 3 weeks prior to admission. The patient also reported a history of significant night sweats and weight loss over the same time period. He had used several over-the-counter medications without relief. A review of systems was significant for loss of appetite. His past medical history was not significant prior to the diagnosis of HIV. On scrutinizing his medication use, he admitted poor adherence to ART. He divorced his first wife in 1994 and has been remarried to his second wife since 1996. His second wife tested positive for HIV in 2012 and she is also on ART. The patient had a history of alcohol abuse, but has abstained since 2003.

The general examination was normal. He had stable vital signs, with a blood pressure of 130/90 mmHg, pulse rate of 80 beats/min, respiratory rate of 20 breaths/min and body temperature of 36.7°C. Systemic examination revealed no abnormal findings. Investigations were ordered accordingly and the results were as follows: sputum for acid-fast bacilli (AFB) was negative twice, chest x-ray (CXR) revealed perihilar opacities, and the CD4 cell count was 78 cells/µL. The viral load, performed at a privately owned laboratory (Lancet laboratory), was found to be 89,752 copies/mL. Genotypic resistance and drug sensitivity tests were also performed at the Lancet laboratory.

Discussion
In treatment-experienced HIV patients with treatment failure, HIVDR has been found to be very common [7,8] and is associated with poor adherence [7] and, in some cases, the duration of treatment [8]. Failure to predetermine the existence of primary resistance predisposes patients to treatment failure. Without viral load monitoring, treatment failure will often be detected late, risking further accumulation of resistance mutations and limiting future treatment options. Our patient was not tested at baseline for primary HIVDR because genotypic resistance testing is not routinely done. Viral load monitoring is not readily available either and, as a result, treatment failure was diagnosed late. In most cases, treatment failure in resource-limited settings is reached through clinico-immunological criteria, which usually leads to unnecessary switching of drugs or late detection of treatment failure [6,9].

Nonadherence to ART in adults ranges from 33 to 88%, depending on the definition of adherence used [10]. Evidence shows that high levels of adherence are required for prevention of HIVDR [10]. Fear of disclosure and the resulting stigma, forgetfulness, a lack of understanding of treatment benefits, and complicated regimens have all been found to be barriers to adherence [11]. Adherence can be improved by addressing these barriers through discussions with patients and providing education regarding the treatment benefits to health outcomes [12]. Addressing stigma is a very important intervention in improving adherence levels.
As described in the case report above, our patient has high resistance to all of the NRTIs and NNRTIs available for use in HIV treatment in Tanzania and susceptibility only to the PIs. None of the proposed drugs for third-line use are available in Tanzania [5]. The resistance pattern seen in this patient poses a great challenge in the choice of medication, especially bearing in mind the low availability of different types of antiretroviral drugs. Our patient was kept on Zidovudine, Lamivudine, and boosted Atazanavir, with the only susceptible drug being boosted Atazanavir.

In developed countries, GRT plays an important role in the care of HIV-infected patients. However, in resource-limited settings this cannot be routinely done. Failure to pretest primary HIVDR, coupled with the lack of viral load monitoring, increases the chances of treatment failure, late detection of treatment failure and accumulation of resistance mutations. Our case outlines the challenges that care providers encounter in HIV treatment in resource-limited settings, especially in patients failing first-line treatment. Given the limited availability of medications and the inability to detect treatment failure early, it is difficult to switch medications in a timely manner and prevent accumulation of resistance mutations. It also demonstrates the importance of early detection of HIVDR and of minimizing acquired resistance by supporting measures to improve adherence to treatment, because development of resistance greatly limits treatment options.

Conclusion
The report conclusively outlines how the development of HIVDR limits treatment options in resource-limited settings, where there is already limited availability of antiretroviral drug options, and the importance of viral load monitoring for early detection of treatment failure and prevention of accumulation of resistance mutations.
Uses of Macro Social Theory: A Social Housing Case Study

This article reflects on the use of macro social theoretical perspectives to explain micro social issues, using social housing allocations as a case study. In contrast to a number of social theoretical examinations of social housing allocation schemes in recent years, spanning socio-legal studies, we argue that 'cookie-cutter' theories may overlook other positions and counter-factual scenarios. We draw on a sample of local authority allocation schemes to reflect on the growing category of households (commonly termed 'unhouseables' by housing officers) which are excluded from appearing on such schemes because of their former housing deviance or some other disqualification. We offer a set of reflections grounded in our data, which focus on sustainability. Thus, rather than point to particular rationalities or the like, we offer particular housing issues as explanatory factors - including the declining stock and financial 'competitiveness' of social housing management - as well as a rise in punitiveness.

INTRODUCTION
Public services in England and Wales are in a state of flux. On 21 July 2015, the Chancellor of the Exchequer, George Osborne, launched a spending review calling for further savings amounting to £20bn to be found from Whitehall budgets. Every unprotected government department is to begin modelling how it might cut budgets by 25 and 40 per cent by 2020. Those savings are in addition to the £12bn of welfare savings already unveiled. Such cuts are likely to involve a reinvention of the very manner in which public services are delivered and enjoyed. On any assessment, public services are, across the board, entering yet another period of significant change and upheaval.

In this article we focus on housing, more specifically the allocation of 'social' housing, but in truth we could have selected other public services, from highways to prisons. Housing in particular has become a political hot topic, with each political party in the run-up to the general election in May 2015 pledging to outdo their rivals as to the number of houses they would build if elected, and with the Conservatives guaranteeing housing association tenants the right to buy their homes. It is now widely acknowledged, quite apart from party politicking, that there is an acute housing shortage, both private and social, and that previous measures designed to redress this shortage have failed.

Whilst the crisis within the housing sector is now highly topical, challenges to the very nature of public services and their functions are anything but new. Indeed, almost every change in government leads to apparently stark claims about the need for fundamental changes to the purpose of public services, which have extended beyond a simple question of who is to provide them. 1 Specifically in the field of social housing, this has been shown to be too simplistic. 2 The precise purpose or purposes behind social housing have remained largely undefined and unclear since its inception. As Power pithily observed, 'Councils became landlords without commitment, plan or forethought.' 3

The fevered context within which social housing operates and the contemporary public services 'cuts agenda' provide us with the impetus to consider not only how social housing is allocated by local authorities in this testing environment but also how social theory might be employed to explain this process. As to such theorising, quite grand theoretical claims have been made.
Thus, the Conservative Party's time in office in the 1980s has been described as entailing a shift in the provision of social housing from allocation on the basis of 'housing need' to being awarded according to who 'deserved' it. That shift was underscored by the government of John Major, with its ill-fated and ill-conceived 'back to basics' philosophy, embodying a nostalgic appeal to so-called 'traditional values'. 4 By way of illustration, there was an explicit bias towards favouring married couples over cohabitants. 5

The allocation of social housing in the New Labour era from 1997 has been defined as founded on 'advanced liberalism', a link between a mentality of government and ethical self-regulation. 6 The introduction of greater 'choice' into the system as a means of empowering communities during this time was portrayed as a solution to the principal housing problems. The Coalition Government's interventions have been identified with a number of new problematics, including: whether the purpose of social housing was to be a welfare safety net, an ambulance service; 7 the site of further 'class war conservatism'; 8 or the culmination of processes of the production of 'welfare ghettos'. 9 The Coalition focused its attention on the need for social housing to be governed from within, by and for the local; 10 allocations were interpreted as 'fundamentally a local' issue. 11 Thus, the purpose of social housing was to respond to local need, and the legislative prescription was, not surprisingly, also therefore local, encapsulated in the Localism Act 2011. 12 This emphasis on localism and decentralisation continues under the current Conservative government.

The central argument in this paper is that simple binaries, monochrome applications of social theories and perspectives, or even different strands of localism (with its lengthy and contested histories) cannot explain social housing allocation processes. In support of this argument, we advance three key findings. First, we draw on our own empirical evidence of a relatively neglected backwater to illustrate this point: those households which are excluded from social housing waiting lists or which, in the legislative vernacular, are 'non-qualifying' households. From our survey of 50 local authority housing allocation schemes, we identify that each authority applies similarly phrased but subtly distinct allocation qualification criteria. Secondly, we build on the identification of this patchwork of qualifying criteria to reject what we call 'cookie-cutter' theoretical analysis. This leads us to call for the development of data-led social theories to explain social housing problems and processes. Our argument is that although there is a temptation to adopt 'a kind of cookie-cutter typification or explanation, a tendency to identify any program with neo-liberal elements as essentially neoliberal', 13 the strategies and techniques of government are far more complex and contradictory. No one size fits all, but, to continue the metaphor, there are different shapes of cookie-cutter apparently in action simultaneously, each with their own explanatory logics. 14 As Larner observes,

Our point is that part of the problem with the 'cookie cutter' approach is that it uses macro social theory to explain micro phenomena.
The theoretical positions we review are best used on a very broad canvas of both time and space - in other words, they are best suited to describing large social trends; and it is always possible to dispute their use by drawing on micro data which do not 'fit'. We are not seeking to undermine or upset larger social theory, but to argue for its more appropriate use. Where it is used in relation to micro social processes, it can be used in an inappropriate way: first, to explain rather than describe, and, second, as an all-purpose starting point.

Our final and third finding, again from our data, is that the identified patchwork of housing qualification criteria carries with it the potential for the expansion of a category of households which are, in social housing parlance, 'the unhouseables'. 16 This is not a category where housing deviance (a term we explain below) is to be neutralised through expulsion; nor are they households which, although now excluded, may be re-affiliated at a later time once they have demonstrated their self-governing capacities. 17 They are, quite simply, banished by a 'savage sorting' through an 'ultimately elementary extraction', 18 and one that might be said to have its basis in the techniques of social science. 19 As we explore, perhaps the most pertinent explanation is not the relatively broad-based grand theory, but one of transparent politics against a backdrop of coerced collaboration, in which the more risky households are quite simply weeded out with no possibility (or, at least, no immediate possibility) of redemption. These are not conditionality strategies (which are well-documented aspects of social welfare) 20 through which the poor and supposedly indigent are exhorted to self-improvement, 21 but the punitive, sovereign strategy of having the door slammed in their faces. As we seek to explain, by whatever means one describes such strategies - as contradictory, as authoritarian liberalism, or as naked and punitive sovereignty - they are neither politically nor spatially differentiated.

We begin this paper, however, by outlining the rather tortuous legislative and policy history surrounding social housing and, more specifically, by detailing the basis upon which households are excluded from such housing and how 'non-qualifying' status is determined.

16 This is a phrase which appears time and again across various empirical projects in which one of us has been involved. It refers to a category of household for which no public or private agency wants to provide accommodation. They are not ready fodder for the private sector, as some of the more antagonistic class theorists might suggest.

The developing law: Rationales
From the outset, exclusions were part and parcel of the social housing allocations process. 22 Such exclusions were either implicit - the rent that was set was too high for many 23 - or explicit. 24 Early studies, for example, demonstrated how applicants were 'graded' according to preconceived notions of deservingness, often occurring after the 'housing visitor' (often a middle-aged, middle-class female local government employee) came to discuss rehousing with the applicant(s). 25 In 1993, 92 per cent of local authorities excluded certain households from their waiting lists. 26 So, although it can properly be argued that the production of social housing was a means to manage the poor, it is also clear that only certain poor households were included in this process of what has been described as 'moral cleansing'. 27
The Housing Act 1996, Part 6, formalised the social housing allocations process. Previously, legislation contained only vague reference to prioritisation through reasonable preference. 28 The 1996 Act, for the first time, set out a requirement on local authorities to have a housing register and a published allocation scheme. 29 Local authorities were given the power to 'decide what classes of persons are, or are not, qualifying persons' and therefore to decide who was entitled to appear on the register (subject to statutory and quasi-statutory inclusions and exclusions). 30 Certain persons from abroad were also made non-qualifying persons. 31 The accompanying Code of Guidance, to which local authorities were required to refer, provided examples of the categories of households central government believed could be 'non-qualifying':

people with a history of anti-social behaviour, people who have attacked housing department staff, or tenants with a record of rent arrears. Authorities could impose other qualifications, such as those related to residency in the authority's district or ownership of a property, although they may wish to consider the implications of excluding all members of such groups, eg, elderly owner-occupiers. 32

22 'The Ministry of Health provided guidance in 1920 which suggested "the careful selection of tenants" and "the elimination of unsatisfactory tenants", but did not offer any criteria under which such assessments could be made': D. Cowan

It was said 33 that the most common exclusions under the Housing Act 1996, as originally enacted, were of those with rent arrears, those without a 'local connection' 34 with the authority, those guilty of anti-social behaviour, under-18s, 35 and owner-occupiers. Despite a commitment to remove the qualifying persons provision in their 1997 election manifesto, it was not until the Homelessness Act 2002 that New Labour reversed it. The exclusion of households from abroad remained, but local authorities were prevented from making other households non-qualifying. There were good reasons for doing so, in line with the choice-based lettings (a market-based model of social housing allocation, requiring households to be active in their search by bidding for the property they wanted) 36 and equalities agendas, as well as an ambition that lists would reflect housing need in the area. Further, it was in line with recommendations made by the then disbanded Central Housing Advisory Committee. 37 However, this was not a completely altruistic policy, because it was also clear that the previous mantra of 'allocation according to need' was also breaking down, as the 2000 Green Paper made clear.

The ability to exclude households from the list was replaced with a discretionary power to alter a household's priority as a result of behaviour which 'affects his suitability to be a tenant', and of financial resources. 39 After the 2009 local elections, when the British National Party made some gains, albeit very limited, guidance was issued which suggested that local connection could be taken into account as a 'policy priority' in allocations decisions. 40

The Coalition Government returned the original power to exclude to local authorities in the Localism Act 2011. The rationale was that 'open waiting lists' had encouraged households to put their names down on the list where they had no real need for housing, and had created 'false expectations' in areas where demand outstripped supply. 41
The guidance offered to local authorities about the new exclusion provision was rather more nuanced than that offered in 1996. It recognised that authorities had to balance their new power against their equalities obligations and the requirement to give reasonable preference to certain categories of household. There was also a more nuanced framing of exclusions and a suggestion that there should be an allowance for 'exceptional circumstances'; and that they 'should avoid setting criteria which disqualify groups of people whose members are likely to be accorded reasonable preference for social housing'. 42 Alongside this guidance, though, was a requirement by statutory instrument that local authorities could not apply local connection rules to certain members of the armed forces. 43 Subsequent guidance reinforced the government's view that local authorities 'should prioritise applicants who can demonstrate a close association with the local area'; 44 indeed, rather more strongly than had appeared before in statutory guidance: 45

The Secretary of State believes that including a residency requirement is appropriate and strongly encourages all housing authorities to adopt such an approach. The Secretary of State believes that a reasonable period of residency would be at least two years. 46

This reflects an odd contradiction at the heart of social housing policy - the Coalition Government (as those before it) emphasised the need for mobility, 47 while at the same time prioritising local connection. There is a clear tension in localism, however, between a coherent national scheme and local schemes. 48 As Maclennan and O'Sullivan put it, a commitment to localism in housing policy does not mean the abandonment of coherent housing policies and governance mechanisms at national, regional and metropolitan scales, and a simple localisation of housing policy can dangerously hamper the development of effective, efficient and flexible housing systems. 49 So, a tension exists between local exclusions, regional allocation schemes, and national housing policy. If national housing policy is so vulnerable that households should be prioritised, 50 then an exclusion makes no sense. Similarly, if a household is excluded by one local authority, a rational householder might be expected to seek out an alternative local authority not subject to such an exclusion. Thus, the local pressure is likely to be to maintain qualification criteria at least as strict as, if not stricter than, one's neighbour's. There is the potential for a race to the bottom, although, as we shall see, that may not have occurred in practice.

The relevant provisions of the Localism Act 2011 were brought into force in January 2012. By that stage, local authority waiting lists had grown annually. By 1 April 2012, there were 1,851,426 households on waiting lists in England. The most recently published statistics show that, as at 1 April 2014, there has been a 26 per cent decrease in that number (there were 1,368,312 households on waiting lists in England). The bottom line is that nearly 500,000 households have apparently been wiped off waiting lists.
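A quick check of the arithmetic behind those headline figures, using only the two totals just cited:

$$
1{,}851{,}426 - 1{,}368{,}312 = 483{,}114 \approx 500{,}000,
\qquad
\frac{483{,}114}{1{,}851{,}426} \approx 0.26 \;\text{(26 per cent)}.
$$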
It has been recognised by government that local authorities' 'freedom to manage their own waiting list' is 'probably partly responsible for this decrease'. 52 In that same year, 2013-2014, there was a slight increase in the number of social housing general needs lettings (approximately 293,000 lettings were made). 53 Internal transfers, which happen off register so to speak, as they are not now included on waiting lists, 54 made up 23 per cent of all such lettings. 55

48 , who argues that the Coalition Government's version of localism was presented as 'advantageous over "inflexible centrally-determined rules" but inflexible centrally determined rules can also be characterised as a national code of protective, justiciable rights'.
50 As appears to be the case by reference to the categories of household to which reasonable preference must accrue on allocation schemes: Housing Act 1996, s 166ZA.
51 Table 600.

The judicial approach
It is instructive to consider the interaction of the courts, local authorities and allocation schemes. As one commentator has observed, after a period in which social housing allocation schemes were subject to intense scrutiny, allocation schemes have been 'de-judicialised' as a result of the House of Lords decision in R (Ahmad) v Newham London Borough Council 56 (Ahmad). The court refused to interfere with Newham's broad-brush allocations scheme, which was based on date order (ie, priority within broad bands was determined by the date on which applicants applied for accommodation). They did so because the statutory scheme gave authorities broad powers, and because

it seems unlikely that the legislature can have intended that Judges should embark on the exercise of telling authorities how to decide on priorities as between applicants in need of rehousing, save in relatively rare and extreme circumstances. Housing allocation policy is a difficult exercise which requires not only social and political sensitivity and judgment, but also local expertise and knowledge. 57

However, the court has shown itself to be more willing since implementation of the Localism Act 2011 to intervene to strike down perceived unfairness in the design and operation of allocation schemes. One such scheme, that of Hammersmith and Fulham LBC, warrants closer attention. Under this scheme, homeless households in long-term accommodation were excluded from appearing on the list, treated as non-qualifying households. At a stroke, this meant that 87 per cent of households were excluded. Homeless households are, however, statutorily accorded 'reasonable preference' under the 1996 Act, as amended. 58 The reasoning behind the exclusion was said to be that

the Council is recognising an obvious but often neglected truth - i.e. that the housing needs of applicants owed the main housing duty are not uniform and that very often the quality of the accommodation provided in discharge of the main housing duty is such that the Council can reasonably conclude that an applicant's current housing needs are met. 59

outside the statutory scheme. 60 An allocation 'scheme' included not just those entitled to an allocation but also those excluded. This view was supported by the guidance (to which reference has been made above). Secondly, and perhaps less persuasively given the House of Lords' approach in Ahmad, the court held that the exclusion of such a large proportion of applicants effectively thwarted the statutory scheme, despite the flexibility allowed to an authority in the term 'reasonable preference'. Tomlinson LJ doubted whether 'either these proceedings have achieved any practical purpose or that the Claimant will derive any benefit from our decision', because it was observed that the council could simply adjust their allocation scheme so as to arrive at the same conclusion. 61
A consideration of the judicial treatment of local authority allocation schemes provides one of the contexts in which our survey of 50 allocation schemes must be analysed. It is to this survey that we now turn.

LOCAL EXCLUSIONS: METHODOLOGY
In 2014, we conducted an electronic survey of a selection of 50 local authorities' qualifying criteria, purposively sampled by reference to geographical area, political control, and whether rural/urban. The 'geographical area' element was selected on the basis of regional area, as used to be provided by the Department for Communities and Local Government in the presentation of their statistics until 2011. 62 'Political control' and 'rural/urban' have both been indicated in past research to be relevant factors in analysing waiting lists. As authorities are required to publish their schemes, the focus on the actual allocations schemes meant that they were relatively easy to access online. 63 We downloaded each of the published schemes and, for the purposes of this article, analysed their qualification criteria. As qualification criteria are more than 'a matter of mere detail' (for example, because affected households can request a review of a negative decision), 64 they are required to be included in the published allocations scheme. Thus, they appear on the face of the scheme. We coded them initially into four broad qualification criteria relating to (generalised) 'nuisance and anti-social behaviour', 'local connection', 'rent arrears', and 'other'. We used these categories because of the experience under the Housing Act 1996 as originally drafted, but, as we demonstrate below, the 'other' category encompasses a range of more specific exclusions, some of which are used by a large proportion of our sample. 60

It is important to note that local authorities were sampled and not other providers of social housing (such as other social landlords). The reason for that limitation was that local authority allocation schemes tend to be the gateway to all social housing in an area, either because they operate 'common housing registers' with other providers or because the authority has a large proportion of nomination rights over other social housing in their area. 65 One reason for this purposive sampling was that, although many local authorities were involved in regional schemes, including in London, they tend to maintain their own non-qualification categories. It is realistic to suppose that different authorities in different regions, and perhaps within the same region, would have different categories if the 'local' emphasis has purchase. It might also have been anticipated that a local connection emphasis would be more prevalent in rural areas, where there is a greater emphasis on maintaining locality in a limited housing stock. London, of course, is a different housing market altogether, as reflected in the mayor's powers in relation to social housing construction, the extremely limited social housing stock, the high cost of living (such that restrictions on benefits cause particular problems), and the large-scale export of homeless households in out-of-area placements. However, one of the most interesting findings of this project is that the various factors that were taken into account in our sampling did not appear to reflect exclusion policies.

There is a further and particular limit to our survey. We do not have the internal reports to the local authority committees which approved the new allocations schemes.
Local authority data suggest that 60 per cent of local authorities have altered their allocation schemes since the Localism Act 2011. 66 Each change would have been supported by an officer's report. Transparency applies only to the allocation scheme itself, which must be published. 67 Therefore, we do not have the local authority perspective on the changes. There is no particular proxy that can be used either - for example, each local authority is required to produce a homelessness strategy and a tenancy strategy (and, for London, its own housing strategy), 68 but these documents do not provide the basis for alterations.

LOCAL EXCLUSIONS: OUTCOMES
We analysed our sample of allocation schemes by reference to four particular categories, drawing on the experiences under the 1996 Act as first settled, the current guidance, and newspaper reports of exclusions as they were appearing. 69 These sources suggested the likely prevalence of three categories of non-qualifying households: those with rent arrears, those without a local connection, and nuisance/anti-social behaviour. An 'other' category was also maintained to account for other types of exclusion. The manner in which local authorities constructed their non-qualifying categories beyond these broad labels was also of interest to us - for example, the measure of rent arrears giving rise to exclusion from allocation schemes. If local connection was a category, how long does a household have to have an association with the area? And, if anti-social behaviour, what type and what proof was required?

General
Every local authority surveyed had some form of qualification criterion. Beyond that, the general results of our survey were that, as regards the specific, named exclusions:

- 44 authorities had qualification criteria for nuisance or anti-social behaviour, 3 authorities had no such exclusion, and for 3 others it went to prioritisation (ie, an applicant's prioritisation on the list could be reduced because of such behaviour).
- 27 local authorities had qualification criteria for local connection, 3 had no such exclusion, and for 20 others it went to prioritisation.
- 30 local authorities had qualification criteria for rent arrears, 8 did not, and for 12 others it went to prioritisation.

Where an authority was coded as not having a relevant named qualification criterion, that does not necessarily mean that there was no sanction. So, for example, West Berkshire DC did not 'exclude' an applicant for rent arrears but 'defers' their application. 70 Similarly, the London Borough of Enfield did not 'exclude' an applicant for rent arrears but 'suspends' the application. 71 Sheffield City Council's modus in respect of anti-social behaviour was to retain discretion either to 'suspend [a household's] registration' or 'otherwise restrict it'. Effectively, even where there was no qualification criterion, de-prioritisation and 'suspension' or 'deferment' may well amount to the same thing (particularly in high demand areas). 72

Much of this is not surprising. What is surprising, however, is the low number of local authorities which had an explicit qualification criterion for local connection. Given the emphasis on the 'local' in the Localism Act 2011, there was an assumption that this would be the most significant qualification criterion of qualifying status (and it had been a prominent exclusion under the 1996 Act as originally settled). It nevertheless turned out to be a far less prominent criterion in our sampled schemes than anticipated. Indeed, the same is true for past rent arrears. The significant number of local authorities with an explicit nuisance/anti-social behaviour qualification criterion can be explained by the increasing attention given to tackling anti-social behaviour by policymakers and its depiction as a social housing problem reaching back as far as the 1990s. 73

69 H. Spurr, 'Locked out' Inside Housing 2 May 2014.
70 Before the deferment will be lifted, applicants need to demonstrate that they have made and adhered to an agreed payment plan for a period of at least three months and/or that the arrears have been cleared or have been reduced below eight weeks' rent.
71 Applicants with arrears of more than eight times the weekly accommodation charge, or who have not maintained a payment agreement for six months, will have their application suspended until the arrears are repaid or a payment plan has been maintained for six months.
72 And it is notable that most (but not all) such formulations were in high demand areas.

As to the category of 'other' qualification criteria, Table 1 below indicates the range and frequency of particular criteria. In addition, households with high care needs, refusing an offer of accommodation, squatters, and poor tenancy reports were excluded by one local authority each. There are clear overlaps with nuisance/anti-social behaviour as regards some of these criteria, and others are either high profile housing issues (such as making a false or misleading statement to obtain a social housing tenancy), covered by strong guidance, 74 or are high profile housing allocation issues (such as the income and/or capital restriction). 75 Even the apparently more contentious qualification criteria, such as an unsatisfactory tenancy report (Durham CC), have a history in social housing allocation which would not overly surprise a housing academic or professional. 76

Specifics
It is to be noted that, as regards each of the named exclusions, the policies use similar, almost generic formulations (although the specifics are rather different, as discussed below). This is not surprising because, as Spicker noted, there were similarities in allocations policies in the 1980s because practitioners shared views with each other and drew on central government guidance; further, '[m]any housing departments face similar pressures, and they respond with similar policies as a result'. 77 Perhaps the most significant underlying reason for this similarity in phrasing and scheme formulation is the legislative and quasi-legislative framing of the qualification. So, for example, as regards nuisance/anti-social behaviour, the most common formulation was: 'An applicant (or a member of their household) found to have engaged in unacceptable behaviour serious enough for the landlord to pursue court action had they been a tenant' - or unacceptable behaviour that was serious enough for a court to order possession (n = 39) - a formulation of unacceptable behaviour that was previously found in section 160(8) of the Housing Act 1996, prior to the Localism Act amendments. 78 Similarly, the general definition of 'local connection' was tied to that provided in the homelessness legislation and the local authority agreed protocol. 79
Although less frequent, a common formulation for rent arrears was the equivalent of a Housing Act 1985 or 1988 ground for possession: 'rent lawfully due that has not been paid (current or former tenancy)'. 80

Once the analysis becomes more fine-grained, however, distinctions and considerable scope for discretion emerge. So, for example, on rent arrears, a common expression of the qualification criterion was 'significant rent arrears'. Some local authorities were explicit as to what this meant - the range was from two weeks' arrears (Northampton) to 14 weeks, the median being eight weeks (n = 4; including three local authorities in the South West). Where a precise figure was stated, the most common figure used was £1,000 (n = 5). However, 17 local authorities provided no definition of significant rent arrears (although some reduced prioritisation for low or moderate arrears). 81 Only East Riding of Yorkshire DC in our sample expressed a clear limitation period as regards rent arrears - two years. The same is true of nuisance/anti-social behaviour, although here most local authorities provided the generic descriptor followed by lists of relevant conduct. 82 Where authorities excluded applicants above a threshold of behaviour, many would also give less priority to applicants who made it on to the list but with arrears or behaviour below the threshold.

82 (17); dogs (1); domestic violence/racist abuse and/or harassment (15); eviction for breach of tenancy obligation (10); history of anti-social behaviour (9); use of a property for immoral or illegal purposes (11); noise (1); property neglect or damage (9); abuse of staff (7); acts of violence (2).

Similar observations can be made in relation to local connection. 14 local authorities, in fact, departed from the local authority protocol when it came to the specific periods of residence in the locality. The range of these periods was from six months to 10 years (Hillingdon LBC), with one local authority (Kingston-upon-Thames RLBC) providing no specific period. The median specific period was three years (n = 4). All London boroughs surveyed (other than Kingston) had a specific period of residence required, ranging from two years (Havering LBC) to 10 years. Ealing and Greenwich had five-year requirements; Hackney a three-year requirement; Newham a two-year requirement; and the City of London a 12-month requirement. 83 The range here was surprising, in part because of the move to a pan-London allocations scheme and the conflict with regional consistency. Whether or not such lengthier periods survive must be open to some doubt; they are likely to be susceptible to challenge. 84

In relation to rent arrears and nuisance/anti-social behaviour (although less so in relation to the latter), some local authorities offered some degree of redemption. In all such cases, the onus is clearly on the applicant to demonstrate some behavioural change. As regards rent arrears, where the applicant entered into a repayment plan, 20 local authorities across our entire sample took this into account in deciding on qualification, prioritisation, or 'postponement/suspension'. At a more detailed level, however, a number of authorities required the applicant to have been regularly meeting the relevant payments, for example for six months in Stevenage, Hackney, and Hillingdon. For others in our sample, the degree of commitment to repayment of arrears was merely a relevant factor to be taken into account either to bolster or weaken an individual's case.
So, for example, Preston BC expresses its qualification criteria in part as applying where the applicant has a housing-related debt of up to £1,000 and has not made a repayment plan which has been maintained in accordance with the policy; however, an applicant will qualify if the repayment plan has been maintained for a minimum of three months for debts of up to £500.

Five local authorities across our sample offer 'redemption' to those who might otherwise have been excluded for nuisance/anti-social behaviour. The usual requirement was that the applicant has changed their behaviour for a certain period. Northumberland DC, for example, may exercise its discretion in favour of qualification where there is evidence of the applicant's improved behaviour over a sustained period of time; Hillingdon LBC, which does not exclude but 'verifies' applicants, will not so verify an applicant unless they can demonstrate a change in their behaviour for a minimum of 12 months at the time of an allocation.

QUALIFICATION AND SOCIAL THEORY
In this section, we are concerned with the ways in which social theory has been, or might be, deployed to explain social housing allocations. We draw attention to four particular explanations: a form of advanced liberalism; actuarialism and risk; class war conservatism; and authoritarian liberalism. Our argument is that our data disrupt such neat theoretical explanations, and that we should be led by our data rather than resort to the cookie cutter.

Some qualification criteria have the ethical self at their heart - what we have termed 'redemption'. They fit neatly with post-Foucauldian governmentality studies which focus on advanced or neo-liberal political rationalities exhorting people to work on themselves. They promise to remove the stain of disqualification if the applicants improve themselves (such as, for example, by making and keeping an arrangement to pay back rent arrears). Applicants can break the 'circuit of exclusion'. However, that explanation is rather too simple because, for some, that stain cannot be removed whatever action the applicant takes. Local connection is neither liberal, nor neo-liberal, nor advanced liberal as a political rationality. There is no apparent purpose to the exercise of the exclusionary power in this case because it does not affect an individual's self-government. 85

Perhaps an actuarial approach, with which criminal justice scholars are familiar, 86 offers a better explanation. As Dean puts it, risk 'is a way of representing events in a certain form so they might be made governable in particular ways, with particular techniques and for particular goals'. 87 Cowan, Pantazis and Gilroy have considered the way social housing allocation embeds risk-based assessment and management within its processes. They claim that, in social housing selection processes, risk has always been the central principle; this has simply become more apparent in recent years. Selection processes have been designed to assess the risks posed by particular individuals both to the management of social housing and to the safety of the community. If a person is regarded as 'risky', they are likely to be excluded from social housing or allocated stock which nobody else wants. 88 That claim was made in the context of a study of the rehousing of sex offenders, in a situation in which the risk management of the offender clearly imbued the entire allocation process. One can see the calculability of risk in relation to some of the exclusions.
So, for example, rent arrears are a risk to a social housing organisation, whose business plan will be predicated on the collection of a particular proportion of rent as its income. The exclusion of a bad payer is a fairly crude risk management device; the re-inclusion of those applicants who have kept to a repayment plan can properly be described as a more fine-tuned risk assessment process. The management of nuisance/anti-social behaviour is financially and emotionally costly to social housing organisations and, thus, the exclusion of those households is readily explicable in the language of risk. However, risk does not provide a general explanation for the complete array of exclusions. It cannot explain local connection. Indeed, the argument made was that social housing provides spaces and places in which the risky can be controlled, whereas our focus here is on the excluded, rather than the punitive containment of households on estates. 89 Thus, our argument departs from the assertion that risk has always been the central principle defining social housing selection.

Drawing on Ralph Miliband's description of the Thatcher Government as exercising 'class war conservatism', some have argued that Coalition Government housing policy was largely completing the job. Hodkinson and Robbins argue that

[h]ousing policy is being used as a strategic intervention to unblock and expand the market, complete the residualisation of social housing and draw people into an ever more economically precarious housing experience in order to boost capitalist interests. 90

On this view, the expanding category of unhouseables provides fodder for an ever more ravenous private rented sector or owner-occupation, which are being fed also on a diet of occupiers' welfare benefits. Thus, the stigmatisation of social housing as providing 'ghettos' for the 'welfare poor' is part of a discursive formation in which capital is benefiting. However, the empirical evidence does not support this proposition either - private landlords are risk averse, unlikely to provide housing to those in receipt of welfare benefits, and mortgage lenders are unlikely to enter such risky markets because of the stringent entry rules post-crash. 91 The qualification criteria predate the Thatcher governments, and the roots of local connection can be found in the local settlement provisions of the Poor Laws. 92

It appears therefore that what we are left with is that no single approach fits all the qualification criteria. A range of potential explanations - path-dependency, institutionalism (in the sense that institutions are enduring collections of rules, structures and standard procedures), and various formats of neo-liberalism - co-exist. The shadow of the past is long. 93 Local connection forms a qualification criterion because of its pertinent history from settlement under the Poor Law onwards, its spatial defence against 'outsiders', and its reinvigoration as a criterion as a result of a certain form of nationalism. 94

89 We are here drawing on Wacquant, n 9 above.

To refer to the qualification criteria as 'neo-liberal' requires some further explanation. After all, these criteria are punitive sanctions and, as discussed above, only some of the authorities surveyed allowed for individuals to 'redeem' themselves and qualify under a scheme. Scholars of neo-liberal governmentality have recognised the authoritarianism inherent within it.
Dean, for example, argues that 'authoritarian foldings' arise as a result of the joining up of state organisations and civil society, such as the tenant participation movement, the enfolding of civil society processes into the political sphere, and the refolding of the values of civil society into the political sphere. The important point to note here is that civil society norms are translated into a set of norms enforceable, if necessary, by sovereign action. 95 This has the benefit of reinforcing what is often forgotten in governmentality studies: that Foucault himself emphasised the triangle of sovereignty-discipline-government, not the displacement of one by the other. Indeed, as Foucault observed, the problem of sovereignty is made more acute than ever by advances in the theory of government. And, as Valverde argued, illiberal practices of moral governance are 'a seldom noticed but irreducible despotism in the heart of the paradigmatic liberal subject's relation to himself'. 96 Dean and Valverde both recognise this as key to appreciating the remoralisation strategies of conditionality in welfare. 97 However, even with this more nuanced appreciation of the term 'neo-liberal', the explanation remains problematic. We now turn to explanations from within housing policy itself. QUALIFICATION CRITERIA AND HOUSING POLICY In this section, we are concerned with the interaction between qualification criteria and housing policy. We consider how they fit with explanations of housing allocations that place housing need, desert, and sustainability at the forefront. Our contention in this paper is that these qualification criteria have produced a new, expanded generation of 'unhouseables'. The greater ability (and apparent willingness) of local government to disqualify more applicants through local exclusions, together with the increasing numbers of excluded households, are evidence of this strain. These are households who are labelled as having some type of former housing deviance - they are the 'outsiders': deviance is not a quality of the act the person commits, but rather a consequence of the application by others of rules and sanctions to an 'offender'. The deviant is one to whom that label has been successfully applied; deviant behavior is behavior that people so label. 98 The significant number of households excluded from social housing lists reflects the increasing number of outsiders, 99 to a certain extent caught in the anomic trap. These are the predominantly risky households and others who can be lumped together. Exclusions are not purely to do with housing 'need'. Indeed, those with rent arrears and past histories of anti-social behaviour are likely to have greater need for social housing because of their inability to access other housing tenures. Fitzpatrick and Stephens make a case that 'when it is said that housing should be allocated to those in most need, what is really meant is that it should be allocated to those who are in greatest housing deprivation', and deprivation should be considered over the long-term. 100 However, the qualification criteria clearly operate as blunt tools independent of housing need. There is an argument that the qualification criteria reflect desert, something which has been particularly prevalent in social housing allocation schemes since their foundation. 101 The justification for desert in allocation schemes is that social housing 'consumption provides utility not only to those who are housed, but also to those who provide it ("society")'.
102 If one interprets desert in the manner of 'moral rectitude', 103 one can see central government's prescription that armed service personnel automatically qualify as a clear example of desert, 104 but local connection provisions stand in defiance of desert. Another way of analysing the qualification criteria, which is less pejorative, is through the label now current in social housing of the need for 'sustainability'. Sustainability has a personal, estate, and global implication for social housing. The personal effect is that tenancies should be sustainable, and considerable management effort goes into that purpose. An estate effect is that estates should be sustainable, meaning that they should not be allowed to 'slip', for example as a result of the actions of a 'bad apple' or a consequence of an unfixed 'broken window'. As significant are the increasing concerns about a nebulous category of 'outsiders', often regarded as persons from abroad, occupying estates so that family members of households already living on the estate cannot be housed there. 105 The implication is that social housing has come to be seen as a residualised tenure - a numerically declining tenure with concentrations of poverty: 'the poverty of existing social tenants is almost certainly a contributory factor in the unattractiveness of the sector for better-off groups'. 106 Social housing and its occupants are stigmatised 'as particular locales where social pathologies and problems flourish'. 107 In order to destigmatise it, it is necessary to exclude some households so that the positive role models required can be housed - itself redolent of the 1938 Central Housing Advisory Committee report which advocated 'The bad tenant will learn more readily by eye than by ear: example is better than precept'. 108 Sustainability implies conditionality: the purpose of social housing is now linked with broader welfare policy, 109 something which has become current since the mid-1990s and led to the fin de siècle academic debate about the 'end of housing policy'. 110 Never mind that the data does not support the assertions of ghettos with patterns of worklessness and benefit dependency, 111 nor the powerful role of the state in the production of 'advanced marginality' 112 amid the austerity trope. 113 The sustainability narrative is certainly powerful but also contradictory. If social housing is 'ambulance service' housing, so that access is only granted to those with severe vulnerabilities, 114 then the stigma will be hard to shift. However, despite its inherent contradictions, this narrative provides an underpinning that reflects the backdrop to this new set of problematisations about social housing. To put it another way, exclusions can be seen as a response to the decline in housing finance. The management of social housing has been consistently squeezed since the mid-1970s through central government cuts and other processes, 115 such as bidding for loans which reward those with lower management costs. 116 The other major factor here is the numerical decline in housing stock - there is now less social housing stock than private rented stock for the first time since the 1960s, and the increased tentacles of the right to buy under the Coalition Government continued to reduce the stock (the same policy was also to be found in the 2015 Conservative Party manifesto).
117 The basic problem with transparent choice-based allocation schemes which reflect housing need in an area is that they accentuate expectations when those expectations patently cannot be met. 118 As Henderson and Karn observed in their classic study of institutional discrimination in Birmingham's allocation scheme in the 1970s, [w]hat emerges most forcefully from this study is that the management of public sector housing cannot be separated from its production. Only in the production of more suitable types of housing for families and for the elderly and single, and in the constant upgrading of existing housing can many allocation problems be brought within manageable limits. 119 This is what frames social housing now as 'transitional' and 'ambulance service' - terms which amount to the same thing, namely temporary housing - and also underpins the need for this ever-greater category of unhouseables. This understanding of sustainability as the guiding rationality ties in with Wacquant's development of Bourdieu's notion of the 'bureaucratic field'. 120 As befits one of Bourdieu's students and collaborators, 121 Wacquant's conclusions are broad, stark and critical. 122 He argues that social and penal policy should not be isolated from each other because they 'function in tandem at the bottom of the structure of classes and places'. 123 He begins with a treatise on social in/security, neo-Darwinism, and penality, which results partly in the 'most disruptive elements' being 'neutralize[d] and warehouse[d]'. 124 What, for our purposes, is significant about this argument is that, like Bourdieu, Wacquant is rigorously empirical. It is this which enables him, for example, to differentiate the specificities of urban marginality in the US and in France. 125 It is this methodological proposition that we advocate, 126 one which links method with institutional analysis, and to which we return in our conclusion. Theory is derived from our data but not completely abstracted from the macrostructural determinants that, although ostensibly absent from the neighbourhood, still govern the practices and representations of its residents because they are inscribed in the material distribution of resources and social possibles as well as lodged inside bodies in the form of categories of perception, appreciation and action. 127 If this is the case, then, of course, our conclusions from the data are likely to be different and equally subjective. 128 Thus, sustainability operates as a link between what Wacquant describes as the tension between the left hand and right hand of the state - between the social ministries of state responsible for housing and welfare, and the enforcing state, with its economic discipline, resort to punitiveness, and judicial enforcement. 129 It may be the case that what we now have are segregated spaces of 'advanced marginality', 130 in which community bonds are disintegrating, where stigmatisation and marginalisation are rife, and which operate discursively as framing the 'broken society'. 131 But, and this is distinctive about our data, rather than concentrating poverty in degraded spaces, social housing qualification criteria provide a crude mechanism to redress and distort those problems. Local connection provisions reinforce community bonds, enabling management to govern more efficiently. Rent arrears and nuisance/anti-social behaviour exclusions destigmatise and offer the social housing in an area an opportunity for redemption in its own right.
However, what Wacquant emphasises is the crucial role of the state and capital in the production of such spaces and, indeed, the response. 132 The right hand of the state, through the effects of austerity policies and other economic practices (such as the whittling away of social housing subsidies), has produced this current situation, which others are implementing. That is not to say that those others are coerced (expressly or impliedly) into developing these criteria - precisely the reverse. Indeed, the local state's engagement of these criteria is significant, probably not as an explicit response to advanced marginality but to local and national politics. 125 Wacquant, Urban Outcasts n 9 above. 126 Although, we should add, we are not specifically advocating a Bourdieusian methodology, but empirical and analytical rigour. 127 Wacquant, Urban Outcasts n 9 above, 10; for discussion of the representations of social housing residents, see Jacobs, Kemeny and Manzi, n 73 above; Cowan, n 35 above. 128 See A. Sarat, 'Off to meet the wizard: Beyond validity and reliability in the search for a post-empiricist sociology of law' (1990) 15 Law and Social Inquiry 155. 129 The latter is particularly evident in Wacquant, Punishing the Poor n 122 above. 130 Wacquant, Urban Outcasts n 9 above. 131 See, for discussion, Hancock and Mooney, n 9 above. 132 His understanding of neo-liberalism is inherently sociological, as opposed to the governmentality scholars - on which, see Larner, n 15 above. It should be remembered that all of our sample local authorities had qualification criteria of one kind or another. What is clear is that the onus is on the applicant household to prove that they are not unhouseable. They must have a clean housing record. And, just as in the old days, what the schemes analysed reinforce is the role of discretionary judgment by housing officers. 133 Definitions of 'serious' rent arrears or 'unacceptable behaviour' are left open. In this sense, we have returned to the 1970s, when discretion was depicted as the bug in the system - a source of deviance which allowed short-term management goals to compromise the principle of social justice. It was the smokescreen behind which housing departments infused an agreed hierarchy of needs with a range of other, more dubious, allocative principles. 134 It was this kind of behaviour which drew such considerable criticism from both left and right in the 1980s. 135 CONCLUSION Social housing's purpose has always been unclear. It was not part of the post-World War II welfare settlement, and, originally, it was too expensive for the poor. 136 It is easily manipulated as a result. There has always been a category of 'unhouseables', and the extent of that category is likely to have varied according to spatial factors and local/national political economy. Drawing on a textual analysis of the qualification criteria of 50 local authority housing allocation schemes, our argument is that a process of net-widening and mesh-thinning has occurred, which has facilitated an increase in the number of households excluded from social housing - a hiving off of a sub-set of 'the dispersed and disparate populations caught in the pincer of social and spatial marginalization'. 137 As we have identified, previous explorations of social housing allocation schemes have left us ill-equipped to conceptualise this process. Schemes which emphasise local connection provide the most significant challenge to those explanations.
Other challenges come from sanctions which punish what we have termed housing deviance, particularly when schemes appear to offer no chance of redemption. On one view, the fairly simple response is that the growth of this category of unhouseables is due to endogenous factors - a decline in the social housing stock and in the ability of social housing management to manage the stock due to financial constraints. However, there are also many other factors: the bureaucratic rule in which the social agencies rub up against the penal complex; the willingness by some to allow for redemption, provided the applicant demonstrably changes their behaviour; a return of the discretionary judgment for which social housing allocation was criticised in the latter part of the twentieth century; and the vain politics of the government's good news that they have reduced the size of waiting lists, thus reducing housing need by the fairly simple rewriting of the rules. 138 In this way, it becomes possible to argue that recent incarnations of localism should be viewed as endowing local government with greater flexibility in the management of their housing stock rather than benefiting would-be social tenants. 139 As we have argued through the analysis of 50 local authority social housing allocation schemes, no single theoretical position provides a complete explanation - each provides a partial view on certain aspects. Wacquant's observations come closest, but that is most likely because they are grounded in data, as ours are. Where Wacquant's analysis differs from our own is his focus on spatial segregation of the urban marginalia as an outcome of policies and practices - whereas, in our study, the ultimate housing outcome of the 'unhouseables' is neither known nor inscribed, beyond the label of 'private renting' or 'homeless'. Our argument is that a 'cookie-cutter' approach to social theory fails to offer a meaningful and satisfactory explanation of social housing problems. What we are advocating, however, is an attentiveness to method. 140 Our call is for the development of social theories that are grounded in data. Cookie-cutter explanations that begin from a particular theoretical perspective do not provide much assistance because they are not designed to provide such explanations. They are themselves contextualised and embedded. They may provide us with structural and analytical tools, but slavish adaptations and translations to different empirical instances are themselves problematic. In part, this reflects the unique and contextually specific features of experiments in neo-liberalism, and the enfolding of new and old ideas. 141 If socio-legal studies tell us anything, it is that the shadow of the past remains, albeit mobile with time, so that a more sociological approach, attendant to specificities and counter-factual scenarios, provides an important corrective. 138 See, for example, Grant Shapps, the then housing Minister, 'Reforms will lead to first cut in waiting lists for 22 years' at https://www.gov.uk/government/news/reforms-will-lead-to-firstcut-in-waiting-lists-for-22-years (last accessed 27 July 2015). The corollary of our analysis is that synchronic and comparative static institutionalist analyses that provide a bare, partial picture should be rejected in favour of a diachronic approach.
A diachronic analysis concerns itself with the development and change of a subject over time, akin to a 'historical' construction, whereas a synchronic analysis is confined to analysis of a particular snapshot in time. Diachronic analyses provide a more complicated picture, certainly in this context, through which continuities and discontinuities can be traced over time. 142 Wacquant makes this move in his study of urban poverty, arguing that singular events (such as riots) can only be viewed diachronically: to forget that urban space is a historical and political construction in the strong sense of the term is to risk (mis)taking for 'neighbourhood effects' what is nothing more than the spatial retranslation of economic and social differences. 143 Thus, time itself becomes an important object of study, one which is often left out of socio-legal analysis, as an important adjunct to space, for example. 144 And time here is not thought about as linear, but as plural and complex, so that it is mashed up. 145 In our study, we find layers of uneven shapes - like the rings in a tree reflecting particular weather conditions, or uneven substrata - which demonstrate the formalisations of different common sense interventions at different moments, all of which coalesce into current housing allocation schemes. 146 Our final point is that political processes have been subordinated to an understanding of housing 'crisis', which constructs the lack of affordable social housing as requiring such interventions as have increased the numbers of 'unhouseables' in the name of localism. Central government can now tell a 'good news' story about declining housing need because, as an apparent indicator of housing need, housing waiting lists are getting shorter. This dominant narrative is being met with alternative, subaltern processes of resistance. One of the lessons of recent housing politics was the rise of grassroots housing activism, combined with a disconnect between politicians and the electorate. 147 Law has provided one significant method of challenging the adoption of these criteria. It represents one of the remaining mechanisms through which authoritarian interventions can be challenged in the housing sphere. Law's power remains visible. Instinctively, this feels odd and problematic because the judicial review process provides a method through which individuals can challenge a decision made on behalf of the collective, after consultation, in the allocation of scarce resources. 148 However, law appears to be a significant mechanism for change, despite the existence of groups struggling against bureaucracies in high-pressure areas, agitating for change and squatting empty properties (whose occupiers are, ironically, evicted by the proper application of law). 149 Further, the developing jurisprudence on this crucial issue of allocations does not speak with one voice and is, perhaps paradoxically, attendant to the local. 150 We have already noted the potential issues with local connection as a policy and the striking down of Hammersmith and Fulham LBC's allocation scheme, which sought to exclude homeless households. There must now be a very real prospect of a cascading effect of nervousness amongst other local authority housing teams following that decision. 151 Given our findings, that sense of nervousness is likely to be pretty much universal. Some long-cherished and/or overlaid assumptions appear to be challengeable.
Building on the Hammersmith experience, there must be some doubt about the legality of some of the other qualification criteria, especially if their effect is to exclude households to which the local authority is required to give reasonable preference.
Accuracy First: Selecting a Differential Privacy Level for Accuracy-Constrained ERM Traditional approaches to differential privacy assume a fixed privacy requirement ε for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint. As differential privacy is increasingly deployed in practical settings, it may often be that there is instead a fixed accuracy requirement for a given computation and the data analyst would like to maximize the privacy of the computation subject to the accuracy constraint. This raises the question of how to find and run a maximally private empirical risk minimizer subject to a given accuracy requirement. We propose a general "noise reduction" framework that can apply to a variety of private empirical risk minimization (ERM) algorithms, using them to "search" the space of privacy levels to find the empirically strongest one that meets the accuracy constraint, incurring only logarithmic overhead in the number of privacy levels searched. The privacy analysis of our algorithm leads naturally to a version of differential privacy where the privacy parameters are dependent on the data, which we term ex-post privacy, and which is related to the recently introduced notion of privacy odometers. We also give an ex-post privacy analysis of the classical AboveThreshold privacy tool, modifying it to allow for queries chosen depending on the database. Finally, we apply our approach to two common objectives, regularized linear and logistic regression, and empirically compare our noise reduction methods to (i) inverting the theoretical utility guarantees of standard private ERM algorithms and (ii) a stronger, empirical baseline based on binary search. Introduction and Related Work Differential Privacy [6,7] enjoys over a decade of study as a theoretical construct, and a much more recent set of large-scale practical deployments, including by Google [9] and Apple [10]. As the large theoretical literature is put into practice, we start to see disconnects between assumptions implicit in the theory and the practical necessities of applications. In this paper we focus our attention on one such assumption in the domain of private empirical risk minimization (ERM): that the data analyst first chooses a privacy requirement, and then attempts to obtain the best accuracy guarantee (or empirical performance) that she can, given the chosen privacy constraint. Existing theory is tailored to this view: the data analyst can pick her privacy parameter ε via some exogenous process, and either plug it into a "utility theorem" to upper bound her accuracy loss, or simply deploy her algorithm and (privately) evaluate its performance. There is a rich and substantial literature on private convex ERM that takes this approach, weaving tight connections between standard mechanisms in differential privacy and standard tools for empirical risk minimization. These methods for private ERM include output and objective perturbation [3,4,13,17], covariance perturbation [18], the exponential mechanism [2,15], and stochastic gradient descent [2,5,11,19,20]. While these existing algorithms take a privacy-first perspective, in practice, product requirements may impose hard accuracy constraints, and privacy (while desirable) may not be the overriding concern. In such situations, things are reversed: the data analyst first fixes an accuracy requirement, and then would like to find the smallest privacy parameter consistent with the accuracy constraint.
Here, we find a gap between theory and practice. The only theoretically sound method available is to take a "utility theorem" for an existing private ERM algorithm and solve for the smallest value of ε (the differential privacy parameter) - and other parameter values that need to be set - consistent with her accuracy requirement, and then run the private ERM algorithm with the resulting ε. But because utility theorems tend to be worst-case bounds, this approach will generally be extremely conservative, leading to a much larger value of ε (and hence a much larger leakage of information) than is necessary for the problem at hand. Alternately, the analyst could attempt an empirical search for the smallest value of ε consistent with her accuracy goals. However, because this search is itself a data-dependent computation, it incurs the overhead of additional privacy loss. Furthermore, it is not a priori clear how to undertake such a search with nontrivial privacy guarantees, for two reasons: first, the worst case could involve a very long search which reveals a large amount of information, and second, the selected privacy parameter is now itself a data-dependent quantity, and so it is not sensible to claim a "standard" guarantee of differential privacy for any finite value of ε ex-ante. In this paper, we describe, analyze, and empirically evaluate a principled variant of this second approach, which attempts to empirically find the smallest value of ε consistent with an accuracy requirement. We give a meta-method that can be applied to several interesting classes of private learning algorithms and introduces very little privacy overhead as a result of the privacy-parameter search. Conceptually, our meta-method initially computes a very private hypothesis, and then gradually subtracts noise (making the computation less and less private) until a sufficient level of accuracy is achieved. One key technique that saves significant factors in privacy loss over naive search is the use of correlated noise generated by the method of [14], which formalizes the conceptual idea of "subtracting" noise without incurring additional privacy overhead. In order to select the most private of these queries that meets the accuracy requirement, we introduce a natural modification of the now-classic AboveThreshold algorithm [7], which iteratively checks a sequence of queries on a dataset and privately releases the index of the first to approximately exceed some fixed threshold. Its privacy cost increases only logarithmically with the number of queries. We provide an analysis of AboveThreshold that holds even if the queries themselves are the result of differentially private computations, showing that if AboveThreshold terminates after t queries, one only pays the privacy costs of AboveThreshold plus the privacy cost of revealing those first t private queries. When combined with the above-mentioned correlated noise technique of [14], this gives an algorithm whose privacy loss is equal to that of the final hypothesis output - the previous ones coming "for free" - plus the privacy loss of AboveThreshold. Because the privacy guarantees achieved by this approach are not fixed a priori, but rather are a function of the data, we introduce and apply a new, corresponding privacy notion, which we term ex-post privacy, and which is closely related to the recently introduced notion of "privacy odometers" [16].
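To make the naive "invert the utility theorem" baseline concrete, the following minimal sketch bisects a generic worst-case bound for the smallest acceptable ε. The bound function and helper name are purely illustrative assumptions (this is not the exact bound of any cited algorithm); the only structure used is that a worst-case excess-risk bound α(ε) is monotone decreasing in ε.

```python
def smallest_eps_from_theory(utility_bound, alpha, lo=1e-6, hi=1e6, iters=100):
    # utility_bound(eps) is assumed monotone decreasing in eps.
    for _ in range(iters):
        mid = (lo * hi) ** 0.5            # geometric bisection over eps
        if utility_bound(mid) <= alpha:
            hi = mid                       # accurate enough; try a smaller eps
        else:
            lo = mid
    return hi

# Illustrative bound shaped like c1/(n*lam*eps) + (c2/(n*lam*eps))**2:
n, lam = 100_000, 0.005
bound = lambda eps: 1.0 / (n * lam * eps) + (1.0 / (n * lam * eps)) ** 2
eps_theory = smallest_eps_from_theory(bound, alpha=0.05)
```

Because the bound holds in the worst case, the ε it returns is typically far larger than the data at hand requires - exactly the conservatism the rest of the paper works around.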
In Section 4, we empirically evaluate our noise reduction meta-method, which applies to any ERM technique which can be described as a post-processing of the Laplace mechanism. This includes both direct applications of the Laplace mechanism, like output perturbation [4]; and more sophisticated methods like covariance perturbation [18], which perturbs the covariance matrix of the data and then performs an optimization using the noisy data. Our experiments concentrate on ℓ2-regularized least-squares regression and ℓ2-regularized logistic regression, and we apply our noise reduction meta-method to both output perturbation and covariance perturbation. Our empirical results show that the active, ex-post privacy approach massively outperforms inverting the theory curve, and also improves on a baseline "ε-doubling" approach. 2 Privacy Background and Tools Differential Privacy and Ex-Post Privacy Let X denote the data domain. We call two datasets D, D′ ∈ X* neighbors (written as D ∼ D′) if D can be derived from D′ by replacing a single data point with some other element of X. Definition 2.1 (Differential Privacy [6]). Fix ε ≥ 0. A randomized algorithm A : X* → O is ε-differentially private if for every pair of neighboring data sets D ∼ D′ ∈ X* and for every event S ⊆ O, Pr[A(D) ∈ S] ≤ exp(ε) · Pr[A(D′) ∈ S]. We call exp(ε) the privacy risk factor. It is possible to design computations that do not satisfy the differential privacy definition, but whose outputs are private to an extent that can be quantified after the computation halts. For example, consider an experiment that repeatedly runs an ε′-differentially private algorithm, until a stopping condition defined by the output of the algorithm itself is met. This experiment does not satisfy ε-differential privacy for any fixed value of ε, since there is no fixed maximum number of rounds for which the experiment will run (for a fixed number of rounds, a simple composition theorem, Theorem 2.5, shows that the ε-guarantees in a sequence of computations "add up"). However, if ex post we see that the experiment has stopped after k rounds, the data can in some sense be assured an "ex-post privacy loss" of only kε′. Rogers et al. [16] initiated the study of privacy odometers, which formalize this idea. Their goal was to develop a theory of privacy composition when the data analyst can choose the privacy parameters of subsequent computations as a function of the outcomes of previous computations. We apply a related idea here, for a different purpose. Our goal is to design one-shot algorithms that always achieve a target accuracy but that may have variable privacy levels depending on their input. Writing Loss(o) for the worst-case (over neighboring datasets) log-likelihood ratio of observing output o, an algorithm is E-ex-post private if Loss(o) ≤ E(o) for every output o; we refer to exp(Loss(o)) as the ex-post privacy risk factor. Note that if E(o) ≤ ε for all o, A is ε-differentially private. Ex-post differential privacy has the same semantics as differential privacy, once the output of the mechanism is known: it bounds the log-likelihood ratio of the dataset being D vs. D′, which controls how an adversary with an arbitrary prior on the two cases can update her posterior. Differential Privacy Tools Differentially private computations enjoy two nice properties: post-processing and composition. Post-processing implies that, for example, every decision process based on the output of a differentially private algorithm is also differentially private. Theorem 2.5 (Composition [6]). Let A₁ : X* → O, A₂ : X* → O be algorithms that are ε₁- and ε₂-differentially private, respectively. Then the algorithm A releasing both outputs, A(D) = (A₁(D), A₂(D)), is (ε₁ + ε₂)-differentially private. The composition theorem holds even if the composition is adaptive - see [8] for details.
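As a toy version of the repeat-until-success experiment just described, the following sketch (an illustration, not code from the paper) repeats an ε′-private noisy check until it fires. No finite ε holds ex ante, but if the loop stops after k rounds, simple composition certifies an ex-post privacy loss of kε′.

```python
import numpy as np

rng = np.random.default_rng(0)

def repeat_until_success(data, threshold, eps_step):
    """Each round is an eps_step-DP check: a sensitivity-1 count plus
    Laplace(1/eps_step) noise, compared against a fixed threshold. Stopping
    after k rounds carries an ex-post privacy loss of k * eps_step."""
    k = 0
    while True:
        k += 1
        if data.sum() + rng.laplace(scale=1.0 / eps_step) > threshold:
            return k, k * eps_step  # (rounds taken, ex-post privacy loss)

rounds, ex_post_loss = repeat_until_success(np.ones(50), threshold=60, eps_step=0.1)
```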
The Laplace mechanism. The most basic subroutine we will use is the Laplace mechanism. The Laplace distribution centered at 0 with scale b is the distribution with probability density function $\mathrm{Lap}(z \mid b) = \frac{1}{2b} e^{-|z|/b}$. We say X ∼ Lap(b) when X has Laplace distribution with scale b. Let f : X* → R^d be an arbitrary d-dimensional function. The ℓ1 sensitivity of f is defined to be $\Delta_1(f) = \max_{D \sim D'} \|f(D) - f(D')\|_1$. The Laplace mechanism with parameter ε simply adds noise drawn independently from Lap(Δ₁(f)/ε) to each coordinate of f(x). Gradual private release. Koufogiannis et al. [14] study how to gradually release private data using the Laplace mechanism with an increasing sequence of ε values, with a privacy cost scaling only with the privacy of the marginal distribution on the least private release, rather than the sum of the privacy costs of independent releases. For intuition, the algorithm can be pictured as a continuous random walk starting at some private data v with the property that the marginal distribution at each point in time is Laplace centered at v, with variance increasing over time. Releasing the value of the random walk at a fixed point in time gives a certain output distribution, for example v̂, with a certain privacy guarantee ε. To produce v̂′ whose ex-ante distribution has higher variance (is more private), one can simply "fast forward" the random walk from a starting point of v̂ to reach v̂′; to produce a less private v̂″, one can "rewind." The total privacy cost of releasing several such points is the maximum of their ε values, e.g. max{ε, ε′}, because, given the "least private" point (say v̂), all "more private" points can be derived as post-processings given by taking a random walk of a certain length starting at v̂. Note that were the Laplace random variables used for each release independent, the composition theorem would require summing the ε values of all releases. AboveThreshold with Private Queries Our high-level approach to our eventual ERM problem will be as follows: generate a sequence of hypotheses θ₁, . . . , θ_T, each with increasing accuracy and decreasing privacy; then test their accuracy levels sequentially, outputting the first one whose accuracy is "good enough." The classical AboveThreshold algorithm [7] takes in a dataset and a sequence of queries and privately outputs the index of the first query to exceed a given threshold (with some error due to noise). We would like to use AboveThreshold to perform these accuracy checks, but there is an important obstacle: for us, the "queries" themselves depend on the private data. 2 A standard composition analysis would involve first privately publishing all the queries, then running AboveThreshold on these queries (which are now public). Intuitively, though, it would be much better to generate and publish the queries one at a time, until AboveThreshold halts, at which point one would not publish any more queries. The problem with analyzing this approach is that, a priori, we do not know when AboveThreshold will terminate; to address this, we analyze the ex-post privacy guarantee of the algorithm. 3 Let us say that an algorithm M(D) = (f₁, . . . , f_T) is (ε₁, . . . , ε_T)-prefix-private if for each t, the function that runs M(D) and outputs just the prefix (f₁, . . . , f_t) is ε_t-differentially private. Lemma 2.8 then states, roughly, that if M is a (ε₁, . . . , ε_T)-prefix-private algorithm that returns T queries of bounded sensitivity, then running InteractiveAboveThreshold (IAT) with privacy parameter ε₀ on the queries output by M and halting at time t incurs an ex-post privacy loss of only ε₀ + ε_t. The proof, which is a variant on the proof of privacy for AboveThreshold [7], appears in the appendix, along with an accuracy theorem for IAT. Remark 2.9.
Throughout we study ε-differential privacy, instead of the weaker (ε, δ) (approximate) differential privacy. Part of the reason is that an analogue of Lemma 2.8 does not seem to hold for (ε, δ)-differentially private queries without further assumptions, as the necessity to union-bound over the δ "failure probability" that the privacy loss is bounded for each query can erase the ex-post gains. We leave obtaining similar results for approximate differential privacy as an open problem. Noise-Reduction with Private ERM In this section, we provide a general private ERM framework that allows us to approach the best privacy guarantee achievable on the data given a target excess risk goal. Throughout the section, we consider an input dataset D that consists of n row vectors X₁, X₂, . . . , Xₙ ∈ R^p and a column y ∈ R^n. We will assume that each ‖Xᵢ‖₁ ≤ 1 and |yᵢ| ≤ 1. Let dᵢ = (Xᵢ, yᵢ) ∈ R^{p+1} be the i-th data record. Let ℓ be a loss function such that for any hypothesis θ and any data point (Xᵢ, yᵢ) the loss is ℓ(θ, (Xᵢ, yᵢ)). Given an input dataset D and a regularization parameter λ, the goal is to minimize the following regularized empirical loss function over some feasible set C: $L(\theta, D) = \frac{1}{n}\sum_{i=1}^{n} \ell(\theta, d_i) + \frac{\lambda}{2}\|\theta\|_2^2$. Let θ* = argmin_{θ∈C} L(θ, D). Given a target accuracy parameter α, we wish to privately compute a θ_p that satisfies L(θ_p, D) ≤ L(θ*, D) + α, while achieving the best ex-post privacy guarantee. For simplicity, we will sometimes write L(θ) for L(θ, D). One simple baseline approach is a "doubling method": start with a small ε value, run an ε-differentially private algorithm to compute a hypothesis θ, and use the Laplace mechanism to estimate the excess risk of θ; if the excess risk is lower than the target, output θ; otherwise double the value of ε and repeat the same process. (See the appendix for details.) As a result, we pay for privacy loss for every hypothesis we compute and every excess risk we estimate. In comparison, our meta-method provides a more cost-effective way to select the privacy level. The algorithm takes a more refined set of privacy levels ε₁ < . . . < ε_T as input and generates a sequence of hypotheses θ₁, . . . , θ_T such that the generation of each θ_t is ε_t-private. Then it releases the hypotheses θ_t in order, halting as soon as a released hypothesis meets the accuracy goal. Importantly, there are two key components that reduce the privacy loss in our method: 1. We use Algorithm 1, the "noise reduction" method of [14], for generating the sequence of hypotheses: we first compute a very private and noisy θ₁, and then obtain the subsequent hypotheses by gradually "de-noising" θ₁. As a result, any prefix (θ₁, . . . , θ_k) incurs a privacy loss of only ε_k (as opposed to ε₁ + . . . + ε_k if the hypotheses were independent). 2. When evaluating the excess risk of each hypothesis, we use Algorithm 2, InteractiveAboveThreshold, to determine if its excess risk exceeds the target threshold. This incurs substantially less privacy loss than independently evaluating the excess risk of each hypothesis using the Laplace mechanism (and hence allows us to search a finer grid of values). For the rest of this section, we will instantiate our method concretely for two ERM problems: ridge regression and logistic regression. In particular, our noise-reduction method is based on two private ERM algorithms: the recently introduced covariance perturbation technique of [18], and output perturbation [4].
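Before instantiating the method, a one-dimensional sketch may help fix ideas about component 1. This is an illustration under stated assumptions rather than the paper's Algorithm 1: for a scalar statistic it realizes nested Laplace marginals with the standard mixture transition (keep the less-noisy value with probability (b_{t+1}/b_t)², otherwise add fresh Laplace noise at the larger scale), which a characteristic-function calculation confirms; since every more-private value is then a randomized post-processing of the less-private ones, releasing the prefix x₁, . . . , x_k costs only ε_k.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_reduction_releases(true_value, sensitivity, eps_list):
    """eps_list is increasing (most private first). Internally draw the
    least-private value, then derive each more-private value from it, so
    that releasing the prefix (x_1, ..., x_k) costs only eps_k."""
    scales = [sensitivity / e for e in eps_list]         # b_1 > b_2 > ... > b_T
    xs = [0.0] * len(eps_list)
    xs[-1] = true_value + rng.laplace(scale=scales[-1])  # least private draw
    for t in range(len(eps_list) - 2, -1, -1):
        if rng.random() < (scales[t + 1] / scales[t]) ** 2:
            xs[t] = xs[t + 1]                  # re-use the less noisy value
        else:
            xs[t] = xs[t + 1] + rng.laplace(scale=scales[t])
        # Marginally, xs[t] - true_value ~ Lap(scales[t]), so xs[t] alone is eps_t-DP.
    return xs  # release xs[0], xs[1], ... in order until accurate enough
```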
Covariance Perturbation for Ridge Regression In ridge regression, we consider the squared loss function ℓ((Xᵢ, yᵢ), θ) = ½(yᵢ − ⟨θ, Xᵢ⟩)², and hence the empirical loss over the data set is $L(\theta, D) = \frac{1}{2n}\|y - X\theta\|_2^2 + \frac{\lambda}{2}\|\theta\|_2^2$, where X denotes the (n × p) matrix with row vectors X₁, . . . , Xₙ and y = (y₁, . . . , yₙ). Since the optimal solution for the unconstrained problem has ℓ2 norm no more than √(1/λ) (see the appendix for a proof), we will focus on optimizing θ over the constrained set C = {a ∈ R^p | ‖a‖₂ ≤ √(1/λ)}, which will be useful for bounding the ℓ1 sensitivity of the empirical loss. Before we formally introduce the covariance perturbation algorithm due to [18], observe that the optimal solution θ* can be computed as $\theta^* = \operatorname{argmin}_{\theta \in C} \frac{1}{2n}\left(\theta^\top (X^\top X)\theta - 2\langle X^\top y, \theta\rangle\right) + \frac{\lambda}{2}\|\theta\|_2^2$. In other words, θ* only depends on the private data through X⊤y and X⊤X. To compute a private hypothesis, the covariance perturbation method simply adds Laplace noise to each entry of X⊤y and X⊤X (the covariance matrix), and solves the optimization based on the noisy matrix and vector. The formal description of the algorithm and its guarantee are in Theorem 3.1. Our analysis differs from the one in [18] in that their paper considers the "local privacy" setting, and also adds Gaussian noise whereas we use Laplace. The proof is deferred to the appendix. Theorem 3.1. Fix any ε > 0. For any input data set D, consider the mechanism M that computes $\theta_p = \operatorname{argmin}_{\theta \in C} \frac{1}{2n}\left(\theta^\top (X^\top X + B)\theta - 2\langle X^\top y + b, \theta\rangle\right) + \frac{\lambda}{2}\|\theta\|_2^2$, where B ∈ R^{p×p} and b ∈ R^{p×1} are random Laplace matrices such that each entry of B and b is drawn from Lap(4/ε). Then M satisfies ε-differential privacy, and the output θ_p satisfies a corresponding expected excess-risk bound. In our algorithm CovNR, we will apply the noise reduction method, Algorithm 1, to produce a sequence of noisy versions of the private data (X⊤X, X⊤y): (Z₁, z₁), . . . , (Z_T, z_T), one for each privacy level. Then for each (Z_t, z_t), we will compute the private hypothesis by solving the noisy version of the optimization problem in Equation (1). The full description of our algorithm CovNR is in Algorithm 3, which proceeds as follows. Input: private data set D = (X, y), accuracy parameter α, privacy levels ε₁ < ε₂ < . . . < ε_T, and failure probability γ. Instantiate InteractiveAboveThreshold A = IAT(D, ε₀, −α/2, Δ, ·), with ε₀ = 16Δ(log(2T/γ))/α and Δ a sensitivity bound on the excess-risk queries derived from the radius √(1/λ) of C = {a ∈ R^p | ‖a‖₂ ≤ √(1/λ)}; let θ* = argmin_{θ∈C} L(θ). Compute the noisy data (Z_t, z_t), solve for each hypothesis θ_t, and query IAT with its excess risk; as soon as an accurate hypothesis is found, output it. Output Perturbation for Logistic Regression Next, we show how to combine the output perturbation method with noise reduction for the logistic regression problem. 4 In this setting, the input data consists of n labeled examples (X₁, y₁), . . . , (Xₙ, yₙ), such that for each i, Xᵢ ∈ R^p, ‖Xᵢ‖₁ ≤ 1, and yᵢ ∈ {−1, 1}. The goal is to train a linear classifier given by a weight vector θ for the examples from the two classes. We consider the logistic loss function ℓ(θ, (Xᵢ, yᵢ)) = log(1 + exp(−yᵢ θ⊤Xᵢ)), and the empirical loss is $L(\theta, D) = \frac{1}{n}\sum_{i=1}^{n} \log(1 + \exp(-y_i \theta^\top X_i)) + \frac{\lambda}{2}\|\theta\|_2^2$. The output perturbation method is straightforward: we simply add Laplace noise to perturb each coordinate of the optimal solution θ*. The following is the formal guarantee of output perturbation. Our analysis deviates slightly from the one in [4] since we are adding Laplace noise (see the appendix). Theorem 3.3. Let r = 2√p/(nλε). For any input dataset D, consider the mechanism that first computes θ* = argmin_{θ∈R^p} L(θ), then outputs θ_p = θ* + b, where b is a random vector with its entries drawn i.i.d. from Lap(r). Then M satisfies ε-differential privacy, and θ_p has excess risk O(p/(nλε) + p²/(n²λε²)).
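A short sketch of output perturbation in this setting, using scikit-learn's regularized logistic regression as the ERM solver. The mapping C = 1/(nλ) between scikit-learn's objective and the one above, and the noise scale r, follow the reconstruction of Theorem 3.3 and are assumptions of this sketch, not the paper's reference implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def output_perturbed_logreg(X, y, lam, eps):
    """Fit the regularized logistic ERM, then add i.i.d. Laplace noise to
    each coordinate. sklearn minimizes C * sum(logloss) + 0.5 * ||theta||^2,
    so C = 1/(n * lam) matches (1/n) * sum(logloss) + (lam/2) * ||theta||^2
    up to an overall rescaling of the objective."""
    n, p = X.shape
    clf = LogisticRegression(C=1.0 / (n * lam), penalty="l2",
                             fit_intercept=False, solver="lbfgs").fit(X, y)
    theta_star = clf.coef_.ravel()
    r = 2.0 * np.sqrt(p) / (n * lam * eps)  # noise scale, as in Theorem 3.3
    return theta_star + rng.laplace(scale=r, size=p)
```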
Given the output perturbation method, we can simply apply the noise reduction method NR to the optimal hypothesis θ* to generate a sequence of noisy hypotheses. We will again use InteractiveAboveThreshold to check the excess risk of the hypotheses. The full algorithm OutputNR follows the same structure as Algorithm 3, and we defer the formal description to the appendix. Proof sketch of Theorems 3.2 and 3.4. The accuracy guarantees for both algorithms follow from an accuracy guarantee of the IAT algorithm (a variant on the standard AboveThreshold bound) and the fact that we output θ* if IAT identifies no accurate hypothesis. For the privacy guarantee, first note that any prefix of the noisy hypotheses θ₁, . . . , θ_t satisfies ε_t-differential privacy because of our instantiation of the Laplace mechanism (see the appendix for the ℓ1 sensitivity analysis) and the noise-reduction method NR. The ex-post privacy guarantee then directly follows from Lemma 2.8. Experiments To evaluate the methods described above, we conducted empirical evaluations in two settings. We used ridge regression to predict (log) popularity of posts on Twitter in the dataset of [1], with p = 77 features and subsampled to n = 100,000 data points. Logistic regression was applied to classifying network events as innocent or malicious in the KDD-99 Cup dataset [12], with 38 features and subsampled to 100,000 points. Details of parameters and methods appear in the appendix. In each case, we tested the algorithm's average ex-post privacy loss for a range of input accuracy goals α, fixing a modest failure probability γ = 0.1 (and we observed that excess risks were concentrated well below α/2, suggesting a pessimistic analysis). The results show our meta-method gives a large improvement over the "theory" approach of simply inverting utility theorems for private ERM algorithms. (In fact, the utility theorem for the popular private stochastic gradient descent algorithm does not even give meaningful guarantees for the ranges of parameters tested; one would need an order of magnitude more data points, and even then the privacy losses are enormous, perhaps due to loose constants in the analysis.) To gauge the more modest improvement over DoublingMethod, note that the variation in the privacy risk factor e^ε can still be very large; for instance, in the ridge regression setting of α = 0.05, NoiseReduction has e^ε ≈ 10.0 while DoublingMethod has e^ε ≈ 495; at α = 0.075, the privacy risk factors are 4.65 and 56.6 respectively. Interestingly, for our meta-method, the contribution to privacy loss from "testing" hypotheses (the InteractiveAboveThreshold technique) was significantly larger than that from "generating" them (NoiseReduction). One place where the InteractiveAboveThreshold analysis is loose is in using a theoretical bound on the maximum norm of any hypothesis to compute the sensitivity of queries. The actual norms of the hypotheses tested were significantly lower, which, if taken as guidance to the practitioner in advance, would drastically improve the privacy guarantee of both adaptive methods. A.1 AboveThreshold Proof of Lemma 2.8. Let D, D′ be neighboring databases. We will instead analyze the algorithm that outputs the entire prefix f₁, . . . , f_t when stopping at time t. Because IAT is a post-processing of this algorithm, and privacy can only be improved under post-processing, this suffices to prove the theorem. We wish to show, for all outcomes o = (t, f₁, . . .
, f_t), that Pr[M′(D) = o] ≤ e^{ε₀ + ε_t} · Pr[M′(D′) = o], where M′ denotes the prefix-releasing algorithm and ε₀ is the privacy parameter of InteractiveAboveThreshold. We have directly from the privacy guarantee of InteractiveAboveThreshold that, for every fixed sequence of queries f₁, . . . , f_T, Pr[IAT(D, f₁, . . . , f_T) = t] ≤ e^{ε₀} · Pr[IAT(D′, f₁, . . . , f_T) = t], because the guarantee of InteractiveAboveThreshold is quantified over all data-independent sequences of queries f₁, . . . , f_T, and by definition of the algorithm, the probability of stopping at time t is independent of the identity of any query f_{t′} for t′ > t. Now we can write Pr[M′(D) = o] as the product of the probability of generating the prefix (f₁, . . . , f_t) on D and the probability of halting at time t given those queries. By assumption, M is prefix-private; in particular, for fixed t and any f₁, . . . , f_t, the probability of generating that prefix changes by at most a factor of e^{ε_t} between D and D′. Combining the two factors gives the bound e^{ε₀ + ε_t}, as desired. We also include the following utility theorem. We say that an instantiation of InteractiveAboveThreshold is (α, γ)-accurate with respect to a threshold W and stream of queries f₁, . . . , f_T if, except with probability at most γ, the algorithm outputs a query f_t only if f_t(D) ≥ W − α. A.2 Doubling Method We now formally describe the DoublingMethod discussed in Section 1 and Section 3, and give a formal ex-post privacy analysis. Let θ* = argmin_{θ∈R^p} L(θ). DoublingMethod accepts a list of privacy levels ε₁ < ε₂ < . . . < ε_T, where εᵢ = 2εᵢ₋₁. We show in Claim B.1 that 2 is the optimal factor by which to scale ε. It also takes in a failure probability γ and a black-box private ERM mechanism M with a standard expected excess-risk guarantee. By that assumption, generating the k-th private hypothesis incurs privacy loss ε₁ · 2^{k−1}. By the Laplace mechanism, evaluating the error of the sensitivity-Δ query fᵢ is (2Δ log(T/γ)/α)-differentially private. Theorem 3.6 in [16] then bounds the ex-post privacy loss of outputting the selected hypothesis by the sum of the privacy losses accrued along the way. A.3 Ridge Regression In this subsection, we let ℓ(θ, (Xᵢ, yᵢ)) = ½(yᵢ − ⟨θ, Xᵢ⟩)², and the empirical loss over the data set is $L(\theta, D) = \frac{1}{2n}\|y - X\theta\|_2^2 + \frac{\lambda}{2}\|\theta\|_2^2$, where X denotes the (n × p) matrix with row vectors X₁, . . . , Xₙ and y = (y₁, . . . , yₙ). We assume that for each i, ‖Xᵢ‖₁ ≤ 1 and |yᵢ| ≤ 1. For simplicity, we will sometimes write L(θ) for L(D, θ). First, we show that the unconstrained optimal solution in ridge regression has bounded norm. The following claim provides a bound on the sensitivity of the excess risk, which determines the queries we send to InteractiveAboveThreshold. Claim A.5. Let C be a bounded convex set in R^p with ‖C‖₂ ≤ M, let D and D′ be a pair of adjacent datasets, and let θ* = argmin_{θ∈C} L(θ, D) and θ• = argmin_{θ∈C} L(θ, D′). Then for any θ ∈ C, the difference between the excess risks of θ on D and on D′ is bounded in terms of M and n. The following lemma provides a bound on the ℓ1 sensitivity of the matrix X⊤X and vector X⊤y. Lemma A.6. Fix any i ∈ [n]. Let X and Z be two n × p matrices such that for all rows j ≠ i, X_j = Z_j, and let y, y′ ∈ R^n be such that y_j = y′_j for all j ≠ i. Then ‖X⊤X − Z⊤Z‖₁ + ‖X⊤y − Z⊤y′‖₁ ≤ 4 as long as ‖Xᵢ‖₁, ‖Zᵢ‖₁, |yᵢ|, |y′ᵢ| ≤ 1. Proof. Expanding the differences, only the terms involving row i survive: X⊤X − Z⊤Z = XᵢXᵢ⊤ − ZᵢZᵢ⊤ and X⊤y − Z⊤y′ = Xᵢyᵢ − Zᵢy′ᵢ. Under the stated norm bounds, each difference contributes at most 2 in ℓ1 norm, for a total of 4. This completes the proof. Before we proceed to give a formal proof for Theorem 3.1, we will also give the following basic fact about Laplace random vectors: if b ∈ R^p has entries drawn i.i.d. from Lap(r), then E‖b‖₂ ≤ √(2p) · r. Proof. By Jensen's inequality, E‖b‖₂ ≤ √(E‖b‖₂²); note that by linearity of expectation and the variance of the Laplace distribution, E‖b‖₂² = 2pr². Proof of Theorem 3.1. In the algorithm, we compute Z = X⊤X + B and z = X⊤y + b, where the entries of B and b are drawn i.i.d. from Lap(4/ε). Note that the output θ_p is simply a post-processing of the noisy matrix Z and vector z. Furthermore, by Lemma A.6, the joint vector (Z, z) has sensitivity bounded by 4 with respect to the ℓ1 norm. Therefore, the mechanism satisfies ε-differential privacy by the privacy guarantee of the Laplace mechanism. Next, we will also provide a theoretical result for applying output perturbation (with Laplace noise) to the ridge regression problem.
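Before turning to that result, a brief numpy sketch of the covariance perturbation mechanism just analyzed may be useful. It follows the reconstruction above (entrywise Lap(4/ε) noise on X⊤X and X⊤y) but, for simplicity, solves the unconstrained noisy problem in closed form rather than projecting onto C; symmetrizing the noisy matrix before solving is a choice made here for numerical convenience.

```python
import numpy as np

rng = np.random.default_rng(0)

def covariance_perturbed_ridge(X, y, lam, eps):
    """Privatize X^T X and X^T y with entrywise Laplace(4/eps) noise, then
    minimize (1/2n)(theta^T Z theta - 2 z^T theta) + (lam/2)||theta||^2."""
    n, p = X.shape
    Z = X.T @ X + rng.laplace(scale=4.0 / eps, size=(p, p))  # noisy covariance
    z = X.T @ y + rng.laplace(scale=4.0 / eps, size=p)       # noisy correlations
    A = (Z + Z.T) / 2 + n * lam * np.eye(p)  # symmetrized normal equations
    return np.linalg.solve(A, z)
```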
That result provides us with the "theory curve" for output perturbation in the ridge regression plot of Figure 1a. First, the following sensitivity bound on the optimal solution for L follows directly from the strong convexity of L. Theorem A.9. Let ε > 0 and let C be a bounded convex set with ‖C‖₂ ≤ √(1/λ). Let r = (√(1/λ) + 1)√p/(nλε). Consider the following mechanism M that, for any input dataset D, first computes the optimal solution θ* = argmin_{θ∈C} L(θ) and then outputs θ_p = θ* + b, where b is a random vector with its entries drawn i.i.d. from Lap(r). Then M satisfies ε-differential privacy, and θ_p satisfies a corresponding expected excess-risk bound. Proof. The privacy guarantee follows directly from the use of the Laplace mechanism and the ℓ1 sensitivity bound in Lemma A.8. For the utility bound, expand the excess risk of θ_p = θ* + b around θ* for each data point dᵢ = (Xᵢ, yᵢ). Since each entry in b has mean 0, the linear terms vanish in expectation. In the following, let M = √(1/λ); the remaining terms can be bounded in terms of M, and since each b_s is drawn from Lap(r), their moments are controlled by r. Putting all the pieces together and plugging in the value of r recovers the stated bound. A.4 Logistic Regression In this subsection, the input data D consists of n labelled examples (X₁, y₁), . . . , (Xₙ, yₙ), such that for each i, Xᵢ ∈ R^p, ‖Xᵢ‖₁ ≤ 1, and yᵢ ∈ {−1, 1}. We consider the logistic loss function ℓ(θ, (Xᵢ, yᵢ)) = log(1 + exp(−yᵢ θ⊤Xᵢ)), and our empirical loss is defined as $L(\theta, D) = \frac{1}{n}\sum_{i=1}^{n} \ell(\theta, (X_i, y_i)) + \frac{\lambda}{2}\|\theta\|_2^2$. In output perturbation, the noise needs to scale with the ℓ1 sensitivity of the optimal solution, which is given by Lemma A.10; by the fact that ‖a‖₁ ≤ √p‖a‖₂ for any a ∈ R^p, the ℓ2 sensitivity bound from strong convexity yields the stated ℓ1 result. We will show that the optimal solution for the unconstrained problem has ℓ2 norm no more than √(2 log 2/λ). Claim A.11. The (unconstrained) optimal solution θ* has norm ‖θ*‖₂ ≤ √(2 log 2/λ). Proof. Note that the weight vector θ = 0 has loss log 2. Therefore, L(θ*) ≤ log 2. Since the logistic loss is positive, we know that the regularization term satisfies (λ/2)‖θ*‖₂² ≤ log 2. It follows that ‖θ*‖₂ ≤ √(2 log 2/λ). We will focus on generating hypotheses θ within the set C = {a ∈ R^p | ‖a‖₂ ≤ √(2 log 2/λ)}. Then we can bound the ℓ1 sensitivity of the excess risk using the corresponding result for bounded convex sets. The following fact is useful for our utility analysis for the output perturbation method. Proof of Theorem 3.3. The privacy guarantee follows directly from the use of the Laplace mechanism and the ℓ1 sensitivity bound in Lemma A.10. Since the logistic loss function is 1-Lipschitz, the per-point loss of θ_p = θ* + b differs from that of θ* by at most ‖b‖₂ for any (x, y) in our domain. By Claim A.7 and the properties of the Laplace distribution, the expected norm of b is controlled by r; plugging in the value of r, it follows that we recover the stated bound. We include the full details of OutputNR in Algorithm 5. B.1 Parameters and data For simplicity and to avoid over-fitting, we fixed the following parameters for both experiments: • n = 100,000 (number of data points) • λ = 0.005 (regularization parameter) • γ = 0.10 (requested failure probability) • ε₁ = 4E, where E is the inversion of the theory guarantee for the underlying algorithm. For example, in the logistic regression setting where the algorithm is Output Perturbation, E is the value such that setting ε = E guarantees expected excess risk of at most α. • ε_T = 1.0/n. For NoiseReduction, we choose T = 1000 (maximum number of iterations) and set ε_t = ε₁ r^t for the appropriate r, i.e. r = (ε_T/ε₁)^{1/T}. For the Doubling method, T is equal to the number of doubling steps until ε_t exceeds ε_T, i.e. T = ⌈log₂(ε₁/ε_T)⌉. Features, labels, and transformations.
The Twitter dataset has p = 77 features (dimension of each x), relating to measurements of activity relating to a posting; the label y is a measurement of the "buzz" or success of the posting. Because general experience suggests that such numbers likely follow a heavy-tailed distribution, we transformed the labels by y → log(1 + y) and set the task of predicting the transformed label. The KDD-99 Cup dataset has p = 38 features relating to attributes of a network connection, such as duration of connection, number of bytes sent in each direction, binary attributes, etc. The goal is to classify connections as innocent or malicious, with malicious connections broken down into further subcategories. We transformed three attributes containing likely heavy-tailed data (the first three mentioned above) by xᵢ → log(1 + xᵢ), dropped three columns containing textual categorical data, and transformed the labels into 1 for any kind of malicious connection and 0 for an innocent one. (The feature length p = 38 is after dropping the text columns.) For both datasets, we transformed the data by renormalizing to maximum ℓ1-norm 1. That is, we computed M = maxᵢ ‖xᵢ‖₁ and transformed each xᵢ → xᵢ/M. In the case of the Twitter dataset, we did the same (separately) for the y labels. This is not a private operation (unlike the previous ones) on the data, as it depends precisely on the maximum norm. We do not consider the problem of privately ensuring bounded-norm data, as it is orthogonal to the questions we study. The code for the experiments is implemented in python3 using the numpy and scikit-learn libraries. Figure 2 plots the empirical accuracies of the output hypotheses, to ensure that the algorithms are achieving their theoretical guarantees. In fact, they do significantly better, which is reasonable considering the private testing methodology: set a threshold significantly below the goal α, add independent noise to each query, and accept only if the query plus noise is smaller than the threshold. Combined with the requirement to use tail bounds, the accuracies tend to be significantly smaller than α and achieved with significantly higher probability than 1 − γ. (Recall: this is not necessarily a good thing, as it probably costs a significant amount of extra privacy.) Figure 3 shows the breakdown in privacy losses between the "privacy test" and the "hypothesis generator". In the case of NoiseReduction, these are AboveThreshold's ε_A and the ε_t of the private method, Covariance Perturbation or Output Perturbation. In the case of Doubling, these are the accrued ε due to tests at each step and due to Covariance Perturbation or Output Perturbation for outputting the hypotheses. B.2 Additional results This shows that the majority of the privacy loss is due to testing the hypotheses at each privacy level. One reason might be that the cost of privacy tests depends heavily on certain constants, such as the norm of the hypothesis being tested. This norm is upper-bounded by a theoretical maximum which is used, but a smaller maximum would allow for significantly higher computed privacy levels for the same algorithm. In other words, the analysis might be loose compared to an analysis that knows the norms of the hypotheses, although this is a private quantity. Figure 4 supports the conclusion that, generally, the theoretical maximum was very pessimistic in our cases.
Note that a tenfold reduction in norm gives a tenfold reduction in privacy level for logistic regression, where sensitivity is linear in maximum norm, and a hundred-fold reduction for ridge regression. B.3 Supporting theory Claim B.1. For the "doubling method", the factor-2 increase in ε at each time step gives the optimal worst-case ex-post privacy loss guarantee. Proof. In a given setting, suppose ε* is the "final" level of privacy at which the algorithm would halt. With a factor 1/r increase for r < 1, the final loss may be as large as ε*/r. The total loss is the sum of that loss and all previous losses, i.e. if t steps were taken: (ε*/r) + r · (ε*/r) + · · · + r^{t−1} · (ε*/r) = (ε*/r)(1 + r + · · · + r^{t−1}) → ε*/(r(1 − r)). The final expression implies that setting r = 0.5, i.e. (1/r) = 2, is optimal. The asymptotic → is justified by noting that the starting ε₁ may be chosen arbitrarily small, so there exist parameters that exceed the value of that summation for any finite t; and the summation tends to 1/(1 − r) as t → ∞.
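Written out, the geometric sum in the proof of Claim B.1 is

```latex
\frac{\varepsilon^*}{r}\left(1 + r + \cdots + r^{t-1}\right)
= \frac{\varepsilon^*}{r}\cdot\frac{1-r^t}{1-r}
\;\xrightarrow{\,t\to\infty\,}\; \frac{\varepsilon^*}{r(1-r)},
\qquad
\min_{0<r<1}\frac{1}{r(1-r)} \text{ is attained at } r=\tfrac12,
```

so the worst-case ex-post loss is at best 4ε*, achieved exactly by doubling (1/r = 2).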
Characterization of individuals at high risk of developing melanoma in Latin America: bases for genetic counseling in melanoma

Purpose: CDKN2A is the main high-risk melanoma-susceptibility gene, but it has been poorly assessed in Latin America. We sought to analyze CDKN2A and MC1R in patients from Latin America with familial melanoma and sporadic multiple primary melanoma (SMP) and compare the data with those for patients from Spain to establish bases for melanoma genetic counseling in Latin America. Methods: CDKN2A and MC1R were sequenced in 186 Latin American patients from Argentina, Brazil, Chile, Mexico, and Uruguay, and in 904 Spanish patients. Clinical and phenotypic data were obtained. Results: Overall, 24 and 14% of melanoma-prone families in Latin America and Spain, respectively, had mutations in CDKN2A. Latin American families had CDKN2A mutations more frequently (P = 0.014) than Spanish ones. Of patients with SMP, 10% of those from Latin America and 8.5% of those from Spain had mutations in CDKN2A (P = 0.623). The most recurrent CDKN2A mutations were c.-34G>T and p.G101W. Latin American patients had fairer hair (P = 0.016) and skin (P < 0.001) and a higher prevalence of MC1R variants (P = 0.003) compared with Spanish patients. Conclusion: The inclusion criteria for genetic counseling of melanoma in Latin America may be the same criteria used in Spain, as suggested in areas with low to medium incidence: SMP with at least two melanomas, or families with at least two cases among first- or second-degree relatives.

INTRODUCTION

Melanoma is the most aggressive of the common skin cancers because of its tendency to metastasize. Its incidence is rapidly increasing, especially among Caucasian populations. Melanoma is the second most frequently diagnosed cancer among patients younger than 30 years of age,1 and survival among patients with metastases is around 15%.2 Identification of individuals at high risk of developing melanoma is necessary since an early diagnosis improves the disease prognosis.3 Melanoma is caused by the interaction of environmental, phenotypic, and genetic factors. The main environmental risk factor for melanoma is sun exposure.4 Individuals with fair skin, red hair, and/or a high nevi count have an increased risk of developing melanoma.5

To date, CDKN2A, which encodes the tumor suppressor proteins p16INK4A and p14ARF, is the major high-risk gene involved in melanoma susceptibility.6 CDKN2A has been widely studied in melanoma patients from the United States, Europe, and Australia.6 The frequency of germline mutations in CDKN2A varies across populations (5-72%) and depends on the selection criteria used.6,7 Haplotype analysis indicates a founder effect for most of the recurrent mutations detected.8 Identification of the prevalence of CDKN2A mutations in patients at high risk for melanoma and the correlation of these mutations with clinical data has been crucial for establishing genetic counseling for melanoma.

Melanoma risk may also be modulated by common genetic variants acting as low- to medium-penetrance variants.9 MC1R plays a key role in pigmentation and is responsible for phenotypic characteristics such as hair and skin color and the capacity to respond to ultraviolet radiation.10 Several MC1R variants are associated with a moderately increased melanoma risk and also modulate the effect of CDKN2A mutations in carriers.11
Genetic counseling and specific dermatological follow-up may be offered to patients at high risk for melanoma.12 In countries with a low to medium incidence of melanoma, genetic counseling is offered to patients with two primary melanomas and/or to families with two melanoma cases and/or one pancreatic adenocarcinoma and one melanoma in first- or second-degree relatives (the "rule of two"). In countries with a moderate to high incidence of melanoma, however, genetic counseling is offered to patients with three primary melanomas and to families with three cases of melanoma or pancreatic cancer in first- or second-degree relatives (the "rule of three").13 It has been demonstrated that melanoma genetic counseling has a positive impact on the improvement of total body skin examination and self-examination of the skin in unaffected individuals carrying germline mutations after test reporting, whereas affected carriers maintain high levels of screening adherence.14 Furthermore, after melanoma genetic counseling, unaffected members of high-risk melanoma families report improvements in daily routine sun protection, showing that genetic counseling may motivate sustained improvements in prevention behaviors.15 Thus, it is very important for both melanoma patients and unaffected individuals from the family to be included in genetic counseling programs.

Few studies have assessed the prevalence of CDKN2A mutations or MC1R variants and phenotypic characteristics in patients at high risk for melanoma from Latin American countries. CDKN2A mutations have been identified in 13.6% of melanoma-prone families from São Paulo, Brazil,16 whereas one study reported no mutations in Porto Alegre,17 and in a different cohort the mutation frequency was 7%.18 In melanoma-prone families from Uruguay, 5/6 families had CDKN2A mutations.19 Phenotypic and genetic characterization of individuals at high risk for melanoma from Latin America may improve their management and help implement genetic counseling in these countries. We present the molecular characterization of the CDKN2A and MC1R genes in the largest set of patients at high risk for melanoma from distinct Latin American countries (Argentina, Brazil, Chile, Mexico, and Uruguay), and we compare the data with two sets of Spanish patients at high risk for melanoma to establish bases for genetic counseling in Latin America.

MATERIALS AND METHODS

The multicenter cross-sectional study included 1,090 patients at high risk for melanoma: 758 patients with familial melanoma (FM) and 332 patients with SMP from Latin American countries and Spain. Because Latin America is a region with a low incidence of melanoma (GLOBOCAN 2012, World Health Organization; http://globocan.iarc.fr), the inclusion criteria followed the rule of two. Overall, 186 Latin American melanoma patients were recruited from Argentina (n = 10), Chile (n = 28), Mexico (n = 6), Uruguay (n = 25), and Brazil (n = 117), which included two sets of patients: Porto Alegre (southern Brazil) (n = 58) and São Paulo (southeast region) (n = 59). The contributions of each country provided a broad representation of Latin America. A set of 904 Spanish patients with melanoma from Barcelona (n = 706) and Valencia (n = 198) was also included using the same selection criteria.
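Because the "rule of two" described above is the operative inclusion criterion of the study, a small sketch may help make it concrete; the function and its argument names are hypothetical simplifications of the rule, not part of the study's methods.

```python
def eligible_rule_of_two(personal_melanomas, family_melanomas,
                         family_pancreatic_cancers=0):
    """Sketch of the 'rule of two' (our simplification): eligible if the
    individual has >= 2 primary melanomas (SMP), or the family has >= 2
    melanoma cases, or one melanoma plus one pancreatic adenocarcinoma,
    among first- or second-degree relatives."""
    smp = personal_melanomas >= 2
    fm = (family_melanomas >= 2 or
          (family_melanomas >= 1 and family_pancreatic_cancers >= 1))
    return smp or fm

print(eligible_rule_of_two(1, 1, family_pancreatic_cancers=1))  # -> True
```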
The number of primary melanomas, age at diagnosis, number of melanoma cases in the family, ancestral origin, and phenotypic data (hair and eye color, skin phototype, and nevi count) were recorded by dermatologists for most of the patients. Although the number of missing values was higher in the set of Spanish patients than in the Latin American patients, this did not introduce a bias, and the information collected was informative for the whole cohort: Spanish patients were recruited consecutively, and missing data were distributed randomly; two different cohorts from Spain were used to minimize the bias due to the data collection procedure; and the variable with the greatest amount of missing data had information from at least 600 Spanish patients. Partial genetic information of the patients with melanoma from Spain and Brazil, and a subset of pedigrees from Uruguay, has been reported previously.16-21 The study was approved by the ethics committee of the Hospital Clinic of Barcelona. The patients gave their written, informed consent.

CDKN2A and MC1R molecular screening

Molecular characterization of CDKN2A was performed in all patients: CDKN2A was sequenced in all patients, as previously described.16,18,20,21 MC1R was sequenced as described elsewhere.22

Statistical analyses

For the statistical analyses, the most common MC1R variants were classified as r variants (not associated with red hair color: p.V60L, p.V92M, p.R163Q) or R variants (associated with red hair color: p.D84E, p.R142H, p.R151C, p.I155T, p.R160W, p.D294H).10 SPSS software version 17.0 (IBM, Chicago, IL) was used. Two-sided Pearson χ² or Fisher exact tests were used for categorical variables, as applicable. Student's t-test was used for quantitative variables. Adjusted P values were calculated using the Bonferroni correction. A test was considered significant if the P value or adjusted P value (as applicable) was <0.05.

RESULTS

The study included a set of 1,090 patients with melanoma from distinct Latin American countries and Spain. Latin America and Spain had similar frequencies of FM cases (67.7 and 69.9%, respectively) and SMP (32.3 and 30.1%, respectively; P = 0.600), and there were no gender differences (40.3% male and 59.7% female vs. 41.5% male and 58.5% female, respectively; P = 0.806). Since Latin America has a mixed population of European, Native American, African, and Asian origin as a result of the colonization process and migratory effects,24 we collected information regarding the patients' ancestral origin. All four grandparents of more than 70% of Latin American patients were of European origin.

Latin American and Spanish patients differed in pigmentation traits. Latin American patients had fairer hair color (adjusted P = 0.016) and skin phototype (adjusted P < 0.001) than Spanish patients. No differences were observed for nevi count or eye color (Table 1). Considering all patients, CDKN2A mutation prevalence was 19% in Latin America and 12% in Spain. CDKN2A mutation frequency in SMP was similar in Latin America (10%) and Spain (8.5%) (P = 0.623). However, the prevalence of CDKN2A mutations in Latin American melanoma-prone families was higher than in Spain (24 and 14%, respectively; P = 0.019). The frequency of mutations varied among countries. Whereas southern Brazil had a low mutation prevalence, Chile and Uruguay showed a high prevalence of mutations in both SMP and FM (Table 2).
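The r/R grouping defined in the Methods above lends itself to a short sketch; the variant sets below are copied verbatim from the text, while the function name and the "strongest category wins" rule are our own illustrative assumptions.

```python
# MC1R variant groups as listed in the Methods.
R_VARIANTS = {"p.D84E", "p.R142H", "p.R151C", "p.I155T", "p.R160W", "p.D294H"}
r_VARIANTS = {"p.V60L", "p.V92M", "p.R163Q"}

def classify_mc1r(carried_variants):
    """Classify a patient's MC1R status from a list of detected variants."""
    if any(v in R_VARIANTS for v in carried_variants):
        return "R"            # at least one red-hair-associated variant
    if any(v in r_VARIANTS for v in carried_variants):
        return "r"            # only non-red-hair-associated variants
    return "wild type"

print(classify_mc1r(["p.V60L", "p.R160W"]))  # -> "R"
```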
CDKN2A mutations have previously been associated with a lower age at diagnosis, a higher number of primary melanomas, and a higher number of cases in the family.6 The whole set of patients also showed these associations (Table 4). Latin American patients with melanoma carrying a CDKN2A mutation had an increased number of cases in the family and a lower age at diagnosis, but the number of personal primary melanomas did not reach significance.

We sequenced MC1R to assess the distribution of MC1R variants across countries (Table 5). We observed differences in the number and type of variants between Latin America and Spain. We detected MC1R variants in 80.5% of Latin American and 67.9% of Spanish patients (P = 0.003), with a similar R variant frequency (39.6 vs. 36.3%, respectively; P = 0.514) but a higher r variant prevalence in Latin America (40.9 vs. 31.6%, respectively; P = 0.033). We analyzed the frequencies of the most common R and r variants, comparing Latin America and Spain (Supplementary Table S1 online). When adjusting using the Bonferroni correction, we found a significantly increased presence of p.R160W (17.4 vs. 7.5%; adjusted P < 0.005) and p.R163Q (14.1 vs. 5.2%; adjusted P < 0.005) in Latin America, but we should take into consideration that all patients carrying the p.R163Q variant in this study were from only three study sites: Brazil (São Paulo), Chile, or Uruguay. The p.D294H variant was more frequent in Spain (5.4 vs. 13.3%; adjusted P = 0.045). The presence of MC1R variants and R variants correlated with phenotype (Supplementary Tables S2 and S3 online).

DISCUSSION

Latin America has a low incidence of melanoma (GLOBOCAN 2012). The characterization of melanoma genes has allowed other areas with a low to medium incidence of melanoma, such as Spain, to recommend genetic counseling for patients with melanoma.12,25 To date, only a few specialized centers in Latin America offer melanoma genetic counseling, and there is little knowledge of the implication of high-risk genes in melanoma susceptibility. This study presents the clinical and molecular characterization of CDKN2A and MC1R in the largest set of Latin American patients at high risk for melanoma.

CDKN2A mutation frequency in melanoma-prone families was higher in Latin America than in Spain, using the same selection criteria. By contrast, both areas had a similar CDKN2A mutation prevalence in SMP, consistent with that reported in other studies (8.2-9%).25,26 The age at diagnosis and number of primary melanomas were associated with the presence of mutations in CDKN2A, as previously reported.6 However, we did not find associations between CDKN2A mutation and nevi count, suggesting that other genes could play a role in nevogenesis.27,28

Most CDKN2A mutations identified had been previously detected in European or North American patients with melanoma.

[Tables 1 and 2 appear here. Table 1 reports characteristics and phenotypic data of patients with melanoma by country (region); its footnotes specify that the FM category included families with at least two melanoma cases, that skin color was classified according to the Fitzpatrick phototype classification (fair: phototypes I or II; dark: phototypes III-V), and that ancestral origin was assigned from the grandparents' birthplaces: "European" means all grandparents were born in a European country; "Latin," "Amerindian," and "African American" mean that at least one grandparent was born in Latin America (with Spanish or Portuguese ancestors), was a descendant of natives, or had an African-American origin, respectively; mixed origins were assigned by the darkest skin color; no individuals were of Asian origin. Table 2 reports CDKN2A mutation frequencies, with significant P values in bold; P values compare Latin America versus Spain overall and by number of melanoma cases.]

The most prevalent mutation in Latin America was c.-34G>T. This mutation occurs at a high
"Latin," "Amerindian," and "African American" indicate that at least one of the grandparents was born in Latin America (but had Spanish or Portuguese ancestors), was a descendent of natives, or had an African-American origin, respectively. In the case that the grandparents were of different origins, for example, African American and Amerindian, the ancestral origin was indicated taking into consideration the darkest skin color: African American/Amerindian/Latin. There were no individuals with Asian origin. <0.001 Statistically significant P values are given in bold. a P values to assess differences in mutation frequency among families with sporadic multiple primary melanoma (SPM) and among all melanoma-prone families were obtained by comparing the global result of Latin America versus Spain. b P values to assess differences in mutation frequency among families according to the number of melanoma cases were assessed separately in Latin America, in Spain, and in the entire set of patients (total). frequency among unrelated families from Chile, suggesting a possible founder effect. In one family from Chile we detected p.R144C (c.430C>T), previously detected at the germline level in a patient with pancreatic cancer. 29 Mutation p.G101W is also frequent in Latin America, as in Mediterranean countries (Italy, France, and Spain) 7 where haplotype analysis showed a founder effect. 30 We identified four other mutations in Brazil: p.P48T (c.142C>A), previously reported in an Italian population with FM, 31 was found in four families, one of them of Italian ancestry, suggesting a possible founder effect 32 ; IVS2-105A>G and p.M53I (c.159G>C), previously reported in melanoma-prone families from the United Kingdom, Australia, and the United States 7 ; and mutation p.V43L (c.127G>C), affecting p14ARF, which has not previously been reported. In Uruguay we detected p.E88X (c.262G>T) in two families, which also was detected in two Spanish pedigrees. In Mexico we identified a mutation in the two probands of one family-p.I49T (c.146T>C)-which was previously reported in a case of FM by Hussussian et al. 33 and did not segregate with melanoma in that case. However, functional analysis showed impairment for this variant. 34 We detected differences in MC1R variant distribution in our set of patients. Latin American patients with melanoma carry more MC1R variants. These genetic results correlate with the phenotypic data, where Latin American patients with melanoma have fairer skin and hair color. The prevalence of MC1R variants varies between populations. 35 In this study, specific variant frequencies differed between Latin American and Spanish patients with melanoma. Latin American patients with melanoma had an increased presence of p.R160W and p.R163Q. However, controls would be needed to assess the melanoma risk associated with carrying these variants in Latin America. p.R160W is associated with an increased risk for melanoma and red hair color. 10 By contrast, p.R163Q, which is not associated with pigmentation or tanning response, favors the development of chronic sun exposure melanomas in the Mediterranean population 22 and increases the risk for melanoma in areas with high ultraviolet radiation. 36 These reports suggest that a possible interaction between p.R163Q and a high ultraviolet radiation dose could favor melanoma development. 
Most Latin American countries receive far more ultraviolet radiation than northern latitudes; this could explain the increased frequency of SMP and FM with the p.R163Q variant in Latin America, although its frequency in a control Latin American population is unknown.

To date, genetic testing in patients at high risk for melanoma is restricted to CDKN2A and CDK4. More studies of patients wild type for these genes should be conducted to assess the role of other melanoma-susceptibility genes, such as MITF, BAP1, TERT, POT1, ACD, and TERF2IP,8 for their possible incorporation in melanoma genetic counseling.

In this study we demonstrated that the CDKN2A germline mutation frequency in melanoma-prone families with at least two melanoma cases is greater in Latin America than in Spain (23.9 vs. 14.1%, respectively). Inclusion criteria for genetic testing of melanoma in Spain follow the rule of two.12 Based on the results of this study, the inclusion criteria for genetic counseling for patients with melanoma in Latin America should also follow this rule because it allows the detection of CDKN2A mutations in a significant number of patients, except for southern Brazil, where the rule of three should be used.

Genetic testing allows us to identify mutation carriers in families with a high risk of developing the disease. Carriers can be included in specific follow-up programs that allow the detection of melanomas at early stages, which improves the disease prognosis.3,37,38 Digital follow-up with specific dermatologic techniques, including total-body photography and digital dermoscopy, allows early detection of melanomas with a low rate of excision.38 Early melanomas in patients carrying MC1R variants may be difficult to diagnose definitively using dermoscopy, and an integrated approach including clinical history and dermoscopic data should be used when evaluating them.39 Thus, MC1R sequencing could also help in choosing the best screening methods. The experience of genetic counseling in Spain over 10 years shows that melanomas can be diagnosed at any time, so the follow-up of individuals at high risk for melanoma should be maintained over time.12

In conclusion, Latin American patients with melanoma and at high risk for melanoma had fair skin and European origin. The mutations found had also been detected in Spanish, European, or North American populations, suggesting that they could have a single origin and that there could be a founder effect. Finally, inclusion criteria for genetic counseling in Latin American patients with melanoma should follow the rule of two: two primary melanomas in an individual, or families with at least one invasive melanoma and one or more other diagnoses of melanoma or pancreatic cancer in first- or second-degree relatives.

SUPPLEMENTARY MATERIAL

Supplementary material is linked to the online version of the paper at http://www.nature.com/gim
Acute Medication Use in Patients With Migraine Treated With Monoclonal Antibodies Acting on the CGRP Pathway: Results From a Multicenter Study and Proposal of a New Index

Introduction: Assessing the impact of migraine preventive treatments on acute medication consumption is important in clinical evaluation. The number of acute medication intakes per monthly migraine day (MMD) could provide insights on migraine burden and represent a new proxy of treatment effectiveness in clinical trials and real-life studies. We evaluated the effect of monoclonal antibodies acting on the calcitonin gene-related peptide (CGRP) pathway on the consumption of migraine acute medication in real life. Methods: In two headache centers in Prague (CZ), we included consecutive patients treated with MoAbs acting on CGRP (erenumab or fremanezumab) and followed them for up to 6 months. For each month of treatment, we reported monthly drug intake (MDI) in doses of any medication, migraine-specific (MS), and non-migraine-specific (non-MS) medications, and computed a ratio between MMDs and MDI, i.e., the Migraine Medication Index (MMI), for MS and non-MS medications. Results: We included 90 patients (91.1% women) with a median age of 47 [interquartile range (IQR) 42–51] years; 81 (90.0%) treated with erenumab and 9 (10.0%) with fremanezumab. Median MMDs decreased from 11 (IQR 8–14) at baseline to 4 (IQR 2–5) at Month 3 (p < 0.001 vs. baseline) and 3 (IQR 2–6) at Month 6 (p < 0.001 vs. baseline). Median MDI decreased from 15 drug intakes (IQR 11–20) at baseline to four drug intakes (IQR 2–7) at Month 3 (p < 0.001) and four drug intakes (IQR 2–7) at Month 6 (p < 0.001). The corresponding MDIs for MS medications were 10 (IQR 6–14) at baseline, 3 (IQR 1–5, p < 0.001) at Month 3, and 2 (IQR 0–4, p < 0.001) at Month 6. Monthly drug intakes for non-MS medications were 4 (IQR 0–9) at baseline and 1 (IQR 0–3, p < 0.001) at Month 3 and at Month 6. Median MMI decreased from 1.32 (IQR 1.11–1.68) at baseline to 1.00 (IQR 1.00–1.50, p < 0.001) at Month 3 and 1.00 (IQR 1.00–1.34, p < 0.001) at Month 6. Conclusions: We confirmed that MoAbs acting on the CGRP pathway decrease acute migraine medication consumption. We proposed a new index that can be easily applied in clinical practice to quantify migraine burden and its response to acute medication. Our index could help optimize migraine acute treatment in clinical practice.
INTRODUCTION

Migraine is the third most prevalent disorder in the world and the leading cause of disability worldwide in both males and females under the age of 50 (1). Acute treatments for migraine include migraine-specific (MS) medications, namely triptans and ergots, and non-migraine-specific (non-MS) medications such as non-steroidal anti-inflammatory drugs (NSAIDs), opioids, and combined and simple analgesics. The consumption of both MS and non-MS medications is often inappropriate and associated with limited control of migraine episodes (2), together with poor tolerability and adverse events (3, 4). Acute medication overconsumption is one of the most relevant problems in migraineurs, as it may favor migraine chronification and lead to the development of medication overuse headache (MOH) (5).

Consumption of acute medication could be an important indicator of the efficacy of migraine preventive treatments. Preventive treatments aim at decreasing the frequency, intensity, and duration of attacks and, consequently, the consumption of acute treatments (6). Monoclonal antibodies acting on the calcitonin gene-related peptide (CGRP) pathway (CGRP-MoAbs) represent the first preventive agents specifically designed for migraine (7, 8). The effectiveness of those agents in decreasing the consumption of both MS and non-MS medications has been demonstrated in a subgroup analysis of randomized controlled trials (9). However, detailed real-world data remain scarce to date (10).

When assessing acute medication consumption in patients with migraine, there is a difference between the number of drug intakes and the number of days on which the drug is taken. Patients with migraine may take multiple doses of medication in one day to treat severe and long-lasting attacks. On the other hand, patients with mild and short-lasting headaches might not take any acute medication on some headache days. Thus, the discrepancy between the number of doses taken and the number of days on which they are taken could be an indirect but simple indication of migraine duration, severity, and response to acute medication. In the present study, we aimed to report the effectiveness of CGRP-MoAbs on the consumption of MS and non-MS medications in a real-world setting and to present an index of acute medication consumption, the Migraine Medication Index (MMI).

Study Design and Patients

We performed a retrospective observational, multicenter study in two Czech hospitals, the Motol University Hospital and the Military University Hospital, both in Prague. As a retrospective clinical audit of anonymized clinical practice data, the study was exempt from ethical approval and patients did not have to sign an informed consent.
Inclusion criteria were the following: age ≥18 years; diagnosis of chronic migraine (CM) or episodic migraine (EM), with or without aura and with or without medication overuse (MO), assessed according to the criteria of the International Classification of Headache Disorders (ICHD-III) (11); >4 monthly migraine days (MMDs); and ≥2 previous preventive treatments failed or not tolerated, based on the criteria established by the European Headache Federation (12) and the American Headache Society (13) and on Czech regulations for the prescription of monoclonal antibodies acting on the CGRP pathway. With an approach that differs from the public reimbursement schemes established in other countries, treatment with CGRP-MoAbs was reimbursed by insurance companies according to the described criteria.

The study population included all patients treated with at least one dose of MoAbs and followed up for at least 6 months; drug discontinuation before 6 months was recorded, as well as its reasons (patient decision due to perceived ineffectiveness, adverse events, loss to follow-up). Patients received subcutaneous administrations of erenumab 140 mg monthly or fremanezumab 225 mg monthly; galcanezumab was not available in the study centers during the inclusion period. All treatments followed common clinical practice; acute treatment withdrawal was not performed in patients with MO. Treatment prescriptions continued despite the outbreak of the SARS-CoV-2 pandemic, as the study centers did not close during that period. Following the clinical practice of the study centers, all patients already kept a migraine diary, in which they reported the number of migraine days, drug intakes, and type of symptomatic drugs taken to treat migraine; patients continued to record these data throughout the treatment period.

Data Collection

At the baseline visit, for each included patient, we recorded sex, age, comorbidities, history of SARS-CoV-2 infection, family history of migraine, age at migraine onset, migraine duration (years), previous preventive treatments failed or not tolerated (number and type), MMDs in the past 3 months, monthly number of drug intakes and type of symptomatic drugs in the past 3 months, and migraine impact on daily activities, assessed by asking patients to complete the Czech version of the Headache Impact Test (HIT-6) (14). During each monthly follow-up visit, we collected: MMDs in the last month, monthly number of drug intakes and type of symptomatic drugs taken to treat migraine in the last month, adverse events, and SARS-CoV-2 infection. Moreover, at the 3rd and 6th months, we asked patients to complete the HIT-6.

Study Outcomes

Primary endpoints included the decrease in monthly drug intake (MDI), monthly MS medication intake, and monthly non-MS medication intake at each month from baseline. Baseline was defined as the mean of the last 3 months before starting erenumab or fremanezumab treatment. We also computed the MMI by dividing MDI by MMDs; hence, MMI values <1.00 indicate that some migraines are so mild that they do not require acute treatment, while higher values indicate a greater need for acute medication on an average MMD. MMI change at 6 months was computed in patients with <50 and ≥50% decrease in MMDs at 6 months and in those with <50 and ≥50% decrease in MDI at 6 months. Factors potentially influencing MMI change (gender, age, years of migraine history, aura, CM status) were also explored.
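The MMI defined above is a simple ratio, and a minimal sketch may make the computation concrete; the function name and the handling of migraine-free months are our own assumptions for illustration.

```python
def migraine_medication_index(mdi, mmd):
    """MMI = MDI / MMD, as defined above. Values < 1.00 suggest that some
    migraine days go untreated; higher values suggest a greater need for
    acute medication on an average migraine day."""
    if mmd == 0:
        return None  # undefined in migraine-free months (our assumption)
    return mdi / mmd

# Worked example: the baseline medians reported in the Results (MDI 15,
# MMD 11) give an MMI of about 1.36, in line with the reported median 1.32.
print(round(migraine_medication_index(15, 11), 2))  # -> 1.36
```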
Secondary endpoints included the decrease in MMDs from baseline at each month, the proportion of patients who achieved a ≥50% reduction in MMDs from baseline at 3 and 6 months, and the decrease in mean HIT-6 score from baseline at 3 and 6 months of MoAb treatment.

Statistical Analysis

We descriptively synthesized patients' sociodemographic characteristics, comorbidities, migraine diagnosis and history, failed preventive drugs, MoAb treatment withdrawal, and adverse events by using absolute numbers and proportions or medians and interquartile ranges (IQRs), as appropriate. Baseline included the 90-day period preceding treatment with MoAbs; monthly follow-ups were performed every 28 days for patients treated with erenumab and every 30 days for those treated with fremanezumab. To ensure comparability, baseline and follow-up variables, including MMDs, MDI, and MS and non-MS medication intakes, were all normalized to 30-day periods. All outcomes were calculated over the total of patients with complete follow-up, irrespective of treatment discontinuation. Patients discontinuing the treatment were considered among those with a <50% reduction in MMDs from baseline. Due to the real-world design of the present study, we could not perform a sample size calculation; all outcomes were exploratory and based on a convenience sample, in the same fashion as previous real-world studies (15-17). We used the Wilcoxon signed-rank test or Spearman's correlation to compare outcome variables.

RESULTS

Baseline characteristics of the included patients are summarized in Table 1. Two patients (2.2%) discontinued the treatment due to perceived treatment ineffectiveness. Discontinuation occurred at the 6th month for both patients; no other patient discontinued treatment earlier. At month 3 of treatment, one patient was prescribed an add-on therapy (cinnarizine). Only two patients (2.2%) experienced an adverse event, namely local redness or pain at the injection site. We also examined the distribution of MMI categories over time (Figure 3). Although the 0.00-0.99 group increased and the ≥2.00 group decreased numerically over time, the distribution of categories did not change significantly (P = 0.144).

DISCUSSION

One of the main goals of migraine prevention is to limit the use of acute medication (13). Hence, monitoring the use and efficacy of acute medication is an important goal in clinical practice. The decrease in acute medication use was a secondary outcome in most trials of MoAbs acting on the CGRP pathway (18-20) and was the specific subject of a subgroup analysis (9). In the present study, we specifically aimed to collect real-life data on the use of MS and non-MS medications and their decrease after treatment with MoAbs acting on the CGRP pathway. However, we also aimed to provide an accurate depiction of the change in acute medication use that goes beyond simple dose counting. Effective migraine prevention can indeed decrease the duration and intensity of migraine, thus leading to a further decrease in the doses of acute medication required. For that reason, we deemed it useful to consider the number of acute medication intakes together with the number of migraine days in a new index, the MMI. We found that the index decreased independently of the decrease in MMDs, suggesting that acute medication intake decreased in patients treated with erenumab or fremanezumab not only because of a decrease in MMDs, but also because of additional factors.
Those factors might include a decrease in the duration and/or intensity of headache, or an increased effectiveness of acute medication; however, this is only a hypothesis, as those variables were not collected in the present study.

Previous studies have already considered the importance of outcomes different from the number of MMDs or the MDI. A study of patients treated with galcanezumab, a MoAb directed against CGRP, proposed the introduction of total pain burden, which combines migraine frequency, severity, and duration (21). Compared with the total pain burden, the MMI can be easily evaluated as the ratio between two parameters that are commonly assessed in headache diaries, namely MMDs and MDI. Categorizing the MMI can also give an idea of the need for acute medication on each single migraine day. Therefore, this index might provide information not only about the frequency, but also about the efficacy of acute medication. In our opinion, this is an important part of the clinical evaluation needed to adjust patients' medication. Notably, MDI and MMI decreased in different fashions in our study (Figures 1, 2). The proposed category distribution of the MMI showed that most patients used one to two acute drug intakes per MMD, without substantial changes throughout treatment with erenumab or fremanezumab (Figure 3).

We tested the applicability and value of the newly proposed MMI in a real-life study of patients with migraine treated in two centers. The present study mostly included patients with EM, at variance with other available real-life studies on the effectiveness of MoAbs acting on the CGRP pathway, which included only (15, 17, 22, 23) or almost only (16) patients with CM. This difference can be explained by the fact that the present study was performed more recently than the similar ones, at a time when the experience with MoAbs encouraged the extension of MoAb use to patients with high-frequency EM. As regards patients' failed preventive treatments, we followed Czech regulations, which were in line with the definitions of the European Headache Federation and the American Headache Society (i.e., ≥2 previous preventive treatments failed or not tolerated) and with similar previous trials (13, 18, 19). These criteria differ from those adopted in some countries, such as Italy, where MoAbs acting on the CGRP pathway can be prescribed and reimbursed only for patients reporting ≥3 previous preventive treatments failed or not tolerated. However, also in our sample most of the patients (73.3%) had ≥3 previous migraine prophylaxis failures, highlighting the high proportion of EM patients with a long treatment history who had not yet found an effective treatment before MoAbs acting on the CGRP pathway.

The strength of our study is that it was conceived to collect complete data about migraine acute medication, including the number of doses of both MS and non-MS medication. To our knowledge, this is the first study to provide a complete account of the intake of different medication classes over a 6-month period. However, our data are limited by the collection of MMDs only; collecting all headache days would have provided more complete data about the patients' pain. Besides, our study included patients treated with more than one drug, i.e., erenumab and fremanezumab; patients treated with fremanezumab represented a small proportion of the sample. The two drugs are taken at different time intervals (28 days for erenumab and 30 days for fremanezumab).
To account for this difference, we normalized each monthly period to 30 days; nevertheless, the differences between the two drugs might have introduced heterogeneity.

CONCLUSION

We confirmed that CGRP-MoAbs are helpful in reducing the consumption of acute migraine medication. We proposed a new index that can provide an accurate estimate of medication consumption in patients with migraine and possibly new insights on migraine burden and patterns of acute medication use. The MMI could be useful to optimize acute migraine medication in clinical practice.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Abatacept in individuals at high risk of rheumatoid arthritis (APIPPRA): a randomised, double-blind, multicentre, parallel, placebo-controlled, phase 2b clinical trial

Introduction

Over the past decade, there has been progress in understanding the genetic, environmental, and immunological risk factors associated with rheumatoid arthritis, and that individuals at high risk of disease can be identified by detecting serum autoantibodies to citrullinated protein antigens (ACPA) and symptoms, such as inflammatory joint pain.1,2 Although the presence of autoantibodies might precede disease onset by a decade or more, the combination of ACPA with symptoms, and evidence of subclinical synovitis by imaging, has increased the predictive power of identifying individuals who are most likely to progress to rheumatoid arthritis within 2 years.3,4 These features have provided a framework for evaluating therapeutic strategies that could delay or prevent disease onset.5

Abatacept is a biological disease-modifying antirheumatic drug recommended for the treatment of rheumatoid arthritis that selectively modulates co-stimulatory signals required for T-cell activation.6 By binding to CD80 or CD86, abatacept downmodulates CD28-mediated co-stimulation of T cells, suppressing persistent T-cell activity involved in the pathogenesis of immune-mediated inflammatory diseases.7 Abatacept has shown efficacy in the treatment of active rheumatoid arthritis when used as monotherapy or in combination with conventional disease-modifying antirheumatic drugs in patients with an inadequate response to other conventional or biological disease-modifying antirheumatic drugs.6,8 Other studies suggest that abatacept has efficacy in patients with early rheumatoid arthritis (ie, symptom duration <18 months),9,10 with increased frequency of responses in patients with high ACPA concentrations.11 These data suggest that co-stimulatory signals play an important role in perpetuating the early phase of the disease. Given the mechanism of action of abatacept and the likely role of T cells in the earliest detectable phase of disease, we aimed to investigate abatacept in individuals at high risk of developing rheumatoid arthritis.

Study design

The APIPPRA study was a randomised, double-blind, multicentre, parallel, placebo-controlled, phase 2b clinical trial undertaken in 28 hospital-based early arthritis clinics in the UK and three in the Netherlands. The trial protocol was approved by the national regulatory authorities in the UK (National Research Ethics Service Committee London, Westminster; 14/LO/0100) and the Netherlands (Leiden University Medical Centre Medical Ethics Committee). The study was conducted according to the International Council for Harmonisation guidelines, applicable regulations and guidelines governing the conduct of clinical trials, and the principles of the Declaration of Helsinki. Trial oversight was provided by independent trial steering and data monitoring committees. The study protocol has been reported previously.12
Panel: Research in context

Evidence before this study

Although genetic and environmental risk factors associated with rheumatoid arthritis have been documented over many decades, clinical phenotypes of individuals at high risk have emerged from inception cohorts reported in the early 2000s. These studies described the risk of progression to rheumatoid arthritis associated with inflammatory joint pain, or arthralgia, in association with disease-associated serum autoantibodies. Since then, at-risk phenotypes have been reported by many groups, with consistent rates of progression over 2 years in excess of 40%, or higher depending on whether additional modalities, such as imaging, were included as part of risk stratification. We searched PubMed, Embase, Cochrane Library, and clinical trial registries with the search terms "rheumatoid arthritis", "prevention", "arthralgia", "anti-CCP", and "randomized controlled trial" for studies published in English up until Jan 1, 2023.

Added value of this study

To our knowledge, the APIPPRA study is the first randomised controlled trial to test the effects of co-stimulation modulation on the progression of a high-risk state to rheumatoid arthritis. By contrast with previous trials, the APIPPRA study suggests that a subset of individuals at high risk exists who benefit from abatacept beyond the treatment period. In-depth analysis of patient-reported outcomes reveals that the symptom burden characteristic of ACPA-positive individuals with arthralgia is driven, at least in part, by systemic adaptive immune responses targeted by abatacept. These symptom complexes include not only pain, function, and wellbeing, but also sleep problems, anxiety, and work instability. Co-stimulation modulation also reverses subclinical inflammation defined by ultrasonography. The outcome of a 12-month fixed dosing period suggests that longer periods of treatment may be required to prevent progression to rheumatoid arthritis.

Implications of all the available evidence

Our results show that rheumatoid arthritis prevention trials are feasible and that targeting adaptive immunity at an early stage, before clinically apparent arthritis is manifest, can prevent the onset of rheumatoid arthritis. The data provide a framework for future prevention trials and a realistic proposition for disease prevention in routine clinical care.

Participants

Adults aged 18 years or older, at risk of developing rheumatoid arthritis, were recruited on the basis of clinical and laboratory characteristics. Key inclusion criteria were the presence of inflammatory joint pain (see study protocol for definition) and testing positive for ACPA and rheumatoid factor, regardless of the assay used in local laboratories. Individuals who were negative for rheumatoid factor but had ACPA concentrations three or more times the upper limit of normal were also eligible. Individuals with a previous diagnosis of inflammatory arthritis, or previous use of disease-modifying antirheumatic drugs or corticosteroids, were excluded from the study. Simple analgesics or non-steroidal anti-inflammatory drugs were permitted. Individuals with clinically apparent inflammatory arthritis, characterised by soft tissue swelling of one or more synovial joints before random assignment, were also excluded. Subclinical synovitis, detected by ultrasonography or MRI, was not an exclusion criterion. A complete list of eligibility criteria can be found in the published protocol.12
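The serological part of the inclusion criteria just described can be sketched as a simple predicate; this is our own simplification for illustration (assays and cut-offs were those of local laboratories, and the function and argument names are hypothetical).

```python
def serologically_eligible(acpa, acpa_uln, rheumatoid_factor_positive):
    """Sketch of the serological inclusion rule described above:
    ACPA-positive and rheumatoid factor-positive, or rheumatoid
    factor-negative with ACPA >= 3x the upper limit of normal (ULN)."""
    acpa_positive = acpa > acpa_uln
    if rheumatoid_factor_positive:
        return acpa_positive
    return acpa >= 3 * acpa_uln

print(serologically_eligible(acpa=75.0, acpa_uln=20.0,
                             rheumatoid_factor_positive=False))  # -> True
```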
Study participants provided written informed consent before screening assessments and randomisation.

Randomisation and masking

The King's Clinical Trials Unit randomly assigned participants (1:1) using computer-generated permuted block randomisation (block sizes of 2 and 4) stratified by sex (male and female), smoking (never, former, and current), and country (UK and Netherlands). This unit also oversaw the trial and data management. Masking was achieved through the provision of four kits of abatacept or matching placebo to participants every 3 months. Each kit was identical in appearance and packaging, containing four prefilled syringes with coded labels. Participants, investigators, subinvestigators, clinical assessors, sonographers, and hospital trial pharmacists who distributed the study drug were masked to group assignment.

Procedures

Participants were randomly assigned to abatacept 125 mg subcutaneous injections weekly, as recommended for rheumatoid arthritis treatment,6,8 or placebo for 12 months.12 Participants were trained to self-administer the study drug subcutaneously using a single-dose prefilled syringe according to local practices. Treatment compliance was evaluated at study visits every 3 months and by completing study medication diaries. After 12 months, the study drug was discontinued, and participants were followed up for a further 12 months. When rheumatoid arthritis was diagnosed, the study drug was withdrawn, and treatment was initiated at the discretion of the supervising investigator. Otherwise, disease-modifying antirheumatic drugs and corticosteroids were not permitted at any time during the study.

Following baseline clinical and imaging assessments, participants attended follow-up every 3 months for evaluation of symptoms and signs of inflammatory arthritis, completion of questionnaires, and patient-reported outcomes, regardless of whether they met the primary outcome. Blood was taken for disease activity assessments, routine toxicity monitoring, and biomarker studies. Radiographs of hands and feet were completed at baseline, 12 months, and 24 months, and subclinical synovitis was assessed via ultrasonography of 24 predefined joints every 6 months until the end of the study, or until the primary endpoint was met (appendix pp 4-6). Clinical assessors were masked to the ultrasound assessments and the ultrasonographers were masked to the clinical assessments.

Outcomes

The primary endpoint was the time to development of clinical synovitis in three or more joints or rheumatoid arthritis according to the American College of Rheumatology/European League Against Rheumatism (ACR/EULAR) 2010 criteria,13 whichever was met first, and where joint involvement was defined as joint swelling determined by two independent assessors. In either case, synovitis in nominated joints was confirmed by ultrasonography. Time was censored at 24 months or earlier withdrawal. A primary endpoint roadmap is provided in the appendix (p 3).
Secondary endpoints included Disease Activity Scores (DAS28), incorporating tender and swollen joint counts, patient global Visual Analogue Score (VAS), and C-reactive protein or erythrocyte sedimentation rate; extended 68 or 66 joint counts; simple and clinical disease activity scores; pain VAS; a lifestyle factors questionnaire; the Health Assessment Questionnaire (HAQ); EQ-5D; the Hospital Anxiety and Depression Scale (HADS); the Rheumatoid Arthritis Work Instability Scale (RA-WIS); the Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F) questionnaire; the Illness Perception Questionnaire modified for Rheumatoid Arthritis (IPQ-R); and the Symptoms in Persons At Risk of Rheumatoid Arthritis (SPARRA) questionnaire. Additional secondary outcomes were the proportion of participants requiring disease-modifying antirheumatic drugs or corticosteroid therapy and the time to commencing therapy. Imaging assessments included x-rays of hands and feet at baseline, 12 months, and 24 months using van der Heijde modified Sharp scores for erosions and joint space narrowing,14 scored by readers in random time order, and evaluation of synovial hypertrophy (grayscale) and vascularity (power Doppler) defined by ultrasonography incorporating EULAR-OMERACT combined severity grading (appendix pp 5-6).15 Clinical assessments of safety were recorded at all visits. The severity of adverse events and their relation to the study drug were reviewed regularly by an independent data monitoring committee.

Statistical analysis

The power calculations were based on 40% of participants in the placebo group developing arthritis over 24 months, informed by at-risk cohorts.16 172 participants were needed to provide 80% power to detect a 50% relative reduction in developing arthritis in the abatacept group compared with the placebo group (hazard ratio [HR] 0·437), based on a two-sided log-rank test at the 5% significance level, without loss to follow-up of any of the required 52 events. By applying a conservative inflation of 20% to allow for dropout, we aimed to recruit 103 participants per group.

The primary analysis followed the intention-to-treat principle (ie, included all randomly assigned participants using available follow-up data). For the primary outcome, a per-protocol analysis was done of eligible participants who complied with at least 90% of injections and did not use forbidden rescue medication. Sensitivity analyses were done for missing data17 and potential informative dropout. Kaplan-Meier survival curves for 12-month or 24-month outcomes were censored at the corresponding 2-week per-protocol window, or at dropout if earlier than this. These survival curves were used to estimate the proportion in each group remaining arthritis-free, reported as percentage (SE). A stratified Cox proportional hazards regression model, accounting for randomisation stratifiers, was planned, but with few events in most randomisation strata, the unadjusted model alone was used. Proportional hazards assumptions were assessed graphically (log-log plot) and tested by a log-of-time interaction; failure led to reporting restricted mean survival times. Linear mixed effects models were planned for continuous outcomes and differences in proportions for binary outcomes (appendix p 2). Since the distribution of swollen joint counts was markedly skewed, the Kaplan-Meier method was used to estimate the proportions of patients developing one, two, or three swollen joints.
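As context for the restricted mean survival time (RMST) used above when the proportional hazards assumption failed, the standard definition can be written as follows; this is a textbook formulation added for exposition, not taken from the trial report.

```latex
% RMST up to a horizon t* (12 or 24 months in this trial): the mean
% arthritis-free time over [0, t*], i.e. the area under the
% Kaplan-Meier survival curve S(t).
\[
  \mathrm{RMST}(t^{*}) = \mathbb{E}\bigl[\min(T,\,t^{*})\bigr]
                       = \int_{0}^{t^{*}} S(t)\,\mathrm{d}t
\]
% The between-group treatment effect is the difference in areas:
\[
  \Delta(t^{*}) = \int_{0}^{t^{*}}
      \bigl( S_{\text{abatacept}}(t) - S_{\text{placebo}}(t) \bigr)\,\mathrm{d}t
\]
```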
All tests were two-tailed and primarily assessed at the 5% significance level. The trial was powered to allow a secondary 1% significance level to be applied to allow safer interpretation of multiple secondary outcomes. Descriptive statistics were reported for measures of acceptability, feasibility, and safety, and percentage measures reported with 95% CIs. There were no interim analyses or stopping rules. Analyses were performed in SPSS (version 28) and the statistical analysis plan was reviewed and signed off by the trial steering and data monitoring committees before data lock.

Role of the funding source

The funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.

Results

Between […], participants were randomly assigned to abatacept (n=110) or placebo (n=103) (table 1). Randomisation stratifiers were evenly balanced between the groups. Adherence to study medication was similar between groups, with 81 (74%) of 110 participants in the abatacept group and 77 (75%) of 103 participants in the placebo group administering 90% or more of injections in 12 months or until the primary endpoint was met. This translates to non-adherence rates of 26% and 25% for the abatacept and placebo groups, respectively.

Although the unadjusted Cox regression model for the study period provided an HR of 0·61 (95% CI 0·37-0·99) in favour of abatacept compared with placebo, the proportional hazards assumption for the study period was not met (p=0·0025). For the first 12 months, the assumption held, with the model providing an HR of 0·20 (95% CI 0·09-0·45). Therefore, the restricted mean survival time was used as a prespecified summary statistic (appendix p 2).18 At 24 months, the restricted mean survival time was 658 days (SE 16) for abatacept and 558 days (27) for placebo, with a difference in restricted mean arthritis-free survival of 99 days (95% CI 38-161; p=0·0016). For the 12-month treatment period, the restricted mean survival time was 368 days (SE 5) for abatacept and 316 days (12) for placebo, with a difference in restricted mean arthritis-free survival of 53 days (95% CI 28-78; p<0·0001). Further analysis at 6 months and 18 months indicated that the effect persisted throughout the study (appendix p 8). The per-protocol analysis of treatment-compliant individuals showed similar results (the difference in restricted mean survival time between groups at 24 months was 114 days [95% CI 43-185; p=0·0018]). Additional sensitivity analysis of the primary outcome criteria is described in the appendix (pp 8-9).
At 12 months, the proportions of participants in the placebo group with one or more, two or more, or three or more swollen joints were appreciably higher than those in the abatacept group. At 24 months, the difference between groups was less substantial (table 2; appendix p 11). Reductions in pain VAS and DAS28 were also greater with abatacept than placebo at the end of the treatment period, but were not sustained to 24 months. Changes in tender joint count, clinical disease activity score, and simple disease activity score did not differ significantly between groups. There were consistent patterns of response in favour of abatacept at 12 months for function and pain (as determined by HAQ); emotional wellbeing and quality of life (EQ-5D); anxiety (but not the depression component of the HADS; table 2); FACIT-F total scores; physical, emotional, and functional wellbeing (appendix p 12); and two of the IPQ-R domains (appendix p 13). These effects were not sustained by 24 months. Typical changes over time for HAQ and EQ-5D are shown in figure 3, in which significant differences in favour of abatacept were observed at 12 months. These differences were not sustained by 24 months. Symptom complexes assessed using the SPARRA questionnaire revealed that after 12 months of treatment, a smaller proportion of participants in the abatacept group had symptoms for at least 1 day in the past month, with improvements of 20% or more for joint pain, perception of joint swelling, and sleep problems (table 2). There was a reduction in work instability favouring abatacept at 12 months, but not at 24 months.

At baseline, van der Heijde modified Sharp radiographic scores of 0 were recorded in 176 (84%) of 209 participants for erosions and in 191 (91%) of 209 participants for joint space narrowing (appendix p 16). From the distribution of erosion scores, 197 (94%) of 209 participants had erosion scores of 1 or less (appendix p 17). A between-groups comparison of the proportion of participants whose scores worsened by at least 1 point at 12 months and 24 months found that the numbers were too small for a meaningful analysis (appendix pp 15-16). Sonographic analysis revealed that at baseline, 70 (33%) of 212 participants had no detectable synovial hypertrophy (grayscale) and 154 (73%) of 212 participants had no detectable vascularity (power Doppler; table 3). At the end of treatment, fewer participants in the abatacept group than in the placebo group had grayscale or power Doppler scores that worsened by 1 point from baseline. This effect was not sustained 12 months after stopping treatment. Because ultrasonography was discontinued once the primary event was met, a composite outcome was generated, defined as the occurrence of a primary event or worsening of the grayscale score by 1 point from baseline on ultrasonography. A significantly lower proportion of participants in the abatacept group than the placebo group met this composite outcome at 12 months and 24 months (table 3). Similar results were observed for power Doppler at 12 months but not 24 months. Using EULAR-OMERACT severity grading,15 significantly fewer participants in the abatacept group had worsening of severity scores or a primary event, an effect that was sustained to 24 months. Scores for tenosynovitis of hands and wrists were low at baseline, with maximum grayscale scores of 2 or more detected in only four (4%) of 91 participants in the placebo group and nine (9%) of 100 in the abatacept group; for power Doppler, three (3%) of 87 participants and six
(6%) of 96 participants had scores of 2 or higher. These scores remained low at follow-up in both groups. Taken together, serial sonographic assessments showed that abatacept reduces the progression of subclinical disease and that the effects were partly sustained beyond the treatment period.

Of those participants receiving at least one dose of study drug, 100 (92%) of 109 participants in the abatacept group and 91 (89%) of 102 participants in the placebo group had at least one adverse event, and 57 (52%) of 109 and 62 (61%) of 102 had at least one infection (table 4; appendix p 18). The frequency was similar between groups, apart from gastrointestinal, haematological, neurological, and other adverse events, which were higher in the abatacept group. All six serious adverse events related to study drug were reported in the abatacept group, and included genitourinary infection (n=1), ear infection (n=1), nausea (n=1), dry mouth (n=1), fatigue (n=1), and headache (n=1). Laboratory adverse events were infrequent and similar between groups. Eighteen serious adverse events were reported (table 4). In the abatacept group, these were death (n=1), cardiovascular events (n=1), admission for joint replacement surgery (n=2), and malignancy (n=3), and in the placebo group, these were death (n=1), infections (n=3), cardiovascular events (n=2), venous thromboembolism (n=1), malignancy (n=1), and admission for joint replacement surgery (n=3). All serious adverse events were deemed unrelated or unlikely to be related to study medication. There were six pregnancies in the abatacept group and one in the placebo group. Of the participants receiving at least one dose of study drug, four withdrew due to adverse events (three ...

Discussion

The results of this phase 2b study indicate that treatment of adults at high risk of developing rheumatoid arthritis with abatacept reduces progression to clinically apparent arthritis during the treatment phase. Even after stopping treatment, the number of events in the abatacept group remained lower than in the placebo group, suggesting sustained efficacy. However, by 24 months, the symptom burden, including quality-of-life assessments and pain, as well as ultrasonographic measures of subclinical inflammation, was similar between groups, indicating that the effects of 12 months of abatacept treatment are not sustained. These findings could be explained by a number of factors. Mechanistically, the data confirm that T-cell co-stimulation plays a role in the progression from the at-risk state to rheumatoid arthritis, operating systemically and in synovial joints. The trial also provides evidence that harmful adaptive immune reactions contribute to the symptom burden associated with the at-risk state, as they do in established disease.8,9
Furthermore, the outcome of treatment withdrawal in the intention-to-treat population suggests that pathogenic immune responses re-emerge and are not modified permanently by a fixed period of co-stimulation modulation. Another factor that might explain why the effect of abatacept on the symptom burden is not sustained relates to the study design and analysis. In APIPPRA, all eligible participants were encouraged to remain in the study throughout the 24 months, regardless of outcomes. Accordingly, changes in secondary outcomes over time reflect the effects of the study intervention as well as of disease-modifying antirheumatic drugs and corticosteroids initiated in those who met the primary endpoint. The APIPPRA prespecified analysis is distinct from that reported for the TREAT EARLIER19 and ARIAA20 trials, summarised below, in which participants were censored from the time of detection of clinical arthritis. By excluding data from those who met the primary outcome, changes in patient-reported outcomes, such as HAQ, were sustained beyond the treatment period in these studies.

Table 3: Comparison between groups of grayscale synovial hypertrophy, power Doppler, and EULAR-OMERACT combined ultrasound scores for grading the severity of synovitis. At each assessment, grayscale and PD scores were recorded for a predefined core set of 24 joints (appendix pp 4-6). Combined scores were generated using the EULAR-OMERACT combined scoring system, in which the higher of the two parameters determines the severity grading of synovitis (0, normal; 1, minimal; 2, moderate; 3, severe). PD=power Doppler.

Published literature reporting trials of interception in individuals at risk of rheumatoid arthritis is limited to a few studies. Bos and colleagues showed that in 83 participants positive for ACPA or IgM rheumatoid factor with arthralgia, dexamethasone 100 mg injections at baseline and at 6 weeks reduced autoantibody concentrations by 50%. However, by 24 months, arthritis-free survival curves had fully converged.21 Among 82 individuals at risk of rheumatoid arthritis in the phase 2b PRAIRI study, progression to rheumatoid arthritis was delayed by about 12 months in those undergoing B-cell depletion with a single 1000 mg intravenous infusion of rituximab. However, the overall risk of developing rheumatoid arthritis was no different from placebo by 48 months of follow-up.22 In the TREAT EARLIER trial of participants with arthralgia and MRI-detected subclinical joint inflammation, either positive or negative for ACPA, 1 year of methotrexate (with a single intramuscular injection of methylprednisolone) did not prevent onset of rheumatoid arthritis by 24 months.19 In individuals positive for ACPA with higher rates of progression, methotrexate delayed onset of rheumatoid arthritis. However, there were consistent and sustained improvements in patient-reported outcomes, regardless of ACPA status. The StopRA study, a double-blind, placebo-controlled study of hydroxychloroquine versus placebo enrolling individuals positive for anti-cyclic citrullinated peptide 3 with or without symptoms, is yet to report in full.23 An interim analysis of 142 eligible participants indicated that the Kaplan-Meier estimated probabilities of developing rheumatoid arthritis were 34% in the hydroxychloroquine group and 36% in the placebo group; arthritis-free survival curves over 36 months were superimposed.23 The ARIAA study aimed to examine whether 6 months of abatacept treatment could reverse subclinical inflammation as measured by MRI in 98 participants positive for ACPA with arthralgia and a positive MRI scan in the dominant hand at baseline.20
Compared with APIPPRA, dosing was for 6 months versus 12 months and, although symptoms and serology were similar, the study mandated synovial joint inflammation by MRI. The ARIAA results showed improvement in at least one of three parameters (synovitis, tenosynovitis, or osteitis) in 28 (57%) of 49 participants in the abatacept group compared with 15 (31%) of 49 participants in the placebo group at 6 months, effects that were sustained 1 year after stopping treatment. Furthermore, the proportions of individuals progressing to rheumatoid arthritis at 6 months were four (8%) of 49 in the abatacept group and 17 (35%) of 49 in the placebo group and, although differences converged after stopping treatment, they remained significant at the end of the study (17 [35%] of 49 vs 28 [57%] of 49).

Another feature distinguishing the APIPPRA study from other interception trials is the risk profile of the cohort itself. For example, in TREAT EARLIER and ARIAA, MRI-positive inflammation was mandated for inclusion. This criterion was not a requirement for inclusion in APIPPRA, in which most participants had no or low levels of subclinical inflammation by ultrasonography. This lower-risk state was also reflected in the treatment and placebo groups in lower scores for tender joint count (median 1 vs 2) and pain (VAS 24 vs 24), which were similar to the PRAIRI study (median tender joint count two vs none),22 but lower than baseline symptoms reported in the TREAT EARLIER (median 68-joint tender joint count of four vs three; pain VAS 50 vs 50)19 and ARIAA trials (median tender joint count two vs three; pain VAS 43 vs 46).20 These values appear small and the differences modest, but the trends are in keeping with the progression rates in the respective placebo groups, being 67% for participants positive for ACPA with arthralgia in TREAT EARLIER and 57% in ARIAA, compared with 40% in the PRAIRI study and 37% in APIPPRA.19,20,22 Taken together with baseline ultrasonography, these data suggest that the APIPPRA study population resides in an earlier phase of the risk trajectory, representing a lower-risk population in terms of progression over 2 years.

The strengths of the APIPPRA trial are the inclusion of participants positive for ACPA, the fixed-period dosing, the real-life clinical setting with opportunistic recruitment from early arthritis clinics, the adoption of a robust primary endpoint confirmed by sonography in which all primary events fulfilled the ACR/EULAR 2010 criteria, the low pre-primary-event withdrawal rate, and the evaluation of the effects of study drug on subclinical synovitis. Sonography allowed us to establish that many study participants had no or low levels of detectable subclinical synovitis, indicating that the APIPPRA study cohort represents a population with minimal joint involvement. Finally, the results show the positive effect of co-stimulation modulation on the symptom burden of people at risk of rheumatoid arthritis, with consistent reductions in symptoms and improvements in patient-reported outcomes across multiple domains.
The limitations of the study include the short follow-up period, leaving the question of delay versus prevention partly unanswered. Long-term follow-up of the trial population, which is ongoing, might address this limitation. Arthritis-free survival curves show that a substantial proportion of individuals in each group do not progress to rheumatoid arthritis, and some of these participants might have been unnecessarily exposed to the study drug. Such exposure raises the importance of risk assessment and highlights the need for improved stratification tools to identify individuals at highest risk of rheumatoid arthritis. Good examples of such tools include the EULAR criteria for clinically suspect arthralgia that progresses to inflammatory arthritis,24 signatures associated with pathogenic adaptive immune responses (ie, autoantibody V-domain glycosylation),25 and features extrapolated from preclinical models.26 Studying the triggers of rheumatoid arthritis is of paramount importance and will probably uncover pathways directly linked to the risk state. Regarding biomarkers to inform therapeutic options for interception, abatacept should be considered in individuals with high titres of anti-cyclic citrullinated peptide antibodies or carrying HLA-DRB1 shared epitope allomorphs.11,27 Another limitation is the assessment choice for capturing clinically meaningful changes in response to the study drug. Although many secondary outcomes improved with the study drug, baseline scores in this at-risk population were low, reflecting a cohort of individuals with a moderately good quality of life, but who had a better quality of life with abatacept than placebo. Furthermore, baseline changes might not have achieved minimal clinically important differences, as defined for established rheumatoid arthritis, which might account for the discrepancy between groups for primary endpoints and patient-reported outcomes at 24 months. Our data suggest that radiography, unlike ultrasonography or MRI, might be of limited value in identifying a group at high risk, or in evaluating the effect of disease interception in individuals at risk of rheumatoid arthritis. Although 16% of participants had radiographically detected bone erosions at baseline, most had scores of 1 or less; the presence of erosions at baseline would not appear to be associated with progression to rheumatoid arthritis. Therefore, evaluating minimal disease activity states is challenging,28 since DAS28, the simple disease activity score, and the clinical disease activity score (in which swollen joint counts are included) have not been validated for, and are unlikely to be appropriate for, assessing at-risk states over time. Thus, the APIPPRA study might not have captured all the features associated with progression, especially those symptom complexes considered important to at-risk individuals. We suggest that the development of new or revised outcome measures informed by patient experts, including symptoms such as those identified with the SPARRA questionnaire,29 should be a priority.
By applying a stringent cutoff of 90% or more of injections to define adherence to study medication, 29 participants in the abatacept group and 26 in the placebo group were non-adherent, representing 26% of the intention-to-treat population. Given that the proportion of participants who were non-compliant was similar in both groups (26% for abatacept and 25% for placebo), it seems unlikely that non-adherence was related to the study drug. The TREAT EARLIER study offers a trial setting with which to compare rates of non-adherence, because methotrexate was also taken on a weekly basis for 12 months, but in tablet form rather than by injection.19 By the end of the 12-month treatment period, 27% of participants in the methotrexate group and 19% in the placebo group had discontinued all study tablets.19 The proportion of non-censored participants discontinuing methotrexate was even higher in those taking methotrexate 20 mg or more weekly than in those taking any dose, suggesting that drug intolerance might have been a contributing factor. Regardless, non-adherence in TREAT EARLIER appears similar to the levels recorded in the APIPPRA study. The prevalence of adherence to biological therapies in established rheumatoid arthritis was reported to be 64% in the first 6 months of treatment when adopting a less stringent cutoff of more than 80%.30 Acknowledging that drug adherence might be somewhat better in clinical trials than in routine clinical practice, non-adherence rates in interception trials reported to date appear similar. These findings might be of value when designing future interception trials.

To conclude, we show the feasibility and acceptability of rheumatoid arthritis interception trials and report data suggesting that co-stimulation modulation during the at-risk phase is well tolerated and substantially reduces the signs and symptoms associated with the at-risk state during the treatment period. The data indicate that abatacept treatment beyond 12 months might be required to sustain efficacy over time. Intermittent administration remains to be assessed. This study highlights the need for criteria that distinguish the at-risk phase from early rheumatoid arthritis to support trial design, while targeting treatment at the most appropriate time. Ultimately, when considering reducing the risk of rheumatoid arthritis with biological therapy dosed over a fixed period, the incremental gains of not requiring a disease-modifying antirheumatic drug over time need to be balanced against the upfront treatment costs and the challenges of predicting individuals at high risk.

Figure 2: Arthritis-free survival by group

Table 1: Baseline characteristics of the intention-to-treat population
RiNeo MR: A mixed reality simulator for newborn life support training

Neonatal resuscitation is an uncommon, albeit critical, task that is more likely to succeed if performed properly and promptly. In this context, simulation is an appropriate way of training and assessing the abilities of all medical staff involved in delivery room care. Recent studies have shown that learning is enhanced if the simulation experience is realistic and engaging. Hence, Virtual Reality can be beneficial for newborn resuscitation training. However, the difficulty of providing realistic haptic interaction limits its use. To overcome this constraint, we have designed RiNeo MR, a simulator for newborn life support training that combines a sensorized manikin, which monitors resuscitation skills in real time, with a Virtual Reality application. The system includes a Virtual Reality headset, a Leap Motion to track the user's hands, a sensorized bag valve mask, and a manikin to monitor head and mask positioning, ventilation, and chest compression. RiNeo MR can be used in two modalities: 2D, which lets the trainee practice resuscitation manoeuvres on the physical manikin while receiving real-time feedback; and 3D, which immerses the user in a virtual environment to practice in a hospital-like setting. In the 3D mode, the virtual and real manikins are overlapped and communicate in real time. Tests on 16 subjects (11 controls without medical expertise and 5 paediatric residents) demonstrated that the simulator is well tolerated in terms of discomfort. Moreover, the simulator is highly rated for user experience and system usability, suggesting that RiNeo MR can be a promising tool to improve newborn life support training. RiNeo MR is a proof of concept of a mixed-reality newborn life support simulator that can help spread high-quality newborn resuscitation training among healthcare providers involved in perinatal medicine.

Introduction

The transition from intra-uterine to extra-uterine life is a crucial moment for all newborns, as this is the instant in which spontaneous breathing starts [1]. Although the cardio-respiratory extrauterine transition usually occurs spontaneously, or at most requires simple stimulation and care manoeuvres, it is estimated that 5-10% of newborns need assistance to establish autonomous breathing [2]; approximately 3-6% of term and late preterm babies receive positive-pressure mask ventilation (PPV), while less than 1% receive chest compressions, advanced airway management, and intravenous drug administration [2]. Since the need for assistance is uncommon and cannot always be predicted [3], it is unlikely that healthcare providers are regularly exposed to neonatal resuscitation. Moreover, pregnant women at risk of preterm delivery, or with pathologic conditions diagnosed prenatally, are usually centralised to hub neonatological centres, further reducing providers' overall exposure to complicated deliveries [4]. However, resuscitation is more likely to succeed if it is performed properly and at the right time [2,5]. Indeed, all staff involved in perinatal medicine (paediatricians, gynaecologists, anaesthesiologists, midwives, nurses, and paramedics) should be prepared to provide these lifesaving interventions quickly and efficiently. Currently, only a few specialists in the delivery room master Newborn Life Support (NLS), typically well-trained neonatologists and anaesthesiologists [6] (see S1 Appendix for further information on the NLS algorithm).
Among the possible ways to increase the number of healthcare providers able to deliver NLS is simulation, which allows trainees to practice in a riskless and controlled environment [7]. In fact, many studies have shown that effective NLS training reduces the delivery room death rate by 30% [2]. NLS training is typically achieved using manikins [8]; however, there is increasing evidence that learning is enhanced if the simulation experience is realistic and engaging [9]. Indeed, the use of technologies that increase immersivity, such as virtual reality (VR) and augmented reality (AR), is growing [10,11]. VR provides an immersive experience that promotes practitioner engagement and supports the acquisition of optimal levels of practical expertise in a safe and controlled environment [7].

In the specific context of emergency medicine training and adult first aid, some VR applications have been developed in recent years [12,13]. These tools can increase the realism of the simulation, but they cannot train and monitor dexterity skills [11]. Indeed, one of the main limitations of VR for medical training, which may affect the learning output and limit its use, is the difficulty of providing haptic interaction with the real environment [11]. A possible alternative is Mixed Reality (MR), since it combines the virtual environment with objects in the real world [14], thus enhancing the VR experience. With MR, user interactions (such as grabbing objects or performing actions with them) occur with real objects, allowing the user to perceive the shapes and sizes of the objects they see in VR [14], as virtual and real objects are overlapped and aligned. Therefore, mixed reality enables the provision of passive haptics, where this term refers to the ability to oppose a tangible passive object, co-located with the virtual object, to the actions of the user in order to enhance the overall immersive experience [15][16][17].
The use of Extended Reality (XR, an acronym that includes VR, AR, and MR [18]) for medical learning is a powerful tool in the clinical training environment, and several adult learning theories support its use. They include the Constructivist Learning Theory, namely learning by interacting with the environment; the Situated Learning Theory, promoting the idea that learning should be embedded within cooperative activities; and the Embodied Cognition Theory, based on the concept that cognition is the result of the relationship between mind and body [19,20]. Real situations have always been preferred for adult learning, as learners find it difficult to transfer theoretical knowledge to the related situation. Also, they prefer engaging learning experiences, where they focus on how to improve their performance [21]. MR provides a controlled environment, allowing learners to navigate it and manipulate objects, and to repeat the same task multiple times [19]. Also, participants usually feel more relaxed and spontaneous than with traditional manikin-based simulation [20]. Another important point concerns the sense of presence and engagement of XR simulations, which could help achieve learning outcomes, especially in the case of repetitive activities [22,23]. Furthermore, MR allows manual tasks to be integrated within a realistic context where the knowledge would later be applied, and variables to be manipulated, thus providing personalized training [19]. Finally, with MR, users can combine multisensory cues (i.e., visual, auditory, and haptic stimuli) to build new knowledge [21]. The involvement of the proprioceptive system during learning in the synthetic environment would deepen learning and recall [20,22,23].

Recently, some research groups have implemented MR prototypes for adult life support training [12,[24][25][26][27]; they combine a non-sensorized manikin, either half body or full body, with a Head Mounted Display (HMD) and tracking devices (controllers, trackers, data gloves). Tools for paediatric resuscitation training are still limited to a few serious games or AR applications [28][29][30]. Furthermore, to the best of our knowledge, no MR-based NLS tools are available so far, despite their unquestionable value [11,30].

For this reason, we have designed RiNeo MR, a mixed-reality system combining a newborn manikin, sensors to monitor resuscitation skills, and a VR application. The goal of the project was to design and implement a realistic and engaging educational tool for the training and evaluation of neonatal life support skills. The simulator combines a VR application with a neonatal manikin and a face mask, both sensorized to monitor head and face mask position, ventilation, and chest compression. After the design and implementation phase, we tested the simulator on a cohort of paediatric residents and of people without medical expertise, to evaluate its usability.

Materials and methods

RiNeo MR combines a sensorized newborn manikin and face mask (hardware) with a VR application consisting of 3D scenarios of delivery and operating rooms and a virtual representation of the manikin (software). The two parts communicate in real time via a communication system. The simulator allows four different tasks to be monitored: head positioning, mask positioning, positive pressure ventilation, and chest compression (see S1 Video for a demo video). The individuals pictured in S1 Video have provided written informed consent (as outlined in the PLOS consent form) to publish their image alongside the manuscript.
Hardware

The hardware components of RiNeo MR include: a newborn manikin (M-1005745, 3B Scientific, Germany), having a single joint in the neck that allows trainees to properly position the head during ventilation; two microcontrollers, an Arduino Uno REV3 and an Arduino Nano 33 IoT, the latter integrating a 6-axis inertial measurement unit (IMU; LSM6DS3) that monitors the orientation of the manikin's head; a force sensing resistor (FSR400; Interlink Electronics, USA) used to monitor positive pressure ventilation; an infrared obstacle detection sensor (IR, GP2Y0A41SK0F; SHARP, Japan) to detect chest compression; two Hall effect sensors (SS443A; Honeywell, USA), which monitor the mask position; a Leap Motion (Ultraleap, USA) device to track the user's hands in real time; and an HTC Vive (HTC, Taiwan) system to immerse the trainee in the VR environment (Fig 1) (see S2 Appendix for further information on the sensors).

In detail, the IMU on the Arduino Nano 33 IoT board, located inside the manikin's head (Fig 1), monitors the rotation of the head in the sagittal (antero-posterior) plane performed by the operator. Arduino provides the Arduino_LSM6DS3 library, which can be installed directly from the IDE, to read the accelerometer and gyroscope data. To merge the accelerometer and gyroscope data, we used a library that contains the official implementation of the MadgwickAHRS algorithm, which allows a more accurate estimation of the orientation of an object based on raw accelerometer and gyroscope readings. Specifically, it returns the Euler angles (pitch, roll, yaw) that describe the orientation of a rigid body in space. Given that: (i) the manikin is always positioned on a stable horizontal plane, although not physically fastened to it; (ii) NLS requires placing the baby's head and neck in a neutral position, avoiding neck hyperextension and flexion; and (iii) the joint in the neck allows movements only along the sagittal axis, the IMU monitors only antero-posterior rotations. The infrared sensor, a system consisting of an IR transmitter and an IR receiver used to detect and calculate the distance to an object, is placed in the newborn's back, at sternum level (Fig 1), with the transmitter and the detector facing inside the manikin. Chest compression can be measured with several sensors (e.g., ultrasound, potentiometer, optical); however, our setup has strict space constraints, which require a small sensor that can be located on the manikin's back and detect chest compression without being detectable by the user.
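To illustrate the kind of accelerometer-gyroscope fusion this pipeline performs, the sketch below uses a simple complementary filter rather than the MadgwickAHRS implementation actually run on the Nano 33 IoT; it assumes accelerometer readings in g and gyroscope rates in deg/s, and the neutral-position tolerance band is a hypothetical value, not a threshold taken from RiNeo MR.

```python
import math

class PitchEstimator:
    """Complementary filter fusing accelerometer (g) and gyroscope (deg/s)
    readings to track head pitch, i.e., the sagittal rotation monitored by
    the IMU; a simpler stand-in for the MadgwickAHRS fusion used on board."""

    def __init__(self, alpha=0.98):
        self.alpha = alpha    # weight given to the integrated gyro estimate
        self.pitch = 0.0      # degrees; 0 = neutral head position

    def update(self, ax, ay, az, gyro_pitch_rate, dt):
        # pitch inferred from the gravity vector seen by the accelerometer
        accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
        # short-term estimate: integrate the gyro rate around the sagittal axis
        gyro_pitch = self.pitch + gyro_pitch_rate * dt
        # blend: gyro captures fast motion, accelerometer cancels drift
        self.pitch = self.alpha * gyro_pitch + (1.0 - self.alpha) * accel_pitch
        return self.pitch

# hypothetical tolerance band for the "neutral position" check, in degrees
NEUTRAL_BAND = (-10.0, 10.0)

def head_position_ok(pitch_deg):
    """Flag hyperextension or flexion outside the tolerated neutral band."""
    return NEUTRAL_BAND[0] <= pitch_deg <= NEUTRAL_BAND[1]

est = PitchEstimator()
pitch = est.update(ax=0.05, ay=0.0, az=0.99, gyro_pitch_rate=1.5, dt=0.01)
print(f"pitch = {pitch:.2f} deg, neutral = {head_position_ok(pitch)}")
```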
Moreover, two Hall effect sensors that detect the mask position are located under the dimples of the manikin's lips and oriented in opposite directions (i.e., designed to respond to the north pole on one side and the south pole on the opposite side) (Fig 1). One sensor is activated by the presence of a positive magnetic field (south pole), the other by the influence of a negative magnetic field (north pole). In both cases, the output is deactivated if this field disappears or falls below the activation threshold. To activate the sensors, the mask has been equipped with two permanent magnets arranged with opposite polarity. They are oriented and located so that both sensors are activated simultaneously when the mask is placed over the infant's mouth in the correct position. Both sensors return a binary value, either a zero or a one, depending on whether the mask is correctly placed. In both cases, a "1" indicates correct coupling between the individual Hall sensor and the corresponding magnet, while a "0" means either no magnet or wrong pole exposure.

The sensors handled by the Arduino Nano 33 IoT (i.e., the onboard IMU and the Hall effect and IR sensors) are connected serially to the board, which is powered by a power bank. This board receives data from the sensors and sends it in real time via Wi-Fi to a computer running the application developed in Unity3D (Fig 2). Finally, the Arduino Uno REV3 acquires data from the force sensing resistor, connected via serial, which monitors how the user is performing positive pressure ventilation. The sensor has been positioned at the ventilation inlet at the top of the T-piece cap, below a plastic membrane, so as not to be perceived by the user. In this position, it can detect the occlusion of the gas leakage opening and, consequently, monitor the user's performance of the ventilation tasks. For our project, the goal was to capture binary ON/OFF information, specifically whether the sensor was pressed or not, without the need to measure the applied force. Data from this board are sent via serial to the Unity3D application (Fig 2). Finally, all data received by the computer from both Arduinos can be saved and stored for future evaluation, as well as to assess the trainees' performance.

Other than sensors, the system includes an HTC Vive headset system, used both as a tracking system and to navigate the virtual environment. One controller is fixed to the neonatal island to monitor its position and, consequently, the newborn's position, as the manikin is constrained in the neonatal island (Fig 3). This way, movements of the physical neonatal resuscitation table are monitored in real time and reported in the virtual environment as movements of the virtual table. This is particularly important, as the match between the real and the virtual manikin is crucial for the success of the simulation in the 3D mode. Indeed, in a previous study [26], the usability of the HTC Vive controller and tracker for a mixed reality medical simulator was assessed. Another tracker is attached to the face mask (Fig 3) to track its position. The system also includes a Leap Motion device mounted on the HTC Vive headset that tracks the user's hands in real time and enables interactions within the environment. As data from the HTC Vive system, Leap Motion, and sensors are managed by the VR application, no specific calibration phases between the systems are required.
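As an illustration of how these binary and distance readings could be turned into feedback, the sketch below maps the two Hall outputs onto the green/yellow/red colour code used by the UI (see the Fig 3 caption) and derives a compression depth from the IR distance; the packet layout and the resting chest distance are hypothetical placeholders, not values from the actual firmware.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    hall_a: int            # 1 = south-pole sensor correctly coupled to its magnet
    hall_b: int            # 1 = north-pole sensor correctly coupled to its magnet
    fsr_pressed: int       # 1 = ventilation sensor occluded (inhalation phase)
    ir_distance_mm: float  # sternum-to-sensor distance measured from the back

def mask_feedback(frame):
    """Colour code as in the UI: green = correct, yellow = partial, red = wrong."""
    return {2: "green", 1: "yellow", 0: "red"}[frame.hall_a + frame.hall_b]

def compression_depth(frame, rest_distance_mm=40.0):
    """Depth = reduction of the IR distance relative to the resting chest wall."""
    return max(0.0, rest_distance_mm - frame.ir_distance_mm)

frame = SensorFrame(hall_a=1, hall_b=1, fsr_pressed=1, ir_distance_mm=28.0)
print(mask_feedback(frame), f"{compression_depth(frame):.0f} mm")  # -> green 12 mm
```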
Software

RiNeo MR has been designed to be used in two different modalities: 2D and 3D (Fig 3). The former does not involve the HTC Vive headset, as performance is shown in real time through the application's two-dimensional interface on a computer screen (Fig 3). In the 3D mode, the user wears the VR headset and is immersed in a VR scenario that receives data in real time from the sensors to create the desired actions. The software has been developed in the Unity3D game engine via SteamVR. To achieve real-time communication between the real and virtual worlds, namely between the sensors located in the manikin and face mask and the 2D/3D Unity application, we implemented a web server communication system and serial communication (Fig 2).

The software architecture of RiNeo MR consists of 9 scenes organized as shown in Fig 4. The transition from one scene to another is managed by the user, who interacts with different buttons. Scene selection can be done either by mouse click in the 2D scenes, or by using the user's hand, tracked by the Leap Motion device, in the 3D environments. Importantly, the mouse is only used in the 2D environment, to make scene selections (for instance, choosing between a tutorial or training scenario, or beginning/pausing/ending the scenario). The trainee then interacts physically with the manikin (e.g., positioning its head, performing chest compression, etc.). In the 3D environment, the same selections are performed using the hands, as the Leap Motion tracks their movements. Furthermore, the Leap Motion allows the hands to be visualized in real time, thus guiding the user in VR without controllers that would limit hand movements during NLS. Importantly, no interactions are measured by collecting Leap Motion data; the trainee's performance is measured using the sensors in the manikin.
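As a rough picture of the real-time link between the boards and the application, the following minimal listener stands in for the receiving side; the port number and JSON keys are invented placeholders, since the actual system streams sensor data over Wi-Fi and serial into Unity3D as described above.

```python
import json
import socket

PORT = 5005  # hypothetical port; payload keys below are also placeholders

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))

log = open("session_log.jsonl", "a")     # frames stored for later evaluation
while True:
    payload, addr = sock.recvfrom(1024)
    frame = json.loads(payload)          # e.g. {"pitch": 3.2, "hall": [1, 1], "ir": 31.5}
    log.write(json.dumps(frame) + "\n")
    print(addr[0], frame)                # in RiNeo MR this would drive the UI/VR scene
```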
2D mode. The goal of our study was to create a realistic simulation without the need for a lifelike setting. However, as the system includes both a sensorized manikin and a VR application, we decided to let trainees and instructors use the manikin in the "traditional" way, ...

3D mode. In the 3D mode, RiNeo MR combines the functionalities of the 2D mode with an immersive VR application available through the HMD (Fig 3). The trainee can watch the demonstration video (Tutorial) or proceed with the "Training". With this configuration, the user is immersed in a virtual world where a medical emergency can occur and is able to practice NLS on a virtual manikin. As described in the Hardware subsection, sensors and trackers positioned in the manikin, neonatal island, and face mask monitor in real time the position of the manikin, as well as the correct execution of resuscitation manoeuvres. This way, the real and virtual manikins are overlapped and behave accordingly, guaranteeing a correspondence between what the user sees and what he or she touches, and thus a coherent environment (Fig 3). At the current stage, the simulator offers NLS practice in a hospital resuscitation room located between a surgical theatre, as a delivery can occur spontaneously or result from a caesarean section, and a delivery room (Fig 5B; [31]). Also, prior to the simulation, the user can familiarize themselves with the virtual environment by being immersed in a hospital hallway (Fig 5A). All the rooms are furnished with realistic appliances (Fig 5) and have realistic dimensions, to immerse the user in an accurate setting [31]. Finally, starting from a 3D model of a newborn manikin, we adjusted its size to that of the real one using Blender (Blender Foundation, the Netherlands). This way, when the user interacts with the virtual manikin, for instance by placing the mask, the face of the virtual newborn has the same size and shape as the real one; otherwise, the manoeuvre cannot be completed.

Like the 2D mode, the 3D mode contains a menu allowing the user to watch a video tutorial or to proceed with simulation training in the immersive virtual environment (Fig 5). As the user wears the HMD and is fully immersed in the virtual scene, in the 3D mode real-time feedback is provided by a screen located in the resuscitation room (Fig 3). Also, we added further feedback by including animations associated with the manikin, such as chest expansion when the mask is properly positioned and ventilation is provided, and head movements according to the angles detected by the IMU in the manikin's head.
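The real-virtual overlap described above amounts to applying a fixed, calibrated offset to the pose of the tracked controller fastened to the neonatal island. The sketch below shows this idea in isolation; the offset vector and the matrix-based pose representation are illustrative assumptions, not the project's actual calibration.

```python
import numpy as np

# hypothetical calibration: the manikin sits at a fixed offset from the
# tracked controller that is fastened to the neonatal island
OFFSET = np.array([0.00, 0.12, 0.35])  # metres, in the island's local frame

def manikin_world_position(controller_pos, controller_rot):
    """controller_rot: 3x3 rotation matrix reported by the tracking system."""
    return controller_pos + controller_rot @ OFFSET

pos = manikin_world_position(np.array([1.0, 0.8, 2.0]), np.eye(3))
print(pos)  # virtual manikin placed so that it overlaps the physical one
```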
Tests

Subjects. After the design and implementation of RiNeo MR, we enrolled 16 subjects during June-July 2022: 11 people without medical expertise (age mean ± STD: 25.4 ± 2.1 years, 7 women) and 5 paediatric residents (age mean ± STD: 30.0 ± 0.7 years, 5 women) to test the simulator and collect feedback on its usability.

Experimental design. The experiment consisted of two phases, 2D simulation and 3D simulation, followed by post-experiment questionnaires. The overall duration was about 60 minutes, divided as follows: 15-20 minutes to fill out the questionnaires, 15-20 minutes for the 2D mode, and 25-30 minutes for the 3D mode, including familiarization with the virtual environment. Participants started the experiment with either the 3D or the 2D simulation, randomly selected. During the session, subjects observed the elements of the simulation and interacted with them autonomously. At the end of each phase, users filled out questionnaires to evaluate the usability of the system and the sense of presence, and to report whether any discomfort occurred (see below for more details, Fig 6). After the experiment, we interviewed the participants to collect feedback and opinions about the strengths and weaknesses of the system.

During the 2D simulation, subjects were asked to try neonatal resuscitation manoeuvres on the physical manikin, including head positioning, mask positioning, ventilation, and chest compression, and to look at the screen reporting real-time feedback on their performance (Fig 3). On completing this simulation, they filled out two surveys: the User Experience Questionnaire (UEQ; [32]) and the System Usability Scale (SUS; [33]; see below).

To perform the 3D simulation, the user wore the VR headset and was immersed in the virtual hospital hallway (Fig 5A). As this scene was designed to let subjects familiarize themselves with the virtual environment, they could move around, move their hands to see their virtual replica, and interact with buttons in the scene (Fig 5). As soon as participants felt at ease in the virtual environment, they pressed the "Simulation" button to start the session, which took place in the resuscitation room. Inside this second scenario, the user could perform several actions: (i) move the baby warmer; (ii) adjust the manikin's head position; (iii) position the face mask and perform ventilation; and (iv) compress the chest. Importantly, as the virtual and real manikin and baby warmer are overlapped, the user could perceive the real objects while immersed in the virtual scene. At the end of the 3D VR experience, we provided users with four questionnaires (see below): the UEQ, the SUS, the Simulator Sickness Questionnaire (SSQ; [34]), and the Igroup Presence Questionnaire (IPQ; [35]).

Questionnaires. As mentioned above, we selected different questionnaires to evaluate RiNeo MR in terms of: (i) usability, operationally defined as the user's subjective experience when interacting with a system [36]; (ii) user experience, defined as the person's overall experience with the system, including design, graphics, interface, and physical and manual interactions [37]; (iii) sense of presence, in terms of "being there" and perceiving the virtual environment as real [35]; and (iv) simulator sickness, or cybersickness, a subset of motion sickness that can be experienced during VR [38].
The UEQ [32] covers a comprehensive impression of user experience, namely a collection of benchmarks that includes traditional usability standards such as effectiveness, controllability, and learnability, as well as non-goal-directed, or hedonic, criteria such as stimulation, fun-of-use, novelty, emotions, and aesthetics [39]. The questionnaire is composed of 26 items grouped into 6 scales: attractiveness, efficiency, perspicuity, dependability, originality, and stimulation. The scales are not independent; a user's general impression is captured by the attractiveness scale, which, in turn, is influenced by the values on the other 5 scales [40]. Perspicuity, efficiency, and dependability are pragmatic (goal-directed) quality aspects, while stimulation and novelty are hedonic (not goal-directed) quality aspects. Pragmatic quality describes task-related quality aspects; hedonic quality concerns features that are not task-oriented, such as the user interface's originality or aesthetic appeal [32]. Each UEQ item consists of a pair of terms with opposite meanings, and each item can be rated on a 7-point Likert scale [41]. The answer to an item therefore ranges from -3 (fully agree with the negative term) to +3 (fully agree with the positive term). Half of the items start with the positive term, the rest with the negative term (in randomized order) [40].

To measure the users' level of sickness symptoms caused by virtual reality simulators, the SSQ [34] is frequently used. The questionnaire asks participants to score 16 symptoms on a four-point scale (0-3). Symptoms can be generally divided into three categories: oculomotor, disorientation, and nausea [43]. We looked at each score to assess whether a specific symptom occurred during the simulation.

To determine the users' sense of presence inside a virtual environment (VE) we used the IPQ [35], a 14-item, 7-point Likert scale questionnaire [44]. The IPQ has three subscales plus one general item not belonging to any subscale: (i) Spatial Presence, the sense of being in the VE; (ii) Involvement/Attention, measuring the attention devoted to the VE and the involvement experienced; and (iii) Experienced Realism, the subjective experience of realism in the VE [45].

Data analysis

Mean values and standard deviations were computed for each item, score, or subscore (see Data). Residents' and controls' data were analysed separately to ensure that usability, user experience, sense of presence, and simulator sickness were not affected by ability in performing resuscitation manoeuvres. Group differences were assessed using the nonparametric Mann-Whitney test.

Fig 8A shows the SUS scores. As described above, scores greater than 68 indicate excellent or good usability ratings. In general, all but one control subject reported an overall score equal to or greater than 70, without differences between controls and paediatric residents.
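For context, SUS responses are conventionally converted to a 0-100 score (odd items contribute the response minus 1, even items 5 minus the response, and the sum is multiplied by 2.5), and group differences here were assessed with the Mann-Whitney test. The sketch below illustrates both steps on invented response sets; it is not the study's analysis pipeline.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5; odd items contribute (r - 1),
    even items (5 - r); the sum is scaled by 2.5 onto a 0-100 range."""
    r = np.asarray(responses)
    return float(((r[0::2] - 1).sum() + (5 - r[1::2]).sum()) * 2.5)

# invented response sets: 11 controls and 5 paediatric residents
rng = np.random.default_rng(1)
controls = [sus_score(rng.integers(1, 6, size=10)) for _ in range(11)]
residents = [sus_score(rng.integers(2, 6, size=10)) for _ in range(5)]

stat, p = mannwhitneyu(controls, residents)  # nonparametric group comparison
print(f"U = {stat}, p = {p:.3f}  (scores above 68 are conventionally 'good')")
```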
3D simulation

Firstly, we assessed whether the 3D version of RiNeo MR caused any simulator sickness related to the use of VR. Analysis of the SSQ data did not reveal any discomfort; in fact, subjects reported levels of discomfort lower than or equal to 1 out of 3 for all symptoms listed. As for the 2D simulation, the results of the UEQ questionnaire report a good user experience level on all scales (Fig 7B; attractiveness, mean ± STD C: 2.2 ± 0.6, PR: 2.8 ± 0.4; perspicuity, C: 2.3 ± 0.6, PR: 2.6 ± 0.3; efficiency, C: 1.8 ± 0.7, PR: 2.2 ± 0.6; dependability, C: 2.0 ± 0.4, PR: 1.9 ± 0.4; stimulation, C: 2.3 ± 0.7, PR: 2.6 ± 0.5; novelty, C: 1.5 ± 1.1, PR: 1.8 ± 0.5). In addition, the SUS results indicate good to excellent usability scores for both groups, with only one subject rating the usability of RiNeo MR as poor (Fig 8B). Interestingly, in both the 2D and 3D simulations, we obtained only one neutral-to-negative score, each from a control subject. One might think that the negative scores belong to the same subject; however, as visible in Fig 8, this was not the case. In fact, analysing single-subject data, it emerged that the negative score in the 3D simulation was due to the person feeling that he/she would need technical help to use the system. Conversely, in the 2D simulation, the control subject giving a low score would not use the system frequently.

Finally, the IPQ did not show differences between the two groups (Table 1). The involvement and attention score, as well as experienced realism, report mean values lower than 5 out of 7. Altogether, the results revealed good levels of user experience and system usability. In particular, the 3D UEQ showed an interesting finding: a trend toward significance (p = 0.067, Fig 7B) when we compared controls' and paediatric residents' attractiveness scores, with residents rating attractiveness higher than controls. This difference could be explained by the fact that paediatric residents were able to understand the potential of RiNeo MR for improving the NLS training of healthcare providers. Conversely, controls were immersed in a virtual world without a clear vision of the educational potential and final goal of the tool. This is further supported by the fact that, during the post-experiment interviews, some paediatric residents expressed interest in the use of the 3D version of the RiNeo MR simulator and said they would be open to viewing and evaluating a future release. In addition, looking at Fig 7, paediatric residents also reported slightly higher scores for novelty. These results suggest that paediatric residents, and more generally medical students, have a good understanding of the difficulties related to medical learning and the benefits of using simulation techniques and technologies [46]. In fact, the practice provided by simulation training builds confidence and hence satisfaction [47].
Finally, an interesting finding emerged from the analysis of the sense of presence, in terms of perceiving the virtual environment as real. As mentioned above, involvement, spatial presence, and experienced realism report mean values lower than 5 out of 7, with no differences between the two groups. This can be explained by the fact that many subjects claimed they felt the virtual world to be static and would have preferred to see more people and dynamic objects in the scene. The relationship between dynamic objects and sense of presence has been investigated in different ways [48,49], suggesting that particular attention should be paid to the virtual environment, even if the main goal of the VR is to let trainees practice rather than explore the virtual world. Another possible explanation relies on the fact that, if on one hand the combination of overlapped virtual and real objects can enhance the sense of presence, on the other hand it can cause mismatches in the user's perception, as some objects are both virtual and real (i.e., manikin, baby warmer), while others are not (e.g., walls, sink). Research on the sense of presence in MR is still sparse [50]; thus, additional studies will be required to investigate whether the sense of presence in MR differs from that in VR. With the increasing use of immersive technologies, numerous studies have been conducted to assess criticisms that could limit their use in the medical education setting. Even though immersive applications using HMDs are generally engaging and enjoyable [51], some studies have highlighted issues including discomfort due to cumbersome equipment, difficulty of vision, motion sickness, and technical problems [51][52][53]. The latter include difficulties in starting the application, moving in the scenario, and interacting with objects [52]. Also, face-to-face communication is limited, as most applications are single-player. This is especially relevant when instructor-based training is required. Another factor limiting the use of VR for medical training is the cost of the hardware (i.e., high-performance computers and dedicated accessories), as well as the need for dedicated personnel to handle technical issues [52]. Nevertheless, constant efforts are made by companies and researchers to advance the technology and overcome these problems.

As mentioned above, the training of medical procedures is very important for medical students, residents, and healthcare providers. In fact, they all have different educational needs that can be met by using simulators [54]. For instance, medical students and residents need to learn manual skills [55] and established medical procedures, while clinicians and healthcare providers may need to refresh skills or be updated on new guidelines. In all these cases, training in an immersive environment combined with physical elements increases the user's engagement, immersivity, and sense of presence [56], as also reported in our study, and produces better outcomes [57,58].

Conclusions

The goals of this project were to: (i) overcome the limitations of existing VR solutions for medical simulation, namely the lack of passive haptics (which does not allow manual skills to be simulated realistically); (ii) design and develop a mixed reality system for newborn resuscitation training that could increase the number of healthcare providers able to perform high-quality NLS; and (iii) test the usability of the system.
Although the main manoeuvres for NLS training are monitored, additional steps (i.e., checking newborn temperature, advanced airway management, and umbilical line placement for drug administration) can be evaluated and implemented to further increase the educational potential of the system. This can be achieved by adding sensors both to the first aid supplies and to the manikin.

The system, in its current form, uses commercial trackers and cameras to monitor the manikin's position in the real world and the movements of the user's hands. If, on one hand, these technologies are easy to use, on the other hand they present limitations: (i) the tracking of the hands can be lost when they cross virtual objects; (ii) the trackers used are cumbersome and, if touched, may affect the overall experience. Indeed, given the fundamental importance of correct tracking and visualization of virtual hands [59], further research will be pursued to obtain tracking systems that are accurate and manageable.

Results from the questionnaires, as well as interviews with potential users, revealed that the system is generally highly rated for user experience and system usability; however, it has been reported that adding sounds and movements could further improve the realism and immersivity of the application. These suggestions are further supported by the literature. Indeed, studies on how to improve realism in VR settings suggest working on visualization techniques, such as global illumination, dynamic shadowing, ambient occlusion, and physically based rendering materials. Another possibility is to add avatars that move and react to users' actions, as this has been reported to enhance realism [60]. Therefore, we are planning to modify the lighting to improve scene shadows, incorporate materials with more accurate rendering, introduce animated avatars into the virtual environment, and add realistic elements to the scene (e.g., date and time information, posters and fliers commonly displayed in hospital settings).

Acknowledging the existence of other potential methods of training for newborn life support, it is important to mention augmented reality (AR) [61]. AR can be seen as a less immersive alternative to virtual reality, as it combines virtual elements with the user's actual surroundings without completely separating the user from the real world [62]. This reduces the full sensory immersion that VR provides and may result in a less focused and engaged training session, which was the main objective of our project. However, AR still has the potential for broader accessibility and the ability to integrate real-world interactions with virtual elements, offering valuable, albeit different, training opportunities, especially in situations where full VR technology is not feasible [63].
Our study is a pilot study on the design and development of a system combining VR with a physical manikin for NLS training. Consequently, we chose to concentrate our efforts on usability and user experience, preliminary yet significant aspects. Nonetheless, we plan to validate RiNeo MR in an authentic NLS teaching environment by conducting a comparison with other existing tools to determine its efficacy in both short-term and long-term learning outcomes. Given the pedagogical interest of our system, we will carry out a more extensive study involving a larger number of healthcare providers to assess the effectiveness of our simulator compared with manikin-based training. This might include investigating whether the speed and precision of healthcare providers' manoeuvre execution in the mixed reality environment are comparable to those on manikins or in real practice. In this usability study we did not assess gesture speed or precision, as the participant pool included individuals without a medical background who would naturally perform movements at a slower pace.

In conclusion, RiNeo MR is a proof of concept of a mixed-reality NLS simulator. The system can be a promising tool to improve NLS training and spread newborn resuscitation knowledge among all staff involved in perinatal medicine.

Acknowledgments: ... help during software implementation; Prof. Mohamad Maghnie and the school of specialization in paediatrics at Gaslini Children's Hospital; Danilo Canepa, Beatrice Lagormarsino and Alessandra Oliveri for their help with the preparation of the demo video.

Fig 3. 2D and 3D simulation. Left: 3D simulation. The user wears the HMD and is immersed in a virtual environment showing a virtual representation of the manikin. Thanks to an HTC controller that tracks the neonatal island position, the virtual and real manikins are overlapped and move accordingly. Right: 2D simulation. The user can physically interact with the manikin and receives feedback from a User Interface (UI) displayed on a computer screen. Sensors in the manikin and the face mask send data to the immersive VR application or the GUI to monitor the user's performance in real time. UI description: Top left: the UI provides feedback about the manikin's head position. Top right: mask position; simple controls have been implemented to manage the colour code: if the area turns red, positioning is completely incorrect; if it turns yellow, positioning is partially incorrect; if it turns green, positioning is correct. Bottom left: ventilation rhythm; if the sensor in the face mask is pressed (inhalation), the lungs in the UI fill, while during the exhalation phase (sensor not touched) they empty. Bottom right: chest compressions are shown on a line chart according to the data measured by the infrared sensor.
https://doi.org/10.1371/journal.pone.0294914.g003

Fig 4. Scene. The first button, "Start Menu", allows the user to choose whether to proceed in 2D or 3D simulation mode. The user can then choose whether to view the tutorial (i.e., an explanation video) or enter the simulation training scene. After the training, it is possible to view the results obtained during the simulation. In every scene, there is the possibility to go back to the previous one (dotted lines).
https://doi.org/10.1371/journal.pone.0294914.g004
Fig 6. Pipeline of the experiment. First, the subject randomly starts the 2D or 3D simulation. After the first simulation, the subject fills out questionnaires to assess user experience, usability of the system, sense of presence, and motion sickness. Then, he/she starts the second simulation, followed by additional questionnaires.
https://doi.org/10.1371/journal.pone.0294914.g006
The experiences of hospitals in changing the function of a non-teaching hospital to a teaching hospital: Short Communication

Background: In recent years, many non-teaching hospitals have become teaching hospitals. Although the decision to make this change is made at the policy level, the unknown consequences can create many problems. The present study investigated the experiences of hospitals in changing the function of a non-teaching to a teaching hospital in Iran.

Methods: A phenomenological qualitative study was conducted using semi-structured interviews with 40 hospital managers and policy makers who had experience of changing the function of hospitals in Iran, recruited through purposive sampling in 2021. Thematic analysis using an inductive approach and MAXQDA 10 was used for data analysis.

Results: The analysis yielded 16 main categories and 91 subcategories. Considering the complexity and instability of command unity, understanding the change in organizational hierarchy, developing a mechanism to cover clients' costs, considering the increase in the management team's legal and social responsibility, coordinating policy demands with the provision of resources, funding the teaching mission, organizing the multiple supervisory organizations, transparent communication between the hospital and colleges, understanding the complexity of processes, and considering changes to the performance appraisal system and pay-for-performance were the solutions identified for reducing the problems of changing the function of a non-teaching to a teaching hospital.

Conclusion: An important matter in the improvement of university hospitals is evaluating their performance, to maintain their role as progressive actors in the hospital network and as the main actors in teaching future professional human resources. In fact, worldwide, a hospital becoming a teaching hospital is based on its performance.

Hospitals, as the main lever of a country's health system, are divided into teaching and non-teaching hospitals (1). According to 2017 statistics in Iran, around 45% of all active beds, and around 63% of Ministry of Health active beds, are teaching beds, and this share is increasing (2). This rate is much lower in other countries: for example, only 5% of all US hospitals are teaching hospitals (3). Based on this preface and an initial review, it is expected that the difference in mission between teaching and non-teaching hospitals will cause differences in outcome indicators (4)(5)(6)(7)(8)(9)(10). In most countries, teaching hospitals are affiliated with a medical university or are part of a national or local health system and, given their organizational setup, play a strategic role in training physicians (5,6). Meanwhile, non-teaching hospitals in most countries are those with general specialties that operate alongside teaching hospitals to protect society's health (9). Although the establishment of a hospital should be based on the needs of society and a study of facilities, the issue of changing medical units, from change of use to change in activity and type of organization, is not new, and such changes have repeatedly been observed in some countries. The decision to make these changes is made at the policy level, but the unknown consequences and the scale of these changes can create problems for everyone. The present study investigated the experiences of hospitals in changing the function of a non-teaching hospital to a teaching hospital in Iran.
Methods

A phenomenological qualitative study was conducted in Iran in 2021. The research population consisted of 25 hospital managers at different levels and 15 policy makers who had experienced the change of hospital function in Iran, recruited through purposive sampling of significant cases. Participants came from different cities of the country and had at least five years of working experience. Data were collected using semi-structured interviews to obtain an in-depth picture of the participants' perspectives, continuing until saturation. An interview guide was prepared based on the research goals, the theoretical foundations of the topic, and an extensive review of the literature (5)(6)(7)(8)(9)(10)(11). Interview questions included the following: Are the goals of these two types of hospitals different? What are their goals? Are there other differences between teaching and non-teaching hospitals? What were your experiences of these hospitals? Data were gathered over a four-month period from April to June 2021. The trustworthiness of the data was assessed against four established indicators: credibility, dependability, confirmability, and transferability. The interviews lasted from 55 to 120 minutes and were transcribed immediately after each session. Data were analyzed with the inductive thematic analysis approach of Braun and Clarke, using MAXQDA. Accordingly, 1) the coder became immersed in the data by listening, reading, and re-reading; 2) an initial list of ideas behind the data was generated and initial codes were produced; 3) the data were coded and analyzed thoroughly; 4) the themes and sub-themes were reviewed and refined with the research team; 5) the reviewed final themes were documented with attention to cross-links; and 6) the report was produced. Regarding ethical considerations, the code of ethics IR.IUMS.REC.1398.259 was obtained from the research deputy of Iran University of Medical Sciences with COI 675856.

Results

The analysis of the 40 interviews yielded 354 primary codes, which were reduced to 135 codes after deleting duplicates and merging similar codes. Ultimately, the leading codes from the data analysis were assigned to 91 subcategories and 16 categories (Table 1).

Discussion

The present study investigated the experiences of hospitals in changing the function of a non-teaching hospital to a teaching hospital in Iran. According to the results, teaching hospitals are more expensive. In practice, financial and budget resources in these hospitals are mainly channelled into one or two of their three main functions in various cases (4). The findings identified the factors that increase the costs of such hospitals. The most important problem in providing health and medical services is the economic one, and hospitals in Iran still face this problem (10). Notably, the results show that in some countries extra resources are appropriated to university hospitals through specific budgets labelled for education, research, or serving complex cases (12). Another element of an organization is its structure. According to the principle that "structure follows strategy" (13), along with a change in the goal, the structure will also change. Studies also show that, when the purpose of a hospital changes, many changes occur in its human resources (14).
This is because building or changing an organization is not like building or changing the layout of a physical structure (1,14,15). Further changes are also necessary, especially in internal and external communications; it should be noted that an organization is not simply a structure or a chart (15). The next element subject to change is the "patient" as the main customer of medical institutions (14). The present study emphasizes that more complex cases are referred to teaching hospitals, which is confirmed by other studies. Medical residents and interns in the medical education system who are involved in patient care processes may expose the patient to physical, mental, and even economic complications (15). In addition, the results indicate that internal and external customer satisfaction was another category. From the perspective of distributive justice, although the community's future interest in training experienced physicians is taken into account in medical education, the effect of this on the health system does not appear to have been evaluated (16). Based on the present results, the chain of results of teaching and non-teaching hospitals differs in Iran and worldwide; according to other studies, the output indicators of teaching and non-teaching hospitals are different (4)(5)(6)(7)(8)(9)(10)(11). Organizational behavior also changed during the conversion of the hospital. Worldwide, a matrix structure supports university medical centers in view of their multiple missions (11). In most of the world's university medical centers, the head of a clinical unit is accountable both to the CEO of the hospital and to the head of the medical university (11); the head of the clinical unit is a key link between the head of the university and the hospital CEO (12,17). Another issue in this organizational change is facility management, in particular the requirements and prerequisites of physical space. Facility management in a hospital is the process of assuring supervisors that the hospital's buildings, equipment and facilities, access, and engineering and architecture support its missions as an important medical institution (18). In designing a building, it is necessary to study the basic needs related to the list of spaces, the types of functions, and their capacity as a physical program. Finally, considering the complexity and instability of command unity, understanding the change in organizational hierarchy, developing a mechanism to cover clients' costs, considering the increased legal and social responsibility of the management team, prioritizing organizational goals, coordinating policy demands with the provision of resources, funding the teaching mission, organizing the multiple supervisory organizations, transparent communication between the hospital and colleges, understanding the complexity of processes, considering changes in individual and group communication, and changing the performance appraisal and pay-for-performance systems were the solutions for reducing the problems of changing the function of a non-teaching hospital to a teaching hospital. An important aspect of improving the performance of university hospitals is evaluating hospital performance so that they maintain their role as progressive actors in the hospital network and as the main actors in teaching the future professional workforce. In fact, for a hospital to become a teaching hospital, its performance is of central importance.

Limitations: Some experts were not willing to cooperate or participate in the interviews.
Attempts were made to solve this issue and to attract their participation by sending official letters of recommendation via our colleagues. As the present research was a qualitative study, it was able to examine what happens after the change of function in hospitals; however, this topic can also be explored from other perspectives, such as differences in challenges and problems, or with other study methods.
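As a purely illustrative aside, the code-reduction step described in the Results (354 primary codes collapsed into 135 codes and then grouped into subcategories) can be sketched programmatically. The study itself performed this work in MAXQDA; the code labels and merge mapping below are hypothetical and are not taken from the study data.

```python
from collections import defaultdict

# Illustrative sketch only: mimics the reduction step of inductive thematic
# analysis, in which duplicate primary codes are removed and similar codes
# are grouped under analyst-defined subcategories. All labels are invented.
primary_codes = [
    "unclear chain of command", "unclear chain of command",
    "dual reporting to university", "education budget shortfall",
    "education budget shortfall", "longer patient stays",
]

# Hypothetical mapping from a merged code to its subcategory.
merge_map = {
    "unclear chain of command": "instability of command unity",
    "dual reporting to university": "change of organizational hierarchy",
    "education budget shortfall": "funding the teaching mission",
    "longer patient stays": "covering clients' costs",
}

def reduce_codes(codes, mapping):
    """Deduplicate primary codes and group them under subcategories."""
    unique_codes = sorted(set(codes))          # remove duplicate codes
    subcategories = defaultdict(list)
    for code in unique_codes:
        subcategories[mapping[code]].append(code)
    return subcategories

if __name__ == "__main__":
    for subcategory, codes in reduce_codes(primary_codes, merge_map).items():
        print(f"{subcategory}: {codes}")
```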
Genome‐Wide CRISPR‐Cas9 Screening Identifies NF‐κB/E2F6 Responsible for EGFRvIII‐Associated Temozolomide Resistance in Glioblastoma Abstract Amplification of epidermal growth factor receptor (EGFR) and active mutant EGFRvIII occurs frequently in glioblastoma (GBM) and contributes to chemo/radio‐resistance in various cancers, especially in GBM. Elucidating the underlying molecular mechanism of temozolomide (TMZ) resistance in GBM could benefit cancer patients. A genome‐wide screening under a clustered regularly interspaced short palindromic repeats (CRISPR)‐Cas9 library is conducted to identify the genes that confer resistance to TMZ in EGFRvIII‐expressing GBM cells. Deep sgRNA sequencing reveals 191 candidate genes that are responsible for TMZ resistance in EGFRvIII‐expressing GBM cells. Notably, E2F6 is proven to drive a TMZ resistance, and E2F6 expression is controlled by the EGFRvIII/AKT/NF‐κB pathway. Furthermore, E2F6 is shown as a promising therapeutic target for TMZ resistance in orthotopic GBM cell line xenografts and GBM patient‐derived xenografts models. After integrating clinical data with paired primary–recurrent RNA sequencing data from 134 GBM patients who received TMZ treatment after surgery, it has been revealed that the E2F6 expression level is a predictive marker for TMZ response. Therefore, the inhibition of E2F6 is a promising strategy to conquer TMZ resistance in GBM. Introduction Glioblastoma (GBM) is the most aggressive primary brain tumor with high proliferation and invasion and easy recurrence after surgery. [1] Following advanced standard treatment, including resection followed by radio-and chemotherapy, the median survival time of GBM patients is only ≈14 months. [2] Epidermal growth factor receptor (EGFR) is one of the important genes driving GBM. EGFR is amplified or mutated which leads to tumor cell invasion and tumor-related angiogenesis via the aberrant activation of downstream signaling networks. EGFRvIII is the most common mutant that can be detected in up to 30% of GBM patients. [3] EGFRvIII is derived from the deletion of exons 2-7, which results in an in-frame deletion of 267 amino acids from the extracellular domain of wild-type (wt) EGFR. EGFRvIII is incapable of binding any known ligands, but its tyrosine kinase is constitutively activated. Although some new therapeutic strategies were emerging, such as systemic delivery of monoclonal antibodies, [4] temozolomide (TMZ) is the oral alkylating agent that serves as the current standard therapeutic for newly diagnosed GBM. TMZ reportedly causes cell cycle arrest in the G2/M phase and mediates DNA damage and, subsequently, apoptosis. [5] Although oral TMZ administration contributes to an overall increase in the survival of GBM patients, cancer cells eventually develop resistance to TMZ. [6] Currently, overcoming TMZ resistance remains a major challenge for GBM treatment. Searching for vulnerabilities of TMZ-resistant GBM is a promising strategy to improve the therapeutic efficiency of TMZ. Here, we used a clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 screening approach by applying a recently described GecKOv2 human library containing 123 411 sgRNAs that target 20 914 human genes, including 19 050 protein-coding genes and 1864 microRNAs. [7] We transduced U87 parental cells and U87EGFRvIII cells with lentiviruses of the pooled library. The sgRNA abundance was determined by deep sequencing upon the chemostress of TMZ. 
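The screening readout described above reduces to comparing sgRNA read counts between treated and untreated libraries. The minimal sketch below illustrates the core calculation only: counts are normalized to reads per million, a log2 fold change is computed per sgRNA between the TMZ- and DMSO-treated libraries, and sgRNA scores are averaged per gene. The read counts and sgRNA names are hypothetical, and real screening pipelines (e.g., MAGeCK) do considerably more than this.

```python
import math
from collections import defaultdict

# Hypothetical sequencing read counts per sgRNA in the two libraries.
counts_tmz = {"E2F6_sg1": 820, "E2F6_sg2": 610, "CTRL_sg1": 95}
counts_dmso = {"E2F6_sg1": 140, "E2F6_sg2": 120, "CTRL_sg1": 100}

def rpm(counts):
    """Normalize raw read counts to reads per million."""
    total = sum(counts.values())
    return {sg: 1e6 * c / total for sg, c in counts.items()}

def sgrna_log2fc(treated, control, pseudocount=1.0):
    """Per-sgRNA log2 fold change of normalized counts, treated vs control."""
    t, c = rpm(treated), rpm(control)
    return {sg: math.log2((t[sg] + pseudocount) / (c[sg] + pseudocount))
            for sg in t}

def gene_scores(sg_scores):
    """Average sgRNA scores per gene (gene name taken from 'GENE_sgN')."""
    per_gene = defaultdict(list)
    for sg, score in sg_scores.items():
        per_gene[sg.split("_")[0]].append(score)
    return {gene: sum(v) / len(v) for gene, v in per_gene.items()}

# Positive scores indicate sgRNAs enriched under TMZ, i.e., candidate
# resistance genes such as E2F6 in this toy example.
print(gene_scores(sgrna_log2fc(counts_tmz, counts_dmso)))
```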
Among the candidate genes, we focused on E2F6 because its expression was increased in EGFRvIII-overexpressing cells as determined by RNA-seq. Furthermore, we found that E2F6 is a direct target gene of NF-κB. We also demonstrated, using gain- or loss-of-function experiments, that E2F6 is a pivotal gene mediating TMZ chemoresistance. Overall, our results suggest that E2F6 inhibition is a promising therapeutic strategy for TMZ-resistant GBM.

A Genome-Wide Pooled sgRNA Library Screen Identifies Vulnerabilities in EGFRvIII-Expressing GBM Treated with TMZ

We first transduced U87 cells with lentiviruses expressing EGFRvIII and vector to obtain U87-EGFRvIII and U87wt cells, respectively, which were used for genome-wide pooled sgRNA screening. To identify the vulnerabilities of TMZ-resistant GBM cells, we infected the U87-EGFRvIII and U87wt cells with a sgRNA lentiviral library harboring 123 411 sgRNAs targeting 20 914 human genes, including 19 050 annotated protein-coding genes and 1864 microRNA expression genes. [7] The cells were treated with DMSO vehicle and TMZ for 7 and 14 days at 350 × 10⁻⁶ m, a concentration that predominantly inhibits U87wt cell proliferation. The coverage of sgRNAs in the cells was then calculated by high-throughput sequencing after amplification of the sgRNA sequences in the genome (Figure 1a). Compared with the untreated group, 1287 and 1196 genes in the U87-EGFRvIII cells were identified as TMZ-resistance candidates after treatment with TMZ for 7 and 14 days, respectively (Figure 1b). Further, we compared the U87wt and U87-EGFRvIII libraries treated with TMZ for 7 and 14 days, and found 981 and 929 differentially enriched sgRNAs associated with resistance or sensitivity to TMZ, respectively (Figure 1c). Among these four groups, 191 sgRNAs were differentially enriched (Figure 1d). Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis showed that these commonly differentially expressed sgRNAs were enriched in the spliceosome, ribosome, and cell cycle pathways (Figure 2a). We further analyzed these genes by Ingenuity Pathway Analysis (IPA) and demonstrated their association with cancer, the cell cycle, and the DNA repair pathway in U87-EGFRvIII cells treated with TMZ for 14 days (Figure 2b; Table S2, Supporting Information). Compared with U87 cells, the most significantly altered pathways in U87-EGFRvIII cells treated with TMZ were the CD28, NF-κB signaling, and NFAT pathways (Figure 2c; Table S3, Supporting Information). We simulated the pathways revealed by IPA and found that the EGFRvIII/PI3K/AKT/NF-κB and G protein-coupled receptor (GPCR)/PLC/PKC/NFAT pathways are the most responsible for TMZ resistance in EGFRvIII-expressing GBM cells (Figure 2d; Table S4, Supporting Information).

E2F6 Was Induced by EGFRvIII and TMZ Treatment

To identify the vulnerabilities of EGFRvIII-expressing cells that were resistant to TMZ treatment, we initially explored the genes possibly regulated by EGFRvIII by comparing the transcriptomic alterations between U87wt and U87-EGFRvIII cells. In total, 19 genes were upregulated in U87-EGFRvIII cells with an adjusted P value less than 0.05 and a fold change larger than 2 (Figure S2a, Supporting Information). Subsequently, we screened the sgRNAs that were depleted in response to TMZ treatment for 14 days (Figure S2b, Supporting Information). We speculated that cells overexpressing these genes show therapeutic resistance to TMZ. Meanwhile, we conducted RNA sequencing of U87-EGFRvIII and U87wt cells under TMZ or DMSO treatment.
Principal component analysis (PCA) showed that the TMZ-induced differentially expressed genes were completely different from those in the DMSO-treated counterparts (Figure S2c, Supporting Information). By overlapping the differentially expressed sgRNAs, mRNAs, and EGFRvIII-regulated genes, E2F6 was the only hit induced by both the EGFRvIII mutation and TMZ treatment.

E2F6 Expression Is Correlated with Glioma Grade in the Classical Subtype

To gain insight into the expression profile of E2F6 in glioma samples, we employed The Cancer Genome Atlas (TCGA) RNA-seq data and the Rembrandt microarray data. Gliomas of World Health Organization (WHO) grade II, III, or IV were selected for the following analyses. As shown in Figure 3a, the E2F6 expression level was significantly associated with tumor grade (P < 0.0001). Moreover, the E2F6 expression level was higher in the classical subtype than in the proneural (PN) and mesenchymal (MES) subtypes (Figure 3b, P < 0.003). In addition, analyses of the TCGA HG-U133A and Agilent 4502A microarray and RNA-seq data cohorts revealed that E2F6 expression was significantly elevated in GBM compared with normal brain tissues (NBT) (Figure S2d-f, Supporting Information). Furthermore, immunohistochemical (IHC) staining of 31 WHO II, 32 WHO III, and 54 WHO IV patients confirmed that elevated levels of E2F6 expression are associated with high glioma grade (Figure 3c). Additionally, E2F6 was found to be a valuable gene for GBM diagnosis, with high sensitivity and specificity in these three datasets (Figure S2g-i, Supporting Information). Thereafter, the genes correlated with E2F6 in the TCGA RNA-seq data were selected (Figure 3d). KEGG analysis showed that these genes are associated with the cell cycle and DNA repair (Figure 3e).

E2F6 Is a TMZ-Resistance Gene

We next investigated the effects of E2F6 on TMZ resistance. Four GBM cell lines (U87, N5, N9, and N33) were stably infected with lentiviruses expressing EGFRvIII or vector control and were subsequently treated with TMZ or DMSO vehicle. Immunoblot analyses revealed that the E2F6 level was significantly increased in EGFRvIII cells, as compared with vector-infected GBM cells, and in cells treated with TMZ (Figure 4a,b), suggesting that E2F6 plays a pivotal role in TMZ resistance. Furthermore, we ectopically expressed and knocked down E2F6 in U87, U87-EGFRvIII, N5, and N5-EGFRvIII cells (Figure S3a,b, Supporting Information). Among the three small interfering RNAs (siRNAs) designed, siRNA#1 knocked down E2F6 with the greatest efficiency (E2F6 KD) and was therefore used in subsequent experiments (Figure S3c,d, Supporting Information). Following treatment with and without TMZ, the cells were subjected to CCK8 viability assay. We found that overexpression of E2F6 increased resistance to TMZ, whereas silencing E2F6 reduced the resistance of EGFRvIII cells to TMZ (Figure 4c; Figure S3e, Supporting Information). Furthermore, the clonogenic assay also showed that E2F6 expression was closely correlated with TMZ resistance (Figure 4d).

[Figure 1 caption: a) flowchart of the genome-wide screen of EGFRvIII-regulated/TMZ resistance-associated genes using the pooled GeCKOv2 human lentiviral library; b) 1287 and 1196 TMZ resistance-associated genes identified in U87EGFRvIII cells at days 7 and 14 (Table S5, Supporting Information); c) 981 and 929 differentially expressed genes associated with TMZ resistance and sensitivity from comparison of the U87 and U87EGFRvIII sgRNA profiles (Table S6, Supporting Information); d) Venn diagram of the EGFRvIII-expressing (vIII), negative-control (NC), TMZ-treated (TMZ), and DMSO-treated (DMSO) groups.]

It has been shown that EGFRvIII confers radiation resistance by accelerating the repair of DNA double-stranded breaks (DSBs). [8] Because E2F6 was induced by EGFRvIII, we reasoned that E2F6 could increase DSB repair capacity, leading to TMZ resistance. To test this, we examined the level of γ-H2AX, a hallmark of DSBs, in U87, U87-EGFRvIII, N5, and N5-EGFRvIII cells after treatment with and without TMZ. Fluorescence staining revealed that DSBs were induced upon exposure of the cells to TMZ. Ectopic expression of either E2F6 or EGFRvIII dramatically reduced TMZ-caused DSBs.

[Figure 2 caption: b) differentially expressed genes, calculated by interactive genetic algorithm (IGA), were enriched in DNA replication, repair, and the cell cycle; c) pathway enrichment analysis for up- and downregulated genes between EGFRvIII and control cells treated with TMZ at days 7 and 14; d) schematics of the EGFRvIII/PI3K/AKT/NF-κB and GPCR/PLC/DAG/NFAT signaling pathways, the most significantly changed in EGFRvIII-overexpressing cells treated with TMZ at the two time points.]

However, knockdown of E2F6 largely abrogated the protective effect of EGFRvIII on DSBs (Figure 4e). Additionally, cancer stem cells are well recognized to be more resistant to TMZ. Thus, we assessed whether E2F6 was highly expressed in neurosphere glioma cells. As shown in Figure 4f, E2F6 expression was significantly higher in neurosphere glioma cells than in differentiated glioma cells. The representative stem cell gene CD133 was detected only in neurosphere glioma cells, and the representative differentiated cell gene GFAP was downregulated in neurosphere glioma cells. We then performed a limiting dilution assay for sphere formation with two EGFRvIII-expressing cell lines transduced with shE2F6 or shScr. We found that the E2F6-depleted U87vIII and N33vIII cells gave rise to fewer and smaller spheres than the control cells in the context of 375 × 10⁻⁶ m TMZ treatment (Figure 4g). Taken together, these results demonstrate that E2F6 acts as a critical player by which EGFRvIII cells acquire resistance to TMZ through inhibition of DSBs.

[Figure 4 caption: E2F6 is a key gene in TMZ resistance. a) Immunoblotting analysis of four GBM cell lines infected and treated with and without EGFRvIII and TMZ, with quantified E2F6 protein levels (mean ± SD, n = 3, *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001); b) qRT-PCR analysis of E2F6 mRNA in the four cell lines, with GAPDH as the negative control (mean ± SD, n = 3).]

E2F6 Is Regulated by NF-κB in the EGFRvIII/PI3K/AKT Pathway

Previous studies showed that EGFRvIII preferentially activates phosphoinositide-3-kinase (PI3K)-AKT, [9] and subsequently the noncanonical NF-κB pathway.
[10] To determine whether E2F6 is regulated by the AKT/NF-κB pathway, we used MK-2206 to inhibit AKT phosphorylation and JSH-23 to inhibit the nuclear translocation of NF-κB (Figure S4a,b, Supporting Information). We treated the EGFRvIII cells with the AKT inhibitor MK-2206 (10 × 10⁻⁶ m) for 48 h and found that the mRNA and protein levels of E2F6 were decreased (Figure S4c,d, Supporting Information). We next examined the effect of NF-κB on E2F6 expression by treating and transfecting GBM cells with the NF-κB inhibitor JSH-23 (10 × 10⁻⁶ m) and a RelA/p65 expression plasmid. Western blot and qRT-PCR analyses revealed that the expression of E2F6 was positively associated with NF-κB activation in GBM cells at the protein (Figure 5a) and mRNA (Figure 5b) levels. Additionally, NF-κB and AKT were both activated by EGFRvIII and TMZ in total lysates of four GBM cell lines (Figure 5c,d). Furthermore, we separated the cytoplasmic and nuclear fractions and found that translocation of p-NF-κB to the nucleus was increased after EGFRvIII and TMZ treatment (Figure 5e). These observations suggest that E2F6 is induced by the EGFRvIII/PI3K/AKT/NF-κB axis. Moreover, we evaluated the chromatin state of the E2F6 promoter regulated by EGFRvIII in GBM cell lines. ChIP-PCR analysis was performed using antibodies against H3K4me3 and p-NF-κB (p-p65) and four pairs of genomic PCR primers specific for the E2F6 promoter (Figure S4e, Supporting Information). We found that the enrichment of H3K4me3 and p-NF-κB was increased at E2F6 promoter regions in EGFRvIII-transfected GBM cells compared with control cells (Figure 5f). Further, blocking AKT signaling using MK-2206 abolished the enrichment of H3K4me3 and NF-κB (Figure 5g). To further assess the functionality of the NF-κB-binding sites, we cloned the ChIP-qPCR fragments of the E2F6 promoter into a luciferase vector. GBM cells were cotransfected with the RelA/p65 expression plasmid and the reporter vectors. Luciferase assays showed that NF-κB directly activated E2F6 expression (Figure 5h). These data indicate that E2F6 transcription is regulated by NF-κB downstream of the EGFRvIII/PI3K/AKT pathway (Figure 5i).

E2F6 Is a Promising Therapeutic Target for TMZ Resistance

Next, we investigated whether E2F6 acts as a critical factor in TMZ resistance in vivo. We stably infected U87 cells with lentiviruses expressing E2F6 or vector alone, and U87EGFRvIII cells with a lentivirus encoding E2F6 siRNA#1 (E2F6 KD) or vector alone. Orthotopic GBM mouse models were constructed by intracranially injecting these four groups of cells into mice. One week later, the mice were intraperitoneally injected with DMSO or TMZ (5 mg kg⁻¹ d⁻¹) for 2 weeks at 5 days on and 2 days off (Figure 6a). Bioluminescence imaging analyses showed that tumors established with U87/E2F6 cells resisted TMZ treatment (Figure 6b), whereas tumors created with U87-EGFRvIII/E2F6siRNA cells were sensitive to TMZ (Figure 6c), when compared to tumors established with the corresponding vector control cells. Moreover, we observed that TMZ treatment significantly reduced body weight loss in vector-infected U87 and U87/EGFRvIII mice, and especially in U87/EGFRvIII/E2F6KD mice, when compared to untreated mice (Figure 6d,e), but had no notable effect on U87/E2F6 mice (Figure 6d). Notably, Kaplan-Meier survival curve analysis showed that TMZ-treated U87 and U87/EGFRvIII/E2F6KD mice had much longer survival times than untreated mice (Figure 6f,g), whereas TMZ had no notable effect on U87/E2F6 mice (Figure 6f).
These findings indicate that E2F6 plays a pivotal role in TMZ resistance in vivo. Immunohistochemical staining of xenografts revealed that tumors derived from U87 cells treated with TMZ exhibited elevated levels of p-NF-κB and E2F6 (Figure 6h). Moreover, p-NF-κB and E2F6 were significantly increased in EGFRvIII-bearing tumors, especially tumors treated with TMZ (Figure 6i). Taken together, these data show that E2F6 is a vulnerability of TMZ-resistant GBM and that inhibition of E2F6 could be a promising therapeutic strategy for TMZ-resistant GBM.

Inhibition of the NF-κB/E2F6 Axis Sensitizes GBM to TMZ

Because no small-molecule inhibitor of E2F6 is available, we first assessed whether the p-AKT inhibitor (MK-2206) and the NF-κB inhibitor (JSH-23) had synergistic effects with TMZ treatment. Compared with JSH-23 or TMZ alone, JSH-23 plus TMZ exhibited enhanced cytotoxicity in GBM cells. The combination index (CI) values were all <0.8, indicating a strongly synergistic interaction between JSH-23 and TMZ in GBM cells (Figure S5a, Supporting Information). Similarly, MK-2206 also showed a synergistic effect with TMZ treatment in GBM cells (Figure S5b, Supporting Information). Subsequently, we investigated whether pharmacological inhibition of NF-κB (JSH-23), a critical regulator of E2F6, inhibits tumor growth and sensitizes tumors to TMZ in an orthotopic patient-derived GBM xenograft mouse model. Tumor samples from a primary GBM patient following three courses of TMZ treatment were orthotopically injected into mice; this patient also had a recurrent GBM after surgery and TMZ treatment. After 1 week, the mice were randomly divided into four groups (seven mice per group), which were treated with TMZ (5 mg kg⁻¹ d⁻¹), JSH-23 (6 mg kg⁻¹ d⁻¹), TMZ (5 mg kg⁻¹ d⁻¹)/JSH-23 (6 mg kg⁻¹ d⁻¹), or phosphate-buffered saline (PBS) vehicle control for 2 weeks at 5 days on and 2 days off (Figure 7a).

[Figure 4 caption, continued: c) cell viability assay in U87 and N5 cell lines, with P values calculated at 96 h (***P < 0.001, ****P < 0.0001); d) clonogenic assays of the effects of E2F6 on TMZ sensitivity, with relative clone numbers (mean ± SD, n = 3, ns = no significance, *P < 0.05, **P < 0.01, ****P < 0.0001); e) immunofluorescence staining of U87 cells with γ-H2AX antibody after infection with EGFRvIII, E2F6, and/or siRNA/E2F6 and treatment with and without TMZ, showing that E2F6 overexpression resisted TMZ-induced DNA damage whereas E2F6 depletion had the opposite effect (scale bar: 20 µm); f) Western blot analysis of E2F6, CD133, and GFAP in neurosphere and differentiated glioma cells; g) representative images of spheres at 100 cells per well (scale bar, 100 µm) and in vitro limiting dilution assays of glioblastoma stem cells (GSCs) with or without E2F6 knockdown in U87vIII and N33vIII cells at 375 × 10⁻⁶ m TMZ; after 3 weeks, frequency and probability estimates were calculated using the ELDA software (**P < 0.01, ***P ≤ 0.001).]

[Figure 5 caption, panels a–h: a,b) qRT-PCR analysis of E2F6 expression in GBM cells infected or treated with and without NF-κB (p65/RelA), EGFRvIII, or JSH-23 (mean ± SD, n = 3, ****P < 0.0001); c) immunoblotting of GBM cells after infection and treatment with and without EGFRvIII and/or TMZ, with quantified p-NF-κB protein levels (mean ± SD, n = 3, *P < 0.05, ***P < 0.001, ****P < 0.0001); d) Western blot analysis of AKT and p-AKT (Ser 473) in four GBM cell lines; e) Western blot analysis of p-NF-κB p65 in separated cytoplasmic and nuclear fractions; f,g) ChIP-PCR assays showing that EGFRvIII increased the enrichment of H3K4me3 and p-NF-κB at the E2F6 promoter, whereas the AKT inhibitor MK-2206 reduced it (*P < 0.05); h) luciferase reporter assays in four GBM cell lines cotransfected with Renilla plasmid.]

Bioluminescent imaging revealed that mice treated with the combination of TMZ and JSH-23 exhibited sustained tumor regression compared with mice treated with either agent alone (Figure 7b). In addition, the combination treatment group showed a significantly reduced rate of body weight loss (Figure 7c) and a longer survival time compared with the other groups (Figure 7d). On day 14 of treatment, representative mice from each group were euthanized and the xenograft tumors were removed for immunohistochemical analysis. We found that tumors treated with TMZ alone expressed elevated levels of p-NF-κB and E2F6, whereas JSH-23 treatment inhibited p-NF-κB and E2F6 expression (Figure 7e). Taken together, these data indicate that suppression of E2F6 by inactivation of NF-κB can increase TMZ sensitivity.

E2F6 Is a Poor Prognostic Marker in GBM

To further evaluate E2F6 as a biomarker in GBM patients, we collected 134 GBM samples with progression-free survival (PFS) data from the Chinese Glioma Genome Atlas (CGGA) RNA-seq database. Among the 191 candidate genes, univariate Cox analyses revealed 18 PFS-associated genes, including E2F6, MUC1, and TRAF1 (Figure 8a). A high level of E2F6 was significantly correlated with poor PFS (Figure 8b). Further analysis showed that E2F6 may serve as an independent poor prognostic marker (Figure 8c; hazard ratio: 1.689, 95% confidence interval: 1.172-2.433, P = 0.005). Moreover, we analyzed the relationship between the E2F6 protein level and PFS in 53 GBM patients (primary tumors) who had undergone standard treatment at Beijing Tiantan Hospital. Immunohistochemical staining and statistical analyses revealed that the E2F6 expression level was negatively related to PFS (Figure 8d); that is, patients whose tumors expressed high levels of E2F6 had a significantly shorter PFS (Figure 8e,f).

Discussion

In this study, we performed a genome-wide screening in GBM cells treated with and without EGFRvIII and TMZ and identified the PI3K/AKT/NF-κB pathway as responsible for TMZ resistance in EGFRvIII-expressing GBM cells. More significantly, EGFRvIII rendered GBM cells resistant to TMZ, and E2F6 was identified as a critical target in TMZ resistance and in GBM bearing EGFRvIII. TMZ treatment and EGFRvIII induce E2F6 at both the protein and mRNA levels through activation of the PI3K/AKT/NF-κB pathway. Depletion of E2F6 sensitizes GBM cells to TMZ and abrogates EGFRvIII-associated TMZ resistance. These findings are important in several respects: first, they demonstrate E2F6 as a pivotal downstream target gene of the EGFRvIII/PI3K/AKT/NF-κB axis; second, they establish a direct connection of E2F6 with EGFRvIII-promoted GBM progression and TMZ resistance; and finally, they provide evidence of E2F6 as a potential therapeutic target and a valuable prognostic marker for GBM, especially tumors harboring EGFRvIII. Currently, TMZ is the most commonly used and most effective drug in the treatment of GBM. TMZ administration alone can improve the median survival of GBM patients by 8 months.
[11] However, almost all patients eventually develop resistance. Moreover, recurrent GBMs are generally more aggressive than primary tumors. Therefore, there is a critical need to understand how TMZ resistance is acquired. TMZ mediates cytotoxicity primarily via the formation of an O6-methylguanine (O6MeG) adduct, which mispairs with thymine instead of cytosine during DNA replication. This mismatch repair injury leads to replication-associated DNA DSBs, often triggering cell death. [12] Increased expression of O6-methylguanine-DNA methyltransferase (MGMT) has been shown to cause resistance, making the tumor resistant to TMZ via repair of the O6-guanine adduct. [13] In addition, growing evidence suggests that other proteins mediate tumor resistance to TMZ. A large body of literature has demonstrated that the PI3K/AKT pathway plays an important role in TMZ resistance, [14] and evidence exists that glioma resists TMZ via the PI3K/AKT/NF-κB signaling cascade. [15] In particular, activation of NF-κB is a general cellular response to anticancer drugs. [16] Therefore, TMZ treatment is also likely to activate NF-κB through some mechanism. In this report, we showed that EGFRvIII rendered GBM cells resistant to TMZ by activating the PI3K/AKT/NF-κB pathway. Furthermore, we employed a patient-derived xenograft (PDX) mouse model to show that inactivation of NF-κB is a promising therapeutic strategy to increase TMZ sensitivity. Recently, bortezomib, a therapeutic proteasome inhibitor, [17] in combination with TMZ lacked efficacy in advanced refractory solid tumors or melanoma patients in a phase II clinical trial (Clinicaltrials.gov identifier: NCT00512798). Although bortezomib and JSH-23 both inhibit NF-κB, albeit in different ways, the key downstream factors are distinct. The inhibitory effects of bortezomib on cell growth are potentially due to downregulation of the cytokines CXCL8 and CXCL1, which are regulated by NF-κB and play an important role in promoting the growth and metastasis of melanomas. [17] However, CXCL8 and CXCL1 may not be TMZ resistance genes, so their combined effect with TMZ might simply be a synergy of toxicity rather than chemotherapy sensitization. It is recognized that activation of NF-κB is associated with drug resistance in many tumors, but we believe there should be other critical genes mediating drug resistance. In this study, we identified E2F6 as a key gene associated with TMZ resistance in EGFRvIII GBM using the CRISPR-Cas9-based genome-wide screening system. Moreover, E2F6 expression is controlled by the EGFRvIII/AKT/NF-κB pathway. Thus, the combination of JSH-23 and TMZ should be a promising treatment for EGFRvIII GBM, EGFR-amplified GBM, or classical-subtype GBM.

[Figure 5 caption, continued: h) luciferase activity was normalized to Renilla (mean ± SD, n = 3, **P < 0.01, ***P ≤ 0.001); i) schematic of the mechanism of E2F6 regulation by the EGFRvIII/PI3K/AKT pathway: NF-κB is simultaneously activated by EGFRvIII/PI3K/AKT and TMZ, leading to transcriptional activation of E2F6, and EGFRvIII/PI3K/AKT additionally induces H3K4me3 modification, further activating E2F6 transcription.]

[Figure 6 caption: E2F6 is a causal factor of TMZ resistance in vivo. a) Schematic of the in vivo evaluation: mice were intracranially injected with U87, U87/E2F6, U87/EGFRvIII, or U87/EGFRvIII/E2F6 KD cells, randomly divided into four groups (seven mice per group), and treated by intraperitoneal injection of TMZ (5 mg kg⁻¹ d⁻¹) or DMSO for 2 weeks (5 days on and 2 days off), with tumor growth monitored by bioluminescence; b,c) representative pseudocolor bioluminescence images with quantitative photon flux analysis, P value calculated on day 28 (mean ± SD, n = 7, **P < 0.01, ***P < 0.001, ****P < 0.0001); d,e) body weights of GBM-bearing mice after treatment with TMZ or vehicle control (*P < 0.05, ***P < 0.001, ****P < 0.0001, ns: not significant); f,g) Kaplan-Meier curves of overall survival for mice bearing U87- and U87/E2F6-derived, and U87/EGFRvIII- and U87/EGFRvIII/E2F6KD-derived, tumors; h,i) representative immunostaining images of EGFRvIII, p-NF-κB, and E2F6 in the GBM xenografts (scale bar: 50 µm).]

Previous studies have shown that NF-κB is activated by both canonical and noncanonical pathways. As a noncanonical pathway, EGFRvIII activates NF-κB via PI3K/AKT. In addition, NF-κB may also be activated by the DNA damage response and is thought to play a key role in cell survival. [18] Many genotoxic drugs also induce NF-κB through both canonical and noncanonical signaling pathways. [19] In addition, NF-κB is activated by TMZ-induced mismatch repair, and AKT-dependent activation protects cells from cytotoxicity. [15] Whether DNA damage-induced NF-κB activation protects cells or induces apoptosis depends on the cell line, the dose, and drug characteristics. [19] In the present study, we found that NF-κB activation induced by EGFRvIII and TMZ promotes GBM cell survival and TMZ resistance. Notably, we identified that E2F6 is transcriptionally activated by NF-κB. In addition, EGFRvIII/AKT increases the accumulation of H3K4me3 in the E2F6 promoter region, thereby further enhancing E2F6 expression at the transcriptional level. The E2F6 protein is an Rb-independent transcriptional repressor. E2F6 inhibits E2F-dependent S-phase transcription and provides transcriptional conditions for G1/S-related genes. [20] Studies have shown that, under replication stress, Chk1 maintains E2F1-3 transcriptional activity via the phosphorylation of E2F6. [21] Previous studies showed that E2F6, E2F7, and E2F8 are induced in cells treated with DNA-damaging agents and play important roles in DNA damage repair, increasing cell resistance to chemotherapeutic drugs and maintaining cell survival. [22] E2F6 has recently been shown to promote cancer stem cell survival and growth and to be frequently overexpressed in a number of cancers. [23]

[Figure 7 caption: PDX mice were treated with TMZ, JSH-23, TMZ/JSH-23, or PBS vehicle control for 2 weeks (5 days on and 2 days off), with tumor growth evaluated by bioluminescence once a week and body weights measured every 2 days; b) representative bioluminescence images of the intracranial PDX mice at days 7 and 14 (mean ± SD, n = 7, **P < 0.01, ***P < 0.001, ****P < 0.0001); c) body weights of PDX mice (*P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001); d) Kaplan-Meier curve showing increased overall survival in the TMZ/JSH-23 combination group (**P < 0.01, ***P < 0.001, ****P < 0.0001); e) representative immunostaining of NF-κB, p-NF-κB, and E2F6 in PDX tumors (scale bar: 50 µm).]

[Figure 8 caption: the cutoff of E2F6 was identified using X-tile software; c) univariate and multivariate Cox analyses showing that E2F6 is an independent prognostic marker in the CGGA GBM cohort; d,e) immunohistochemical staining of GBM tumors from Tiantan Hospital with E2F6 antibody, with E2F6 expression negatively associated with PFS (Spearman's correlation: R = −0.5007, P < 0.0001); f) representative magnetic resonance images, E2F6 immunochemistry, and progression-free survival data in two GBM patients.]

Here, using CRISPR-based genome-wide screening, we identified E2F6, but not other members of the E2F family, as an important TMZ-resistance gene in EGFRvIII-expressing GBM cells. TMZ-induced DNA damage activates NF-κB, resulting in significantly increased E2F6 expression, and elevated E2F6 abrogates the inhibitory effect of TMZ. In summary, our study provides a strategy to identify the genetic determinants of TMZ resistance in GBM using the CRISPR-Cas9-based genome-wide screening system, which may uncover additional promising therapeutic targets for GBM patients. We demonstrated that E2F6 plays a critical role in TMZ resistance in GBM, especially GBM harboring EGFRvIII. E2F6 was induced by EGFRvIII and TMZ through the noncanonical NF-κB pathway. Depletion of E2F6 largely abrogates EGFRvIII-associated TMZ resistance. These findings suggest that inhibition of E2F6 is a plausible strategy for the development of therapies and that assessment of E2F6 expression could be useful as a predictive biomarker.

Pooled Genome-Wide CRISPR Screening: U87 and U87-EGFRvIII cells were treated with a series of concentrations of TMZ (Figure S1a,b, Supporting Information). At a concentration of 375 × 10⁻⁶ m, TMZ inhibited cell growth much more significantly in U87 than in U87-EGFRvIII cells (Figure S1c, Supporting Information); therefore, 375 × 10⁻⁶ m TMZ was chosen for the following experiments. The U87 and U87-EGFRvIII cells were infected with the pooled GeCKOv2 human lentiviral library at a multiplicity of infection (MOI) of 0.3 to ensure that most cells received only one stably integrated RNA guide. On day 2, puromycin was added to select the positively transduced cells. On day 9, the cells were treated with 375 × 10⁻⁶ m TMZ. After 7 and 14 days of TMZ treatment, the cells were harvested, and genomic DNA was isolated for the amplification of sgRNAs, which were then sequenced on an Illumina X instrument. The sequencing data were aligned to the sgRNAs using Bowtie 2 (GSE 112733).

Quantitative RT-PCR: Total RNA was isolated using TRIzol Reagent (Invitrogen, Carlsbad, CA, USA) following the manufacturer's instructions. The cDNA library was constructed by reverse transcription using the GoScript Reverse Transcription System (Promega, Madison, WI, USA) according to the manufacturer's protocol.
The cDNA was amplified using SYBR Green Master Mix (Applied Biosystems/Thermo Fisher, Austin, TX, USA) and normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH). The experiments were performed in triplicate on a CFX96 PCR cycler (Bio-Rad, Hercules, CA, USA). The primer sequences used are listed below:
E2F6 Forward: 5′-TCAGCAAAGTGAAGAATTGC-3′
E2F6 Reverse: 5′-CGAGAGCACTTCATGGATAA-3′
GAPDH Forward: 5′-TTGGTATCGTGGAAGGACTCATG-3′
GAPDH Reverse: 5′-GTTGCTGTAGCCAAATTCGTTGT-3′
Relative gene expression was calculated as the 2^−ΔΔCt fold change.

RNA Sequencing: Total RNA was isolated as described above for RNA-seq analysis. The cDNA library was generated and amplified by PCR. For RNA-seq of EGFRvIII-overexpressing cells, the cDNA library was sequenced using Illumina HiSeq4000, and sequence data were aligned to the hg19 reference using Hisat2 (GSE 112734). For TMZ-treated cells, U87 and U87-EGFRvIII were individually treated with TMZ or DMSO for 7 and 14 days. The cDNA library was constructed, and sequencing was performed by the Beijing Genomics Institute on the BGISEQ-500 platform. Sequence data were aligned to the hg19 human genome reference using Bowtie2 (GSE 112735). The expression levels of the genes were measured as fragments per kilobase of exon model per million mapped reads (FPKM).

Clonogenic Assay: Cells (2 × 10³ per dish) were seeded in 60 mm Petri dishes and cultured for 21 days. Subsequently, the cells were washed twice with PBS, fixed with methanol and acetic acid at a 3:1 ratio, and then stained with 6% v/v glutaraldehyde and 0.5% crystal violet (Solarbio) for 5 min at room temperature. The plates were thoroughly washed with water and air-dried at room temperature, and the number of colonies was counted.

Cell Viability Assay: Cell viability was assessed using the Cell Counting Kit-8 (CCK-8; Dojindo Laboratories, Kumamoto, Japan) according to the manufacturer's instructions. Briefly, the cells (2 × 10³ cells per well) were cultured in 96-well plates in triplicate. After allowing the cells to attach to the bottom of the plate for 12 h, the cells were treated with TMZ or DMSO vehicle for 0-96 h. At the indicated times, 10 µL of the CCK-8 solution was added to each well, and the absorbance of the converted dye was measured using a microplate reader (Synergy2, BioTek, USA).

ChIP and ChIP-qPCR Assays: ChIP was performed using a ChIP Assay Kit (Millipore) according to the manufacturer's instructions. Briefly, after fixation with 1% formaldehyde for 10 min and neutralization with glycine for 5 min at room temperature, 1 × 10⁶ mL⁻¹ cells were washed with cold PBS, scraped, and stored on ice. The cells were then resuspended in ChIP lysis buffer and sonicated 90 times at the high-power setting with a 3 s on/9 s off cycle in ice water with a Sonics Vibra-Cell processor (Sonics & Materials Inc., Newtown, CT, USA) to generate 300 to 1000 bp DNA fragments. In total, 10% of the lysate was used as the DNA "input" control. The remaining samples were immunoprecipitated with antibody-coupled magnetic beads on a rotator at 4 °C overnight. The following antibodies were used: anti-p-NF-κB (p-p65) (CST), anti-H3K4me3 (CST), and normal mouse immunoglobulin G (IgG) (Millipore). The immunoprecipitates were collected using a magnetic rack. The beads were washed, and bound chromatin was eluted in ChIP Elution Buffer. The chromatin products were treated with RNase A (15 min at 37 °C) and proteinase K (2 h at 55 °C).
After DNA purification, the E2F6 promoter binding site was quantified using qPCR and normalized to total chromatin (input). Normal mouse IgG was used as a negative control, and the primers used are listed as below.

Immunohistochemistry: Immunohistochemistry was performed on 5 µm thick formalin-fixed, paraffin-embedded (FFPE) sections; the sections were deparaffinized in xylene and rehydrated in ethanol. Antigen retrieval was performed with a citrate-buffered solution at 95 °C for 15 min, and the sections were stained with antibodies against E2F6 (GeneTex, 1:100), EGFRvIII, and p-NF-κB (CST, 1:100) at 4 °C overnight. After washing with PBS (pH 7.4) three times for 5 min each, the primary antibody was detected using a ZLI-9023 kit (ZSGB-BIO, Beijing, China), and nuclear counterstaining was performed with hematoxylin for 10 min. The slides were analyzed using National Institutes of Health (NIH) ImageJ software. The scores of the E2F6 IHC analyses were determined by combining the proportion of positively stained tumor cells and the intensity of the staining. The positivity of stained tumor cells was graded as follows: I, <5% positive tumor cells; II, 5-20% positive tumor cells; and III, >20% positive tumor cells. The intensity of the staining was recorded on a scale of 1 (weak staining), 2 (moderate staining), and 3 (strong staining). The staining index was calculated as follows: staining index = staining intensity × positivity grade of tumor cells. High E2F6 expression was defined as a staining index score ≥6.

Clinical Specimens: In total, 53 pairs of glioma samples (primary and recurrent tumors) from patients who had undergone standardized TMZ treatment were collected from Tiantan Hospital, and the corresponding clinical data were collected (Table S1, Supporting Information). Formalin-fixed, paraffin-embedded tumor specimens were prepared for the immunohistochemistry study. The samples were processed in accordance with the ethical standards of the 2008 Helsinki Declaration, and all patients provided written consent for the use of their samples for biomedical research. The tumors were histologically subtyped and graded according to the 2007 World Health Organization classification of nervous system tumors. The clinical data for Figure 3c were collected (Table S9, Supporting Information).

Patients and Samples for Microarray Data: HG-U133A, Agilent-4502A microarray, and RNA-seq data from The Cancer Genome Atlas were downloaded from the University of California Santa Cruz (UCSC) Cancer Genome Browser (https://genome-cancer.ucsc.edu). The relevant clinical and molecular information was obtained from the TCGA database (https://tcga-data.nci.nih.gov/docs/publications/lgggbm_2015/). [26] Microarray data for validation were obtained from the US National Cancer Institute Repository for Molecular Brain Neoplasia Data (REMBRANDT: https://gdoc.georgetown.edu/gdoc) cohort (n = 313). [27] RNA-seq data were acquired from the Chinese Glioma Genome Atlas database (http://www.cgcg.org.cn/, n = 325). [28] RNA-seq data with progression-free survival information for patients treated with TMZ after the first surgery were obtained from the CGGA database (n = 134).

Neurosphere-Forming Assay and Limiting Dilution Analysis: The neurosphere-forming assay was performed in low-attachment six-well plates (Corning). U87EGFRvIII and N33EGFRvIII cells were dissociated into single cells and seeded into six-well plates containing 3 mL serum-free medium. Spheres were collected for protein extraction after 3 weeks.
U87EGFRvIII and N33EGFRvIII cells were seeded in 96-well plates containing 100 µL of complete stem-cell medium at different densities: 60 wells were plated per plate, at 100, 50, 20, or 10 cells per well. Each well was examined for the formation of tumor spheres after 3 weeks, and the stem-cell frequency was calculated using extreme limiting dilution analysis (http://bioinf.wehi.edu.au/software/elda/).

Luciferase Reporter Assay: GBM cells were plated in 12-well plates and cotransfected with 100 ng RelA/p65 expression plasmid, 100 ng luciferase reporter vector, and 20 ng pRL-TK plasmid using Lipofectamine 3000 reagent (Invitrogen). At 24 h post-transfection, the cells were cultured in normal complete medium. Luciferase and Renilla activities were detected with a microplate reader.

Immunofluorescence Assay: The immunofluorescence assay was performed according to a previously described protocol. [29] Briefly, cells were seeded on poly-L-lysine-coated glass coverslips and treated with or without 200 × 10⁻⁶ m TMZ for 48 h. Following fixation in ice-cold paraformaldehyde at 4 °C overnight, the cells were permeabilized with 0.3% Triton X-100 for 5 min, blocked with 1% bovine serum albumin for 1 h at room temperature, and then stained with a γ-H2AX-specific primary antibody (Abcam, 1:200) and phalloidin (CST, 1:200) overnight at 4 °C. After washing three times, the primary antibody was detected with Alexa Fluor 488-conjugated secondary antibodies (Molecular Probes/Life Technologies, Eugene, Oregon, USA, 1:100) for 2 h, and nuclei were stained with DAPI for 5 min at room temperature. Images were captured using an Olympus FluoView 1200 confocal microscope (Olympus, Tokyo, Japan). To objectively compare differences in immunofluorescence between treatments, all confocal scanning parameters were kept constant, and images were minimally processed to maintain data integrity.

Intracranial Mouse Model: All experimental protocols were conducted within Tianjin Medical University guidelines for animal research and were approved by the Institutional Animal Care and Use Committee. Five-week-old female nude mice, purchased from the Chinese Academy of Medical Science Cancer Institute, were used to establish intracranial GBM xenografts. U87 cells had previously been infected with an E2F6-overexpressing lentivirus, and U87-EGFRvIII cells had been transduced with an E2F6 siRNA lentivirus. U87, U87+E2F6, U87-EGFRvIII, and U87-EGFRvIII+E2F6 KD cells labeled with luciferase were used for the mouse xenograft model. A total of 5 × 10⁵ cells were intracranially injected into each mouse. The mice were then treated with intraperitoneal injections of TMZ (5 mg kg⁻¹ d⁻¹) for 2 weeks at 5 days on/2 days off per week. Bioluminescence imaging was performed on days 7, 14, 21, and 28 to monitor intracranial tumor growth, and the data were normalized to the bioluminescence detected at the initiation of treatment for each animal. Patient-derived GBM tumor samples were resected, minced, and suspended in ice-cold PBS before centrifugation at 13 000 rpm for 5 min at room temperature. The supernatants were then discarded, and the pellets were dispersed into single cells with trypsin for 5 min. The cells were collected after being transduced with a lentivirus expressing luciferase for 1 h, and a total of 5 × 10⁴ cells were then intracranially injected into each mouse. A week later, the mice were randomly divided into four groups (seven mice per group).
The mice were then intraperitoneally injected with PBS, TMZ (5 mg kg⁻¹ d⁻¹), JSH-23 (6 mg kg⁻¹ d⁻¹), or TMZ combined with JSH-23 for 2 weeks at 5 days on/2 days off per week. Body weight was measured every 2 days until the mice succumbed to the disease; the error bars shown in the figures indicate the standard deviation (SD). At the end of the experiment, Kaplan-Meier survival curves were plotted to show survival. After the whole course of TMZ and JSH-23 treatment, representative mice from each group were euthanized and the xenograft tumors were removed for immunohistochemical analysis. Anti-EGFRvIII (CST), anti-p-NF-κB (p-p65, CST), anti-NF-κB (p65, CST), and anti-E2F6 (GeneTex) antibodies were used to detect the expression of each protein.

Pathway Analysis: Ingenuity Pathway Analysis (Qiagen, Redwood City, www.qiagen.com/ingenuity) was utilized to identify significant biological pathways in both the CRISPR library and the RNA-seq datasets. A set of focus genes was used as the data input for both individual and canonical pathway analyses, with P < 0.01 so that only significant genes were considered for significant pathways. Canonical pathway analysis was performed with the IPA library, which contains the most significant genes in canonical pathways. Networks were then algorithmically generated based on their connectivity. A Fisher's exact test value of P < 0.05 was used to determine the statistical significance of a pathway.

Statistical Analysis: Statistical analysis was performed using SPSS software (version 22.0, SPSS Inc., Chicago, IL, USA). Data are presented as the mean ± SD of three independent experiments. The t-test was used to analyze differences between two independent groups, and one-way analysis of variance (ANOVA) with the post hoc least significant difference (LSD) test was used to analyze differences among more than two groups. Log-rank tests were used to analyze significant differences between Kaplan-Meier survival curves using GraphPad Prism software. Cox regression was used to determine the prognostic value of each variable with respect to PFS in CGGA GBM patients. Experiments were repeated at least three times, and values of P < 0.05 were considered statistically significant.

Supporting Information: Supporting Information is available from the Wiley Online Library or from the author.
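As a small worked illustration of the relative-expression calculation stated in the Quantitative RT-PCR section (the 2^−ΔΔCt method, normalized to GAPDH), the following sketch uses hypothetical Ct values; it is not the authors' analysis script.

```python
# Worked sketch of the 2^(-ΔΔCt) relative-expression calculation used in the
# qRT-PCR analysis above. Ct values are hypothetical; GAPDH is the reference
# gene, as stated in the Methods.
def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    dct_sample = ct_target_sample - ct_ref_sample      # ΔCt, treated sample
    dct_control = ct_target_control - ct_ref_control   # ΔCt, control sample
    ddct = dct_sample - dct_control                    # ΔΔCt
    return 2 ** (-ddct)                                # fold change

# Example: E2F6 vs GAPDH in TMZ-treated vs DMSO-treated cells.
fold = ddct_fold_change(ct_target_sample=24.1, ct_ref_sample=18.0,
                        ct_target_control=26.0, ct_ref_control=18.2)
print(f"E2F6 relative expression: {fold:.2f}-fold")  # about 3.25-fold
```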
Spectroscopic Characterization and Quantitative Estimation of Natural Weathering of Silicates in Sediments of Dikrong River, India

Sediment samples were collected from the Dikrong River at various sites to assess the nature of weathering and to characterize the minerals present. Fourier transform infrared (FTIR) and X-ray fluorescence (XRF) spectroscopic techniques were used to characterize the minerals in the sediment samples. The plagioclase index of alteration (PIA), chemical index of alteration (CIA), and index of compositional variation (ICV) were investigated to evaluate the nature of weathering in the sediments. The results show the presence of quartz, feldspar in different structural forms, and kaolinite as major minerals, with carbonates and organic carbon as minor constituents. The correlations of SiO2 with the major elements confirm the presence of bulk quartz grains and a primary depositional environment. The presence of metamorphosed pyrophanite (MnTiO3) in the adjoining areas is reported. The infrared absorption peaks observed between 1611 and 1622 cm−1 in this study are indicative of a weathered metamorphic origin of the silicate minerals. The index of compositional variation indicates fewer clay minerals and more rock-forming minerals such as plagioclase and alkali feldspar. Overall, the results show that the area is subject to intermediate silicate weathering.

Introduction

Weathering of rocks is one of the most important processes modifying the Earth's surface. Weathering and mineralogical studies of sediments help in understanding the different sediment sources, the environmental parameters influencing the weathering of source rocks, the duration of weathering, transportation and post-depositional processes, and element distribution patterns, and in evaluating the environmental conditions existing in an area. Mineralogical, geochemical, and geophysical studies and the chemical composition of the sediments of many Indian rivers have been reported by numerous authors [1]-[13]. Variations in bulk rock composition or in weatherable rocks can generate significant differences in dissolved chemical components. The dissolved chemical load and sediment flux of the Brahmaputra river reflect significantly higher rates of physical and chemical weathering than those of other large Himalayan catchments [14]-[20]. The total sediment budget of the Brahmaputra particularly depends on the nature of weathering in the adjoining areas and on the erosion of its tributaries. The weathering of silicate minerals exposed on the continents is the largest sink of atmospheric CO2 on geological time scales [21]. In many weathering environments, the chemical weathering of silicate minerals results in the formation of secondary clays. As deposition occurs over time, the deep sediments become a historical record of the temporal trends of chemicals in the environment. Moreover, studies of river sediments, especially of big rivers, and of sedimentary rock geochemistry have made important contributions worldwide to interpreting tectonic settings and estimating the average composition of the upper crust. The heavy metal contamination and silicate mineral distribution due to weathering of the sediments of the Subansiri river, one of the most important tributaries of the Brahmaputra, are discussed elsewhere by Saikia et al. [22] [23]. This study was conducted to make a systematic assessment of the weathering of the sediments of the Dikrong river, one of the major tributaries of the Subansiri River, using spectroscopic methods.
Experimental Methods
The present study covers a total length of 60 km of the Dikrong river, along which 6 locations were selected at a separation of approximately 10 km. The river basin consists of the Bomdila Group (Precambrian), the Gondwanas, the Siwaliks and the Quaternaries. The Gondwanas are thrust over the Siwaliks along the Main Boundary Thrust (MBT). The river flows through the Kimin Formation of the Upper Siwaliks and the Quaternaries, comprising Pleistocene and Recent deposits. In the dry season, sediment samples were hand-dug at <5 m distance from the stream of the Dikrong river and taken at a depth of 100-150 cm. Each sample weighed approximately 2-3 kg. Bulk sediment samples were dried at 40°C for 48 h and stored in black polythene bags. A part of each dried sample was sieved through a 2 mm mesh and then crushed into fine powder in an agate mortar for analysis. The powdered sample was homogenized in spectrophotometric grade KBr (1:20) in an agate mortar and pressed into 3 mm pellets using a hand press. The infrared spectra were acquired using a Perkin-Elmer System 2000 FTIR spectrophotometer with a helium-neon laser as the source reference, at a resolution of 4 cm−1. The spectra were recorded in transmission mode in the region 400-4000 cm−1, at a room temperature of 30°C. The compositions of the samples were determined using a Philips MagiX PRO wavelength-dispersive X-ray spectrometer with a rhodium anode X-ray tube, which can operate at up to 60 kV and up to 125 mA, at a maximum power level of 4 kW. The calibration and reproducibility of this apparatus are discussed elsewhere [24]. The precision and accuracy of the data are ±2%, and the average of three replicates was taken for each determination.

Results and Discussions
Compositions of the major oxides and elements of the samples are provided in Table 1; the observed concentrations are reported in wt%, and the major oxides are dominated by SiO2 (75.44 wt%). From the correlation matrix in Table 2, SiO2 shows negative correlations with all major elements except K2O, with which it has a weak positive correlation (0.26); the negative correlations of SiO2 with the major elements confirm the presence of bulk quartz grains, while the weak positive correlation with K2O indicates an increase in clay content with decreasing quartz. Al2O3 shows moderate positive correlations with CaO (0.41) and Na2O (0.33) and a moderate negative correlation with K2O (0.58). This co-variation indicates that alkali-bearing minerals have a significant influence on the Al distribution and suggests that the bulk of the Al, Ca and Na is primarily contributed by clay minerals [25]. The strong negative correlations of SiO2 with CaO (−0.75) and Al2O3 (−0.78) indicate a primary depositional environment for the carbonates [26] and an increase in the clay fraction, respectively. The manganese titanium oxide mineral pyrophanite (MnTiO3) is usually found in metamorphosed manganese deposits, and the positive correlation of MnO with TiO2 (0.26) may indicate the presence of metamorphosed pyrophanite deposits; the presence of pyrophanite in the areas adjoining the study sites has already been reported by Saikia et al. [23].
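The Pearson correlations discussed above can be reproduced directly from the oxide table. A minimal sketch in Python, where the numbers are placeholders standing in for the Table 1 data rather than the reported values:

import pandas as pd

# Placeholder oxide compositions (wt%) for the six sampling sites;
# the real values are those reported in Table 1.
data = {
    "SiO2":  [75.4, 74.8, 72.1, 76.0, 75.1, 77.2],
    "Al2O3": [10.2, 10.8, 12.3, 9.9, 10.5, 9.1],
    "CaO":   [1.1, 1.3, 1.8, 0.9, 1.2, 0.8],
    "Na2O":  [0.8, 0.9, 1.1, 0.7, 0.8, 0.6],
    "K2O":   [2.1, 2.0, 1.9, 2.3, 2.1, 2.4],
    "MnO":   [0.05, 0.06, 0.07, 0.04, 0.05, 0.04],
    "TiO2":  [0.60, 0.70, 0.80, 0.50, 0.60, 0.50],
}
df = pd.DataFrame(data)

# Pearson correlation matrix, analogous to Table 2; with the placeholder
# numbers SiO2 comes out negatively correlated with CaO and Al2O3.
corr = df.corr(method="pearson")
print(corr.loc["SiO2"])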
To estimate the intensity of weathering recorded in the sediments, we applied the commonly used weathering indices, the plagioclase index of alteration (PIA), the chemical index of alteration (CIA) and the index of compositional variation (ICV), which have been shown to be well applicable to this lithology [27]-[31].

Plagioclase is one of the most abundant minerals in the Earth's crust and is highly vulnerable to alteration and weathering. In basaltic to andesitic rocks, the plagioclase group, ranging from sodium feldspar to calcium feldspar, forms the major constituents, and the end members indicate the parent environment and material. PIA values are generally used to quantify the degree of source rock weathering [28]. The PIA can be calculated, in molecular proportions, using the relation proposed by Fedo et al. (1995) [28] as:

PIA = [(Al2O3 − K2O) / (Al2O3 + CaO* + Na2O − K2O)] × 100

where CaO* is the CaO held in the silicate fraction. The maximum PIA value of 100 indicates completely altered material such as kaolinite or gibbsite, whereas a value of about 50 indicates unweathered plagioclase. The studied samples exhibit PIA values from 73.25 to 80.31 with an average of 76.47, indicative of weathering of the source rocks (Table 3). The most prominent plagioclase weathering is observed in the area adjoining site S-3, while site S-6 shows the least plagioclase weathering among the study sites; the remaining sites suggest moderate plagioclase weathering in the source area.

The chemical index of alteration (CIA) is a constructive technique for evaluating the progressive alteration of plagioclase and K-feldspars to clay minerals. Nesbitt and Young (1982) [27] showed that the degree of weathering can be estimated from the CIA, based on molecular proportions (mass% of the oxide divided by the molar weight of the oxide), given as:

CIA = [Al2O3 / (Al2O3 + CaO* + Na2O + K2O)] × 100

In this relation Al is considered immobile, and changes in CIA reflect the changing proportions of feldspar and Al-rich secondary minerals in the depositional environment. During weathering, feldspars are dissolved by acid hydrolysis and their constituent cations Na, Mg, Ca and K are leached [27] [28]. The more immobile elements such as Si and Al remain stable in the same environment and form oxidic minerals. Low CIA values therefore indicate little chemical alteration, while high values imply intensive alteration and leaching of the mobile cations relative to the residual Al during weathering [27] [28]. CIA values of sediments are consequently an important indicator of the intensity of weathering in the provenance area. CIA values of unweathered igneous rocks and fresh feldspar range from 40 to 50, whereas in intensely weathered residual rocks they approach 100 [27]. The observed CIA values of the studied sediment samples lie between 62.48 and 72.44 with an average of 65.81, representing a low to moderate degree of weathering (Table 3). Site S-3 shows the highest and site S-6 the lowest degree of weathering among the samples.

The composition of the non-quartz components of the samples can be evaluated through the index of compositional variation (ICV) proposed by Cox et al. (1995) as:

ICV = (Fe2O3 + K2O + Na2O + CaO + MgO + MnO + TiO2) / Al2O3

An ICV value of less than 1 indicates the presence of more clay minerals, whereas a value greater than 1 indicates more rock-forming minerals such as plagioclase, alkali feldspar and pyroxenes [29]. The ICV values of the samples varied from 25.32 to 89.36 with an average of 51.32 (Table 3); the average value indicates fewer clay minerals and more rock-forming minerals such as plagioclase and alkali feldspar [29].
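The three indices follow directly from the oxide data. A sketch of the conventional definitions (Nesbitt and Young 1982; Fedo et al. 1995; Cox et al. 1995), where the reported CaO is taken as CaO* (silicate-bound CaO) and the ICV is computed on wt% as in Cox et al.; the normalization behind the Table 3 ICV values may differ:

# Molar masses of the oxides (g/mol), used to form molecular proportions
MOLAR = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def mol(wt, oxide):
    return wt / MOLAR[oxide]

def weathering_indices(wt):
    """wt: dict of oxide wt%; CaO is assumed to be silicate-bound (CaO*)."""
    al = mol(wt["Al2O3"], "Al2O3")
    ca = mol(wt["CaO"], "CaO")
    na = mol(wt["Na2O"], "Na2O")
    k = mol(wt["K2O"], "K2O")
    cia = 100.0 * al / (al + ca + na + k)
    pia = 100.0 * (al - k) / (al + ca + na - k)
    icv = (wt["Fe2O3"] + wt["K2O"] + wt["Na2O"] + wt["CaO"]
           + wt["MgO"] + wt["MnO"] + wt["TiO2"]) / wt["Al2O3"]
    return {"CIA": cia, "PIA": pia, "ICV": icv}

# Placeholder composition, not a row of Table 1
sample = {"SiO2": 75.4, "Al2O3": 10.2, "CaO": 1.1, "Na2O": 0.8,
          "K2O": 2.1, "Fe2O3": 3.2, "MgO": 0.9, "MnO": 0.05, "TiO2": 0.6}
print(weathering_indices(sample))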
The PIA and CIA describe weathering in the source area rather than weathering during long-distance transportation; that is, even if there was intensive weathering in the source area, the sediments may not have travelled far before being deposited [28] [32]. High CIA and PIA values (75 to 100) indicate intensive weathering in the source area with only a small residue of feldspar; values between 60 and 70 indicate moderate weathering, and values below 60 indicate low weathering of the source area [28]-[32]. The studied samples have average CIA and PIA values of 65.81 and 76.46, respectively (Table 3), so moderate to intensive weathering of the source areas may be considered.

The K2O/Al2O3 ratio indicates how much alkali feldspar, relative to plagioclase and clay minerals, was present in the original rock; the ratio is less than 0.3 for clays and 0.3-0.9 for feldspars. For the studied samples this ratio ranges between 0.135 and 0.337 with an average of 0.214 (Table 3), indicating a predominance of clay minerals over alkali-bearing minerals such as K-feldspars and micas [29].

The extent of silicate weathering is shown in Figure 1 in plots of PIA, Al/Na and K/Na against the chemical index of alteration (CIA). These plots discriminate varying degrees of chemical weathering (no, low, intermediate and extreme silicate weathering) after Nesbitt and Young (1982) and Roy et al. (2008) [28]-[32]. The interrelation between the indices reflects the silicate weathering intensity and shows that the studied samples belong to the intermediate silicate weathering field.
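The threshold classification quoted above maps onto a few lines of code; how the 70-75 gap is labelled is an assumption, since the quoted ranges do not cover it:

def weathering_grade(index_value):
    """Classify a CIA or PIA value using the thresholds quoted in the text."""
    if index_value < 60:
        return "low"
    if index_value <= 70:
        return "moderate"
    if index_value >= 75:
        return "intensive"
    return "moderate to intensive"  # 70-75: not covered by the quoted ranges

# Averages reported in Table 3
print(weathering_grade(65.81))   # CIA -> moderate
print(weathering_grade(76.46))   # PIA -> intensive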
The observed infrared frequencies of the studied sediment samples were compared with the literature compilation of Gadsden (1976), and minerals such as quartz, microcline, orthoclase, albite, kaolinite, illite, vermiculite, calcite and aragonite, together with organic compounds, were identified [33]; the observed frequencies are interpreted in Table 4.

The mid-infrared bands of quartz in the range 1200-400 cm−1 fall into four characteristic groups, around 1080-1175, 780-800, 695 and 450-464 cm−1, due to the Si-O asymmetrical stretching vibration (v3), Si-O symmetrical stretching vibration (v1), Si-O symmetrical bending vibration (v2) and Si-O asymmetrical bending vibration (v4), respectively [34] [35]. In the observed infrared spectra of the samples (Figure 2 and Table 4), the absorption bands at 458-462, 512-520, 693-696, 777-781 and 1080-1090 cm−1 therefore indicate the presence of quartz. The bands around 1000 cm−1 arise from silicon-oxygen stretching vibrations, and the band around 780 cm−1 in silicates is affected by tetrahedral-tetrahedral ion vibrations; the tetrahedral dimensions are generally considered to be little affected by pressure and temperature. The absorption band at 695 cm−1 arises from the octahedral site symmetry. The tetrahedral site symmetry is stronger than the octahedral site symmetry, so for any structural change the damage occurs first in the octahedral and then in the tetrahedral sites; the relative intensities of the bands due to the vibrations of these two symmetries therefore provide direct information on crystallinity. It is well known that in the infrared spectra of amorphous silica the symmetrical bending vibration of the Si-O group at 695 cm−1 is absent. The symmetrical bending vibration of the Si-O group at 695 cm−1 is thus a diagnostic peak for the short-range order of quartz, distinguishing crystalline from amorphous material [36]-[39]. This characteristic peak at 695 cm−1 was observed in all the studied samples, suggesting that the quartz in the samples is well crystalline. Absorption peaks at 1615-1620 cm−1 indicate that the quartz in river sediments is weathered from a metamorphic origin [40] [41]; the presence of absorption peaks between 1611 and 1622 cm−1 is thus indicative of the origin of the observed silicate minerals [23].

The mid-infrared bands of alkali feldspars in the range 1200-400 cm−1 are classified as follows: the bands at 1145 and 1110 cm−1 are due to Si-O stretching, the bands at 1051 and 1110 cm−1 are assigned to Al-O stretching, the bands at 768 and 728 cm−1 are assigned to Si-Si and Al-Si stretching respectively, the bands at 648 and 585 cm−1 to O-Si-O and O-Al-O bending, the bands at 538 and 467 cm−1 to coupling between O-Si-O deformation and K-O stretching, and the band at 428 cm−1 to Si-O-Si deformation [39] [42]-[48]. In Table 4, the peak in the range 586-589 cm−1, arising from the O-Si-(Al)-O bending vibration, indicates the presence of microcline. The peaks in the ranges 532-540 cm−1 (Si-O asymmetrical bending) and 644-651 cm−1 (Al-O coordination vibrations) are indicative of the presence of orthoclase. The weak or shoulder bands at 408-422 cm−1 and 721-725 cm−1, corresponding to Si-O-Si deformation and Al-Si stretching respectively, indicate the presence of albite in the observed samples [39] [42]-[48].
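Assigning observed peaks to minerals, as done in Table 4, amounts to matching peak positions against characteristic ranges. A minimal illustration using a few of the ranges quoted above (the ±2 cm−1 tolerance is an assumption):

# Characteristic mid-IR ranges (cm^-1) quoted in the text
ASSIGNMENTS = [
    ((458, 462), "quartz: Si-O asymmetrical bending (v4)"),
    ((693, 696), "quartz: Si-O symmetrical bending (v2), crystallinity diagnostic"),
    ((777, 781), "quartz: Si-O symmetrical stretching (v1)"),
    ((1080, 1090), "quartz: Si-O asymmetrical stretching (v3)"),
    ((586, 589), "microcline: O-Si-(Al)-O bending"),
    ((644, 651), "orthoclase: Al-O coordination"),
    ((721, 725), "albite: Al-Si stretching"),
    ((1611, 1622), "quartz of weathered metamorphic origin"),
]

def assign(peaks, tol=2.0):
    """Match observed peak positions (cm^-1) against the ranges above."""
    return [(p, label) for p in peaks
            for (lo, hi), label in ASSIGNMENTS if lo - tol <= p <= hi + tol]

for peak, label in assign([460.0, 695.0, 1615.0]):
    print(f"{peak:7.1f} cm^-1 -> {label}")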
In the infrared spectra of the samples (Figure 2 and Table 4), the OH vibrations were also examined; their absorption bands appear at different frequencies depending on the cations directly linked to the hydroxyls. This permits the determination of the cation distribution around the hydroxyls and thus allows short-range cation ordering to be assessed [36]. The structure of the kaolin minerals consists of a sheet of corner-sharing tetrahedra, sharing a plane of oxygens and hydroxyls (the inner hydroxyls) with a sheet of edge-sharing octahedra in which every third site is vacant (dioctahedral). The general features of the OH stretching absorption bands are well established for kaolin: the band at around 3624-3627 cm−1 has been ascribed to the inner hydroxyls, while the other three characteristic bands are generally ascribed to vibrations of the external hydroxyls. The studied samples exhibit bands at 3695, 3660-3671, 3644-3646 and 3624-3627 cm−1, close to the characteristic OH stretching bands of kaolinite at 3696, 3669, 3645 and 3620 cm−1 [36]. The absorption bands observed around 3400 cm−1 can be assigned to the OH vibrational mode of the hydroxyl molecule, which is observed in almost all natural hydrous silicates. The bands observed at 1005-1015 cm−1 are close to the Si-O deformation band obtained for theoretical kaolinite, and the band at 1118 cm−1 matches the Si-O stretching normal to the plane found around 1120 cm−1. The bands in the ranges 870-871 cm−1 and 920-926 cm−1 are assigned to (Al-Mg-OH) and (Al-Al-OH) deformations, respectively; the peak at 920 cm−1 is attributed to illite [43] [48]-[50]. All the studied samples exhibit weak absorption bands at 2849-2854 cm−1 and 2922-2988 cm−1 arising from the symmetric and asymmetric stretching of the CH group, which suggests the presence of organic carbon [51] [52].

The bands observed at 1428-1433 cm−1 are due to the (CO3)2− stretching mode, and the peak at 1410 cm−1 arises from the doubly degenerate asymmetric stretching mode (Table 4); these vibrations are generally sensitive to the site symmetry of the carbonate group [43] [53]-[57]. The carbonate structure contains isolated CO3(2−) groups with a doubly degenerate asymmetric stretch (v3) in the region 1508-1555 cm−1 [58] [59]. Further bands at 1792-1793 cm−1 and 1796 cm−1 arise from the C=O stretching mode and a combination mode, respectively, and another combination band is observed at 1821-1825 cm−1; these bands indicate the presence of calcite. The bands at 1455-1459 cm−1 and 2512-2519 cm−1 arise from the C-O bending and O-H stretching modes, respectively, and are characteristic of the calcite and aragonite group minerals [60].
Conclusion
The present study indicates that the principal constituents of the studied sediments are quartz, feldspar (microcline, orthoclase and albite), carbonate (calcite and aragonite) and clay (kaolinite and illite) minerals. Among these, quartz, feldspar and kaolinite are the most abundant and are considered the major constituents of the samples. The infrared absorption peaks between 1611 and 1622 cm−1 indicate a weathered metamorphic origin for the silicate minerals, and the elemental correlations point to metamorphosed pyrophanite (MnTiO3) deposition. The negative correlations of SiO2 with the major elements confirm the presence of bulk quartz grains, and the strong negative correlations of SiO2 with CaO and Al2O3 indicate a primary depositional environment for the carbonates and an increase in the clay fraction, respectively. The interrelation between CIA, PIA, Al/Na and K/Na reflects the silicate weathering intensity, and the studied samples belong to the intermediate silicate weathering regime.

Figure 1. Plots of the plagioclase index of alteration (PIA), Al/Na and K/Na against the chemical index of alteration (CIA).
Figure 2. FTIR absorption spectra of the studied sediment samples in the range 4000-500 cm−1.
Table 1. Major oxide and elemental compositions (wt%) of the samples.
Table 2. Pearson's correlation coefficients between the different oxides of the sediment samples.
Design and Analysis of a Compliant End-Effector for Robotic Polishing Using Flexible Beams

The contact force between the polishing tool and the workpiece is crucial in determining the surface quality in robotic polishing. Different from rigid end-effectors, this paper presents a novel compliant end-effector (CEE) for robotic polishing using flexible beams. The flexibility of the CEE helps to suppress the excessive displacement caused by the inertia of the polishing robot and avoids damaging the polishing tool and workpiece surface. In addition, the contact force can be precisely estimated via the measurement of the CEE's displacement using a capacitive position sensor. The design, modeling and experimental validation of the CEE are presented. First, the analytical model of the CEE is established using the stiffness matrix method; the analytical model is then verified by finite element analysis. Further, a prototype is manufactured, and its characteristics and performance are experimentally tested. The equivalent stiffness is measured to be 0.335 N/µm, and the first natural frequency along the working direction is 42.1 Hz. Finally, the contact force measurement using the CEE is compared with a force sensor. Under the open-loop condition, the resolution of the contact force measurement is found to be 0.025 N, which makes fine tuning of the contact force possible in robotic polishing.

Introduction
Polishing is an extremely important processing technology for improving the surface quality of a workpiece [1]. The control of the contact force between the polishing tool and the workpiece is crucial in polishing. Currently, manual polishing is still very common, especially in the fine polishing of complex surfaces [2]. However, the quality of manual polishing is highly dependent on the skills and experience of the practitioner, resulting in low production efficiency and poor consistency [3]. On the contrary, in various precision industries, robots are more and more frequently used for deburring and polishing workpiece surfaces [4]. Compared with manual polishing, the contact force in robotic polishing can be precisely regulated using force sensing techniques. Therefore, how to precisely sense and maintain the contact force has become an important issue in the robotic polishing process [5,6].

To realize contact force control, there are two kinds of control methods: passive compliance and active compliance. The most straightforward way to control the contact force between the polishing tool and the workpiece surface is to use a linear spring or a compliant mechanism [7], i.e., passive compliance control. Active compliance control realizes closed-loop control of the contact force through a force sensor. In active compliance systems, the most widely used actuators are pneumatic cylinders [8] and voice coil motors [9]. To avoid excessive contact between the polishing tool and the workpiece surface, the polishing end-effector should exhibit certain compliance characteristics. Currently, end-effectors can be broadly divided into mechanical [10], pneumatic [11], electrical [12] and electromagnetic [13] types. With force feedback, active compliance control can obtain a nearly constant contact force by adjusting the displacement between the polishing tool and the workpiece surface [14,15]. Li et al.
proposed a novel macro-mini robot with active force control for robotic polishing [9]. In this design, a macro robot provides the posture control during polishing operations, and a high-bandwidth end-effector, the mini robot, realizes constant contact force control. Tian et al. set up an active and passive compliance polishing model with explicit position-based force control, using a tilting polishing tool with an elastic sponge disk to achieve a relatively constant force control effect [16]. Fan et al. presented a novel smart end-effector for active contact force control, using a gravity-compensated force controller and two novel eddy current dampers for vibration suppression in the robotic polishing of thin-walled blisks [17].

Although the above researchers realized constant force control in polishing, they did not consider the impact on the workpiece and the polishing tool of the excessive displacement caused by the inertia of the robot. Du et al. designed a compliant end-effector in which the deformation of a rubber support was used to improve the compliance of the polishing system; at the same time, an adaptive anti-saturation integral-separated fuzzy proportional-integral (PI) controller was designed to control the contact force measured by a force sensor [18]. Wu et al. proposed a novel force-controlled spherical polishing tool combining self-rotation and co-rotation motion, providing compliance and polishing force in the polishing process [19]. Mohammad et al. presented a novel design of a force-controlled end-effector for automated polishing in which the polishing tool can be extended and retracted by a linear hollow voice coil actuator to provide compliance [1].

Compliant mechanisms have the advantages of no gap and no friction, no need for lubrication and assembly, and integrated design and fabrication [20][21][22]. They have been utilized in a wide range of applications, such as micro-electromechanical systems, scanning probe microscopes, ultra-precision machining, and biological cell manipulation [23][24][25]. Wei et al. proposed a novel end-effector based on a constant force mechanism (CFM), a specially designed compliant mechanism featuring almost zero stiffness within its effective range; this was the first time a CFM was used in robotic polishing. The constant-force motion range acts as a damper to counteract the excessive displacement caused by the inertia, and thus the end-effector regulates the contact force passively [26]. Ding et al. proposed a novel CFM based on the combination of positive- and negative-stiffness mechanisms using folding beam and bi-stable beam mechanisms; without any additional sensors or control algorithms, the proposed CFM produces a travel range with constant force [27]. However, in the above CFMs the value of the constant force is predefined and cannot be adjusted; when the contact force needs to be set to another value, the current CFM has to be replaced by another CFM with a new set of dimensions.
In this paper, a flexible-beam-based compliant end-effector (CEE) for robotic polishing is proposed, which helps to prevent the polishing tool and workpiece surface from being damaged by the excessive displacement caused by inertia when the polishing robot approaches the workpiece surface quickly. When the polishing tool contacts the workpiece surface, the elastic deformation of the CEE acts as a damper. Furthermore, the contact force can be calculated from the displacement and the stiffness coefficient of the CEE without the use of a force sensor. As a capacitive position sensor is used to measure the displacement, the accuracy of the contact force sensing can be guaranteed, which is important for improving the force control accuracy of robotic polishing.

The rest of this paper is arranged as follows: Section 2 gives the mechanical structure design and the analytical stiffness model of the CEE for robotic polishing. Sections 3 and 4 give the finite element analysis (FEA) results and the experimental verification, respectively. Finally, the conclusion is provided in Section 5.

Mechanical Design of the Compliant End-Effector
The schematic diagram of robotic polishing is illustrated in Figure 1, where a robot arm holds and moves the polishing tool across the workpiece surface. A pneumatic cylinder is used to push the polishing tool against the workpiece with a controllable contact force. Different from the conventional active compliance design, an additional CEE is inserted between the slider of the pneumatic cylinder and the polishing tool. During polishing, the pneumatic cylinder moves the polishing tool toward the workpiece. When the polishing tool touches the workpiece surface, the moving plate of the CEE can move back and forth, following the variation of the contact force, as shown in the inset of Figure 1. The elastic deformation of the CEE acts as a damper, which helps to stabilize the contact force.
Considering the possibility of the CEE's failure, and to make it compatible with different robotic polishing tasks, a modular design is adopted such that the CEE can be easily replaced, as shown in Figure 1. In addition, due to the harsh polishing environment and the long working time, it is necessary to improve the adaptability of the CEE; therefore, very complex and tiny structures are not pursued in its mechanical design. Considering the eccentric force generated by the rotation of the motor and the transverse force generated during polishing, six groups of flexible beams with a completely symmetrical distribution are adopted in the design of the CEE, as shown in the inset of Figure 1. Flexible beams are adopted in constructing the CEE because of their lower stress concentration and ease of manufacture compared with other types of flexure hinges. To further reduce the stress concentration, fillets are adopted at both ends of the flexible beams.

The proposed CEE also features a precise contact force sensing capability. Within the elastic deformation range, the CEE can be treated as a linear spring with constant stiffness. In applications, the stiffness of the CEE can be calibrated in advance, and thus the contact force can be obtained via the displacement measurement of the CEE without the use of a force sensor. In the prototype, a high-precision capacitive displacement sensor is used for displacement feedback, so precise contact force sensing is also available.
The key design parameters of the CEE are the length l, the width b, the thickness t, and the rotating angle θ, as shown in Figure 2. As the designed CEE is used to obtain the contact force during robotic polishing, the design target for the resolution of the contact force is set below 0.1 N. Moreover, the deformation range should be as large as possible, because the CEE also acts as a damper when the polishing tool contacts the workpiece surface. In addition, the structure of this design is simple and does not require a complicated optimization program, so the parameters are set manually. The main parameters of the flexible beam used are shown in Table 1.

Analytical Modeling of the Compliant End-Effector
When the polishing tool touches the workpiece surface, the CEE generates elastic deformations, which depend on its stiffness and on the allowable stress. To obtain the displacement of the moving plate, a capacitive displacement sensor is placed behind the moving plate. The stiffness matrix method [28] is used to obtain the stiffness of the CEE, based on the equation of motion

M q̈ + K q = F

where the mass matrix M = diag[mx my Jz] corresponds to the inertial mass and moment of inertia of the moving plate in its generalized coordinates q; mx and my represent the mass of the moving plate, and Jz indicates the moment of inertia of the moving plate about the Z axis. The stiffness matrix K is assembled from the six beams as

K = Σi Ti K̃ Ti^T, i = 1, ..., 6

where K̃ is the stiffness matrix of a single flexible beam and Ti is the transformation matrix of the ith beam. As presented in Equation (3), [dxi 0 dzi]^T represents the distance from O-XYZ to Oi-XiYiZi, and θi is the rotating angle of Oi-XiYiZi with respect to O-XYZ, as displayed in Figure 3. The values of all the above parameters are shown in Table 2.
The flexible beam used in this CEE is a straight beam. Its flexibility matrix C̃, expressed in its local coordinate frame, is determined by the beam dimensions and by E and G, the elastic modulus and shear modulus of the material, respectively; the stiffness matrix K̃ of the flexible beam is its inverse, K̃ = C̃⁻¹. The generalized force F = [Fx Fy Mz]^T is applied on the moving plate. In terms of the static analysis, the relationship between the static force and the displacement can be expressed as

q = K⁻¹ F

According to vibration theory, the natural frequencies f of the CEE can be obtained by solving the characteristic equation |λI − M⁻¹K| = 0, with f = √λ / (2π). When the contact force Fy is generated on the polishing tool, the output motion of the CEE can be calculated by Equation (6). Based on the above method, the theoretical displacement u and stiffness k of the CEE along its working direction follow as u = Fy/k, where k is the entry of K associated with the working direction.
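The modeling chain above (beam compliance, local stiffness, frame transformation, assembly, eigenproblem) can be sketched numerically. The beam layout, plate mass and inertia below are placeholders rather than the values of Tables 1 and 2, and a simple planar Euler-Bernoulli compliance is assumed for each beam:

import numpy as np

E = 70e9                           # elastic modulus of Al 7075 (Pa)
l, b, t = 30e-3, 10e-3, 0.8e-3     # beam length/width/thickness (placeholders)
A, I = b * t, b * t**3 / 12.0

# Planar compliance of one fixed-free beam (axial x, transverse y, rotation z)
C = np.array([[l/(E*A), 0.0, 0.0],
              [0.0, l**3/(3*E*I), l**2/(2*E*I)],
              [0.0, l**2/(2*E*I), l/(E*I)]])
K_beam = np.linalg.inv(C)          # K~ = C~^-1

def transform(theta, dx, dy):
    """Map a planar wrench from the beam frame to the plate frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [dx*s - dy*c, dx*c + dy*s, 1.0]])

# Six identical beams at 60-degree spacing on a radius r (placeholder layout)
r = 25e-3
K = np.zeros((3, 3))
for i in range(6):
    th = i * np.pi / 3
    T = transform(th, r*np.cos(th), r*np.sin(th))
    K += T @ K_beam @ T.T          # K = sum_i Ti K~ Ti^T

m, Jz = 0.05, 2e-5                 # plate mass (kg) and inertia (placeholders)
M = np.diag([m, m, Jz])
lam = np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)
print("natural frequencies (Hz):", np.sqrt(lam) / (2*np.pi))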
Finite Element Analysis and Verification
To verify the static and dynamic performance of the CEE, the 3D model was constructed and imported into ANSYS Workbench for the FEA. The selected material is aluminum alloy 7075, whose elastic modulus E and Poisson's ratio ν are 70 GPa and 0.3, respectively.

Static Analysis Validation
The maximum force output of the pneumatic cylinder is 350 N. Therefore, to determine the maximum deformation of the proposed CEE, a force of 350 N is applied along its working direction. In this case, the deformation along the working direction reaches 1.14 mm, as shown in Figure 4a; through calculation, the stiffness k of the proposed CEE is 0.307 N/µm. Moreover, a static stress analysis was conducted to verify that the safety factor is qualified, as shown in Figure 4b. The yield strength of Al 7075 is 503 MPa, and the maximum stress is 239.38 MPa; hence, the safety factor is calculated as 2.10 (>1). Depending on the polishing requirements, the contact force generally varies within 10-40 N [1,16]. In this force range, the deformation and stress of the CEE are 32.48-129.90 µm and 6.84-27.36 MPa, respectively, within the allowable elastic deformation range of the material.

Dynamic Analysis Validation
When the load on the CEE is 0 kg, the first natural frequency along its working direction is 501.93 Hz, as shown in Figure 5a, which is consistent with the analytical result (498.84 Hz). The second and third mode shapes are rotations in the vertical and horizontal directions, respectively, as shown in Figure 5b,c. In applications, the polishing tool and the connectors need to be installed on the moving plate of the CEE, i.e., they are external loads on the CEE, and the dynamic performance of the CEE is influenced accordingly. Therefore, the corresponding theoretical calculations and FEA were also carried out to evaluate the influence of the external loads. For example, the load of the connecting flange installed on the CEE's moving plate is 13.82 N, and the loads of the plastic and aluminum polishing tools (model: YQ060501, Shenzhen Han's Robot Co., Ltd., Shenzhen, China) are 6.86 N and 9.02 N, respectively. When these loads are installed on the CEE's moving plate, the first natural frequency decreases. The variation of the first natural frequency with the external load is listed in Table 3.
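For a rough sense of how the payload lowers the first natural frequency, a single-degree-of-freedom estimate f1 = (1/2π)√(k/m) along the working direction can be used; the base mass here is back-computed from the no-load FEA result and is therefore only an assumption:

import math

k = 0.307e6   # FEA stiffness along the working direction, N/m (0.307 N/um)
m0 = 0.031    # assumed moving-plate mass (kg), back-computed for ~500 Hz

def f1(extra_load_newton):
    """First natural frequency with an extra payload on the moving plate."""
    m = m0 + extra_load_newton / 9.81
    return math.sqrt(k / m) / (2 * math.pi)

for load in (0.0, 6.86, 9.02, 13.82):  # no load, plastic tool, Al tool, flange
    print(f"load = {load:5.2f} N -> f1 ~ {f1(load):6.1f} Hz")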
To further verify the performance of the proposed CEE for robotic polishing, a prototype was fabricated from Al 7075 as a monolithic piece. In the fabricated prototype, the width b and the thickness t of the flexible beams deviate slightly from the nominal values, with an increment of 0.1 mm; this magnitude of manufacturing error cannot be ignored, so the measured dimensions of the CEE are adopted in the subsequent calculations. A force sensor (model: ZNLBS-V1-30 kg, Bengbu Chino Sensor Co., Ltd., Bengbu, China) is used to calibrate the contact force, and a capacitive displacement sensor (model: NS-CDCS10L-400 with a resolution of 35 nm, Sanying Motion Control Instruments, Ltd., Tianjin, China) is used to measure the displacement. All force and displacement signals are acquired by a real-time target machine (model: Performance with an IO133 data acquisition card, Speedgoat). The overall experimental setup is shown in Figure 6.

Test of Equivalent Stiffness
First, the equivalent stiffness of the CEE is calibrated. A real-time control system is built on the real-time target machine, and a stair signal with a step of 180 µm is applied to the pneumatic cylinder under the open-loop condition. Under this excitation, the pneumatic cylinder pushes the CEE against the force sensor; the reaction force is measured by the force sensor, and the displacement of the moving plate of the CEE is measured by the capacitive displacement sensor. The measured force-displacement relationship is provided in Figure 7. Based on the measurements, the equivalent stiffness k of the CEE along its working direction is calculated as the slope of the measured force-displacement data. After this treatment, the equivalent stiffness k of the CEE is found to be 0.335 N/µm. The equivalent stiffness values obtained from the analytical model and the FEA results are 0.312 N/µm and 0.307 N/µm, respectively, and are shown in Figure 7 for comparison. Both the analytical model and the FEA slightly underestimate the equivalent stiffness, with errors of 6.87% and 8.36%, respectively.
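The slope estimate amounts to a least-squares line through the logged stair-test data. A sketch with placeholder samples consistent with k ≈ 0.335 N/µm:

import numpy as np

# Placeholder stair-test log: displacement (um) and reaction force (N)
u = np.array([0.0, 180.0, 360.0, 540.0, 720.0, 900.0])
F = np.array([0.0, 60.3, 120.6, 180.9, 241.2, 301.5])

# Least-squares slope through the data gives k in N/um
k, intercept = np.polyfit(u, F, 1)
print(f"equivalent stiffness k = {k:.3f} N/um")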
Test of Contact Force Sensing
To verify the consistency between the contact force estimated by the CEE and that measured by the force sensor, a triangular wave is used to drive the cylinder back and forth while the force sensor signal and the deformation displacement of the CEE are recorded in real time. The comparison results are shown in Figure 8a. The contact force estimated by the CEE is consistent with the force sensor reading, which further verifies that the contact force can be calculated from the displacement and the stiffness coefficient of the CEE without the use of a force sensor. Within the contact force range shown in Figure 8a, the maximum deformation and maximum stress of the CEE are 65.67 µm and 15.05 MPa, respectively, within the allowable elastic deformation range of the material.

In addition, a stair signal with a height of 0.025 N is used to drive the cylinder for the force resolution test. As shown in Figure 8b, the noise level of the force sensor is on the order of 0.2 N, whereas the noise level of the CEE measurement is on the order of 0.04 N. The contact force resolution obtained by using the CEE instead of the force sensor is 0.025 N, which is important for improving the force control of the robotic polishing process. All the above experimental tests were carried out under the open-loop condition.

Test of Natural Frequency
To obtain the natural frequency of the CEE for robotic polishing, a modal hammer is used to apply an impact load along its working direction to excite the CEE, while the capacitive displacement sensor measures its displacement. The recorded time-domain signal is shown in Figure 9a. After fast Fourier transform processing, the natural frequency of the signal is 42.1 Hz, as shown in Figure 9b. The measured first natural frequency is lower than the FEA results, which might result from the compliance of the pneumatic cylinder and the additional loads and accessories installed on the CEE during the test, such as the end cap.
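The FFT processing of the hammer test can be sketched as follows; the sampling rate and the synthetic decaying sinusoid merely stand in for the logged displacement signal:

import numpy as np

fs = 1000.0                                # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1/fs)
x = np.exp(-3*t) * np.sin(2*np.pi*42.1*t)  # stand-in for the impact response

X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1/fs)
print(f"dominant frequency: {freqs[np.argmax(X[1:]) + 1]:.1f} Hz")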
Conclusions
This paper presents a novel design of a CEE for robotic polishing using flexible beams. Compared with a conventional rigid end-effector, the elastic deformation of the CEE acts as a damper when the polishing tool installed on the robot approaches the workpiece surface quickly. This design helps to solve the problem of excessive displacement caused by the inertia of the polishing robot and to avoid damaging the polishing tool and the workpiece surface.

Figure 1. Schematic diagram of the proposed CEE installed on an industrial robot arm.
Figure 2. Design parameters of the CEE: (a) overall structure; (b) the parameters of the flexible beam.
Figure 3. The moving plate connected with the ith flexible beam.
Figure 5. The first three mode shapes of the CEE without load: (a) the first mode; (b) the second mode; and (c) the third mode.
Figure 6. Experimental setup of the proposed CEE for robotic polishing.
Figure 7. Comparative analysis of the stiffness k of the CEE.
Figure 8. Comparison of the contact force obtained by the force sensor and the CEE: (a) consistency of measurement results; (b) resolution of contact force.
Figure 9. Natural frequency test of the CEE: (a) time-domain signal; (b) frequency-domain signal.
Table 1. Key parameters of the flexible beam.
Table 2. Detailed parameters of matrix Ti.
Table 3. Analytical model and FEA results of the first natural frequency under different loads.
Experimental Evaluation of ZigBee-Based Wireless Networks in Indoor Environments

ZigBee is an emerging standard specifically designed for wireless personal area networks (WPANs) with a focus on enabling wireless sensor networks (WSNs). It attempts to provide low-data-rate, low-power, and low-cost wireless networking for device-level communication. In this paper, we have established a realistic indoor environment for the performance evaluation of a 51-node ZigBee wireless network. Several sets of practical experiments have been conducted to study its various features, including the (1) node connectivity, (2) packet loss rate, and (3) transmission throughput. The results show that our developed ZigBee platforms work well under multihop transmission over an extended period of time.

Introduction
For the past few decades, wireless communication has been a rapidly growing technology providing the flexibility and mobility to access networks and services without cables [1]. Obviously, removing the cable restriction is one clear benefit of wireless over cabled devices; other benefits include dynamic network formation, easy deployment, and, in some cases, low cost. In general, wireless networking has followed a similar trend due to the increasing exchange of data in services such as the Internet, email, and data file transfer. The capabilities needed to deliver such services are characterized by an increasing need for data throughput. However, applications in fields such as industrial [2], vehicular, and home sensing have more relaxed throughput requirements. Moreover, these applications require lower power consumption and low-complexity wireless links at a low cost (relative to the device cost). ZigBee [3][4][5] over IEEE 802.15.4 [6] is the technology that addresses these types of requirements.

Based on the IEEE 802.15.4 standard, ZigBee is a global specification created by a multivendor consortium called the ZigBee Alliance. Whereas 802.15.4 defines the physical (PHY) layer and the medium access control (MAC) sublayer, ZigBee defines the network and application layers, the application framework, application profiles, and the security mechanism. ZigBee provides users in specific applications with a simple, low-cost global network that supports a large number of nodes with an extremely low power drain on the battery. The ZigBee stack architecture, as shown in Figure 1, is based on the standard open systems interconnection (OSI) seven-layer model but defines only those layers relevant to achieving functionality in the intended market space.

Wireless links under ZigBee can operate in three license-free industrial, scientific and medical (ISM) frequency bands: 868 MHz in Europe, 915 MHz in the USA and Australia, and 2.4 GHz in most jurisdictions worldwide. Data transmission rates vary from 20 to 250 kbps. A total of 27 channels are allocated in 802.15.4, including 1 channel in the 868 MHz band, 10 channels in the 915 MHz band, and 16 channels in the 2.4 GHz band. The ZigBee network layer supports star, tree, and mesh topologies. Each network must have one ZigBee coordinator (ZC) to create and control the network. In the tree and mesh topologies, ZigBee routers (ZRs) are used to extend the communication range at the network level. Besides the ZC and ZR, the third ZigBee device type is the end device: for sensing applications, the sensors are usually programmed as ZigBee end devices (ZEDs), which contain just enough functionality to communicate with their parent node (either a ZC or a ZR) and cannot relay data from other devices.
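The channel allocation quoted above follows a simple arithmetic rule in IEEE 802.15.4-2003; a small helper for the channel center frequencies:

def channel_center_mhz(ch):
    """Center frequency (MHz) of an IEEE 802.15.4-2003 channel."""
    if ch == 0:                    # 868 MHz band, 1 channel
        return 868.3
    if 1 <= ch <= 10:              # 915 MHz band, 10 channels
        return 906.0 + 2.0 * (ch - 1)
    if 11 <= ch <= 26:             # 2.4 GHz band, 16 channels
        return 2405.0 + 5.0 * (ch - 11)
    raise ValueError("invalid channel number")

print([channel_center_mhz(c) for c in (0, 1, 11, 26)])
# [868.3, 906.0, 2405.0, 2480.0]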
Up to now, many performance analyses of wireless sensor networks (WSNs) have been proposed [7][8][9][10]. Kohvakka et al. [7] studied the performance of ZigBee-based sensor networks with a cluster-tree topology, and the study in [9] analyzed the reliability of IEEE 802.15.4 cluster trees. For beacon-enabled IEEE 802.15.4 WPANs, Chen et al. [10] evaluated the performance for industrial monitoring applications using the OMNeT++ tool. The evaluation of IEEE 802.15.4 for wireless medical applications was performed in [8] via systematic simulations. However, most of this previous work is based on simulation rather than practical experiments. Lee [11] established a five-node sensor network to experimentally evaluate IEEE 802.15.4 performance with respect to various features. In [12,13], the performance of realistic IEEE 802.15.4 deployments was evaluated and compared with existing simulation models in NS-2; the coexistence of IEEE 802.15.4 with IEEE 802.11 and Bluetooth, which operate in the same 2.4 GHz ISM band, was also analyzed. Cano-García and Casilari [14] presented an empirical study of the effects of channel occupation on the consumption of actual ZigBee/IEEE 802.15.4 motes. Overall, the WSN literature provides few experimental analyses of ZigBee-based sensor networks.

This paper evaluates the performance of a ZigBee-based wireless network under multihop transmission over an extended period of time. The ITRI ZBnode [15,16], developed by the Industrial Technology Research Institute (ITRI), is applied and deployed in an indoor environment. In total, 51 sensor nodes are deployed in a hallway and a room; one of the nodes is a coordinator, and all the others are sensors with routing capability. Each sensor regularly transmits packets to the coordinator during a long-term operation period. The practical performance of this wireless sensor network is evaluated in terms of (1) node connectivity, (2) packet loss rate, and (3) transmission throughput. This paper is an extension of our previous work [17] with newly added experiments on node connectivity.

The rest of the paper is organized as follows: Section 2 introduces the developed ZigBee devices, and the experimental configurations are described in Section 3. The experimental results are shown in Section 4, and finally Section 5 gives the conclusions.

Proposed Platform: ITRI ZBnode
Based on an IEEE 802.15.4 radio module and an ARM processor, the developed ZBnode [15,16] is an autonomous, lightweight wireless communication and computing platform. The first version of the ZBnode was designed and implemented as part of an ITRI project to develop a small device with sensing, computing, and networking (SCAN) capabilities; since this first version used a powerful 32-bit processor, it was named SCAN-ZB32. Since individual sensor configurations are required depending on the application, the ZB32 device has no integrated sensors for general-purpose applications. In the development stage, the ZB32 can be used with various serial devices through predetermined sockets, including sensors, actuators, power chargers, RFID readers, and even user interface components.

ZigBee Hardware Design. The ZBnode hardware adopts a 32-bit RISC processor featuring an ARM 720T CPU core running at up to 70 MHz, as shown in Figure 2.
Several integrated peripherals, including timers, counters, a 10-bit AD converter, USB, UARTs, LCD controllers, infrared communications, controller area network (CAN) interfaces, pulse-width modulation (PWM), and JTAG for debugging, are built around the processor. The external memory comprises an in-system programmable Flash ROM (16 MB) and an SDRAM (16 MB). To implement a visual user application interface, the ZB32 provides four buttons and five LEDs. An IEEE 802.15.4-compliant RF transceiver, the Chipcon CC2420, is connected to an on-board 2.4 GHz chip antenna and to one of the serial peripheral interfaces of the processor; the RF module can also be switched to an external antenna via a standard SMA (subminiature type A) connector for better performance. The sensing module, which supplies several sensors for detecting environmental conditions (such as temperature, humidity, and light), is shown in Figure 3. For general-purpose development, a prototype area is provided for other extended sensing functions, and a sounder that can be used for alarm notification is also available. As for the energy source, the ZBnode can be powered by an AC adapter or an Li-ion battery with a DC input range from 3.5 to 5.5 V.

Power Management Mechanism. In indoor environmental monitoring applications, for most of the time only the sensing module is active while the computing and communication modules are powered down. As shown in Figure 4, the ZB32 uses a separate power-gating circuit to perform the power management. The power streams of the computing module (ARM processor, ROM, and SDRAM), the communication module (IEEE 802.15.4 RF), and the sensing module (temperature, humidity, and light sensors) can be completely disconnected from the energy source when not in use. A complex programmable logic device (CPLD) is adopted to control the power streams via three power switches and also to handle power management requests in the developed ZBnode platform. The power management requests can be issued from the computing, communication, or sensing module; for example, as soon as the sensing module detects an abnormal situation, it signals the CPLD to wake up the computing module to process the event.

ZigBee Protocol and Software Stack. In our design, the SCAN-ZB32 platform runs an ARM Linux kernel 2.4, and the system software is a Linux-based framework written in ANSI C. For general-purpose development, an open ARM-Linux compiler suite is applied. Within this framework, applications are typically partitioned into (1) a ZigBee protocol layer for communicating with the IEEE 802.15.4 front end through commands/events and keeping track of point-to-point connections with individual state machines, (2) a command-line terminal for debugging and control, and (3) a user-defined application object. The whole application is defined at compile time and then programmed into the flash memory using an Xmodem terminal via the RS-232 port, provided the Linux kernel is loaded in advance; an in-system programmer through the JTAG interface is also provided to download the application program. Furthermore, the developed protocol stack is embedded into the driver, with convenient APIs available to application developers.

Experimental Configurations
3.1. Performance Metrics. In this paper, the node connectivity, packet loss rate, and transmission throughput are used as the performance metrics.
(i) Node Connectivity. In our work, if a node can send packets to the coordinator within a specified period of time, the node is defined as "connected"; otherwise, the node is "disconnected." The more nodes in a sensor network are connected, the more stable the network is.

(ii) Packet Loss Rate. The packet loss rate of a node is defined as the number of packets lost at the coordinator divided by the number of packets transmitted by the node. The lower the packet loss rate, the better the network performance. Note that the ZigBee standard provides an optional acknowledged service in the application support sublayer (APS) for reliable transmission.

(iii) Transmission Throughput. For simplicity, the transmission throughput is measured in our experiments under a two-hop communication (from a sensor to the coordinator). Within a given period, more packets received by the coordinator corresponds to a higher transmission throughput.

Node Deployment. The experimental network structure is presented in Figure 5. The coordinator stores the data received from each node in a MySQL database on a gateway server via an RS-232 port. Meanwhile, a sniffer near the coordinator (within one hop) shows and records the over-the-air data.

Figure 6 shows the deployment layout of the sensor nodes. There are 51 sensor nodes in the network in total, of which one is the coordinator (node C) and the others are sensors with routing capability (nodes 1-50). Nodes 3-29 are attached to the ceiling of the hallway; the coordinator and nodes 1, 2, and 30-50 are in the same room due to the security policy. In our experiments, each node is equipped with an AC power source to permit long-term operation. Figure 7 shows several sensor nodes attached to the ceiling of the hallway and rooms. Figure 8 shows the node status and network topology monitored over the internet using a web browser. Each circle represents a sensor, and the coordinator is the root node. A cluster-tree network with multihop transmission was successfully formed.

Transmitted Packets. Figure 9 shows the content of the sniffed data packets during transmission. The APS overhead in the packets used for analyzing network performance is 12 bytes (8 bytes for the destination address and 4 bytes for the serial number), and the APS payload in a packet is 64 bytes. With the overhead of the other protocol layers (network, MAC, and physical), the total size of a packet is 91 bytes.

Node Connectivity. In this experiment, the transmission rate is 1 packet per 10 seconds for each node, and the operation time is seven days. As defined above, a node is connected if it can send packets to the coordinator within a specified period of time. Figure 10 shows the average number of disconnections (over 50 nodes) for specified periods varying from 20 seconds to 1 minute. As expected, with a period as tight as 20 seconds, the number of disconnections is highest. Moreover, the fact that the average number of disconnections at 30 seconds is significantly lower than at 20 seconds shows that sensors with routing functions are able to retransmit packets via the APS ACK mechanism. Figure 11 shows the number of disconnections for each node with varied time intervals; when the interval changes from 20 seconds to 30 seconds, the number of disconnections between the coordinator and each sensor node decreases significantly.
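The connectivity metric defined above can be computed directly from the coordinator's packet log. The following Python sketch is our own illustration rather than ITRI's actual tooling; the log format and function name are hypothetical. It counts, for each node, how many times the gap between consecutive received packets exceeds a given window:

from collections import defaultdict

def count_disconnections(packet_log, window_s):
    """Count disconnection events per node.

    packet_log: iterable of (node_id, timestamp_s) tuples, as recorded
    by the coordinator. A node is counted as 'disconnected' whenever no
    packet from it arrives within window_s seconds of its previous one.
    """
    last_seen = {}
    disconnections = defaultdict(int)
    for node_id, ts in sorted(packet_log, key=lambda p: p[1]):
        if node_id in last_seen and ts - last_seen[node_id] > window_s:
            disconnections[node_id] += 1
        last_seen[node_id] = ts
    return dict(disconnections)

# With one packet every 10 s, a 20 s window flags any single lost
# packet, while a 30 s window tolerates one loss (for example, a loss
# recovered by an APS-level retransmission).
log = [(1, 0), (1, 10), (1, 31), (1, 40)]      # node 1, one late packet
print(count_disconnections(log, window_s=20))  # {1: 1}
print(count_disconnections(log, window_s=30))  # {}

This simple dependence on the window length is consistent with the sharp drop in disconnections observed when the specified period is relaxed from 20 to 30 seconds.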
Packet Loss Rate. The average packet loss rate (over 50 nodes) with varied operation times is shown in Figure 12. In this experiment, the transmission rate is 1 packet per 10 minutes for each node, with APS ACK enabled. The packet loss rate is highest for the 7-day operation time; overall, the shorter the operation time, the lower the packet loss rate. The exception is that the packet loss rate for the 1-hour operation time is higher than for the 12-hour one: only a small number of packets are transmitted in 1 hour, so losing a single packet raises the packet loss rate considerably.

Figure 13 shows the average packet loss rate with and without APS ACK, for an operation time of 1 hour. The maximum loss rate is 0.147% with APS ACK and 3.86% without. This result shows that the packet loss rate can be reduced to almost zero by using the APS ACK mechanism.

Figure 14 shows the packet loss rate of each node with APS ACK over a 7-day operation. The experimental results show that the more hops to the coordinator, the higher the packet loss rate: the coordinator receives fewer packets from distant nodes at a similar transmission rate.

Throughput. In this experiment, packets are transmitted with and without APS ACK; the empirical results are shown in Table 1. With APS ACK, the coordinator received a total of 1300 packets in 27 seconds, indicating a transmission rate of 24.65 kbps. The results without APS ACK are likewise shown in Table 1.

Table 1: Empirical results of transmission throughput.

Conclusion

This paper presents the practical performance of a ZigBee wireless network with multihop transmission in an indoor environment over a long-term operation time. In total, 51 sensor nodes were deployed in a hallway and a room, and several sets of practical experiments were conducted to study (1) node connectivity, (2) packet loss rate, and (3) transmission throughput. The results show that the developed ZigBee platforms work well under multihop transmission over an extended period of time.

The overall goal of this paper is to contribute, through realistic measurements, toward the dimensioning of sensor networks for future applications using ZigBee/IEEE 802.15.4 technology. The developed ITRI ZBnodes were employed for realistic experiments. During our experiments, we found that the achieved transmission rate is around 25 kbps. Note that this is substantially below the nominal value of 250 kbps; likely reasons include the transmission overhead (such as frame headers), the CSMA-CA random backoffs, the presence of interframe spacing, and the concurrent transmission of multiple nodes. Note also that the CSMA-CA mechanism in 802.15.4 automatically backs off before an imminent transmission; that is, each data and command frame transfer incurs at least one backoff, which further reduces performance.

Future work includes measurements on different deployments to evaluate their impact on the ZB32 platform's performance, including the evaluation of network parameters under a mesh (data flow) topology [18] and the power consumption under node mobility conditions.

Figure 3: Sensing module for ITRI SCAN-ZB32. A prototype area is provided for other extended sensing functions.
Figure 7: Sensor nodes attached to the ceiling of the hallway and room.
Figure 8: Node status and network topology monitored via internet.
Figure 9: Sniffed packet: the packet size used for performance analysis is 91 bytes.
Figure 10: Average number of disconnections with varied time intervals.
Figure 11: Number of disconnections for each node with varied time interval.
Figure 14: Packet loss rate of each node for seven-day transmission.
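Returning to the throughput results in Table 1, the reported rate can be reproduced from the packet counts if, as we assume here (the paper does not state which bytes are counted), throughput is measured over the 64-byte APS payload only:

# Sanity check of the APS-ACK throughput reported above.
packets = 1300        # packets received by the coordinator
payload_bytes = 64    # APS payload per packet
duration_s = 27       # measurement period in seconds

payload_kbps = packets * payload_bytes * 8 / duration_s / 1000
print(f"payload throughput: {payload_kbps:.2f} kbps")   # ~24.65 kbps

# Counting full 91-byte frames instead gives ~35 kbps, still far below
# the nominal 250 kbps, consistent with the overheads discussed in the
# conclusion (headers, CSMA-CA backoffs, interframe spacing).
frame_kbps = packets * 91 * 8 / duration_s / 1000
print(f"frame throughput:   {frame_kbps:.2f} kbps")     # ~35.05 kbps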
Neurophysiological and Psychological Consequences of Social Exclusion: The Effects of Cueing In-Group and Out-Group Status

Abstract

Exclusion by outgroups is often attributed to external factors such as prejudice. Recently, event-related potential studies have demonstrated that subtle cues influence expectations of exclusion, altering the P3b response to inclusion or exclusion. We investigated whether a visual difference between participants and interaction partners could activate expectations of exclusion, indexed by P3b activity, and whether this difference would influence psychological responses to inclusion and exclusion. Participants played a ball-tossing game with two computer-controlled coplayers who were believed to be real. One period involved fair-play inclusion while the other involved partial exclusion. Avatars represented participants, with their color matching participant skin tone, and either matching or differing from the color of coplayer avatars. This created the impression that the participant was an ingroup or outgroup member. While ingroup members elicited enhanced P3b activation when receiving the ball during exclusion, outgroup members showed this pattern for both inclusion and exclusion, suggesting that they formed robust a priori expectations of exclusion. Self-reports indicated that while these expectations were psychologically protective during exclusion, they were detrimental during inclusion. Ultimately, this study reveals that expectations of exclusion can be formed purely on the basis of visual group differences, regardless of the actual minority or majority status of individuals.

Introduction

Social exclusion has affected virtually every person at least once in their life. Extensive research has contributed to our understanding of the effects of social exclusion on individuals, and has found that these effects are intense and often long-lasting (MacDonald and Leary 2005). The self-reported consequences of social exclusion suggest that exclusion threatens at least four fundamental needs: sense of belonging, self-esteem, sense of control, and sense of meaningful existence (Zadro et al. 2004; Williams 2009). In addition, research has identified a host of negative outcomes that continue after social exclusion, including cognitive impairments (Themanson et al. 2014), behavioral dysfunction (Twenge et al. 2002; Twenge and Baumeister 2004), and motivational changes (Maner et al. 2007; Park and Baumeister 2015), among others. While the experience of social exclusion will at some point affect all of us, it is more prevalent for some individuals than others. Although social exclusion may occur for many reasons, demographic factors are often involved. In particular, exclusion is often triggered by visual features such as skin tone, as these are salient markers that signify membership of specific social groups, some of which are categorized as outgroups (Harrison and Thomas 2009). Categorization of an individual as an outgroup member can set in motion a number of psychological processes that result in prejudice and discrimination toward that individual (Perdue, Dovidio, Gurtman, and Tyler 1990; Gaertner and Dovidio 2009; Ho, Sidanius, Cuddy, and Banaji 2013). It is well established that skin tone modulates the perception of, and behavior toward, observed individuals even when it is task irrelevant (Ito and Urland 2003).
Those who appear different to others in their everyday environment, such as members of racial minorities, are more likely to experience chronic social exclusion throughout their lives (Silver 2007).

Attributing Social Exclusion to Prejudice

The effects of social exclusion are modulated when it is thought to be due to race, but perhaps not in an immediately intuitive way. For example, Crocker et al. (1991) found that when African American students believed they were being rejected by a White peer, their self-esteem suffered only when they believed they could not be seen. In contrast, when they believed they were visible to their peer, they did not show reductions in self-esteem, and self-report measures raise the possibility that suspicion of prejudice could have been a factor. Crocker and colleagues suggested that African American students may have invoked a protective mechanism in conditions where their race was known to the evaluator, which buffered the effects of negative feedback (see also Crocker and Major 1989). Interestingly, this protective mechanism is not only sensitive to negative messages in interpersonal contexts. Evidence also indicates that the (positive) effects of positive feedback and messaging might be attenuated for members of stigmatized groups because these messages are perceived as an insincere overture to compensate for prejudiced views (Cohen et al. 1999). Mendes et al. (2008) directly compared same-race and different-race interactions among White and Black students and measured responses to accepting and rejecting social feedback. They found that, regardless of the race of the participant, different-race rejections were more likely to be attributed to prejudice, and led to more detrimental physiological and performance outcomes, including anger. However, Black participants responded less positively to different-race acceptance than did White participants. Taken together, these results suggest that members of stigmatized groups deploy cognitive mechanisms that help buffer the effects of evaluative information emanating from members of another group. This goes hand in hand with the idea of heightened vigilance and threat sensitivity, and an expectation of rejection, among members of stigmatized groups (Meyer 2003; Fingerhut and Abdou 2017). It has been established that heightened social threat vigilance leads to biases in attention and memory, which highlight the frequency of negative social experiences and downplay the frequency of positive ones, and can even lead to behaviors that seek to confirm these biases by perpetuating negative social interactions (see Hawkley and Cacioppo 2011). Following these studies, Goodwin et al. (2010) directly investigated the mediating role of attributing social exclusion to racism among White and Black individuals. In their study, 614 White and Black adults from a broader noncollege sample engaged in a virtual ball-tossing game called Cyberball (Williams et al. 2000; Williams and Jarvis 2006). This is perhaps the most widely used paradigm for measuring and inducing social exclusion and has proved to be both effective and reliable in this regard (see Hartgerink et al. 2015 for a meta-analysis). As per the standard Cyberball paradigm, Goodwin et al.'s participants believed they were playing an online ball-tossing game with other players represented by avatars, but these players were in fact computer-controlled.
The game involved a period of inclusion ("fair play"), in which the participant received the ball one third of the time, and a period of exclusion, in which they received the ball only for the first two throws. A critical difference in Goodwin et al.'s modified paradigm was that the participant and coplayer avatars were represented in full color: participant avatars matched their true skin tone, while the skin tone of the coplayers was manipulated to match, or differ from, the participant. In addition, stereotypical White and Black coplayer names were displayed (e.g., Bill vs. Tyrone). Following the game, participants' attributions of their experience to racism were assessed and their sense of fundamental needs fulfillment was recorded. In particular, reflexive and reflective responses to social exclusion were assessed. Reflexive responses are immediate and brief, and appear to be equally strong regardless of contextual or individual differences; ostracism immediately hurts even when it is caused by technical difficulties (Eisenberger et al. 2003) or by a despised outgroup (Gonsalkorale and Williams 2007). Reflective responses occur over time and usually show recovery from the initial harmful effects of exclusion; however, these responses are mediated by contextual and individual differences (see Williams 2007, 2009, for an overview of the distinction between reflexive and reflective reactions to exclusion). In Goodwin et al.'s (2010) skin tone study, reflexive responses to exclusion by different-race coplayers were more negative for Black participants. However, among both Black and White participants, different-race exclusion was attributed to racism, and this impeded reflective recovery from its harmful effects. This study demonstrates that attributions of exclusion to prejudice, and their consequences for psychological wellbeing, can occur regardless of stigmatized group membership; the role of attributions may in fact be primarily driven by simple visual differences between groups during interactions such as Cyberball. While Goodwin et al. (2010) demonstrated that visual differences between group members influence psychological responses to exclusion, the mechanisms by which this takes place are unclear. That being said, an emerging line of research has begun to uncover the important influence of expectations on subsequent responses to experienced exclusion (see Wesselmann et al. 2017, for an excellent overview). It has been proposed that individuals monitor their environment for exclusionary cues using a "sociometer" (Leary 1999; Leary and Baumeister 2000; Williams 2009). According to sociometer theory, self-esteem is essentially a psychological gauge that tracks the quality of interpersonal relationships. This so-called sociometer constantly monitors the social environment for cues that indicate acceptance or rejection, and is thought to act as a mechanism for avoiding the potentially disastrous consequences of ostracism in the ancestral world (Leary 1999). Some evidence suggests that the perceived accuracy of the sociometer is related to responses to exclusion. For example, Wesselmann et al. (2010) observed more aggressive responses to exclusion when it was unexpected, compared with when it was expected, and this was associated with decreased confidence in the sociometer. In addition, Wirth et al. (2017) found that fundamental needs were more threatened after unexpected compared with expected exclusion, with the same decrease in sociometer confidence also observed in that study.
Given that expectations about participation can be modified by both unambiguous cues (e.g., Williams et al. 2000; Maner et al. 2007) and subtle cues (e.g., Wirth et al. 2010; Böckler et al. 2014), it is worth investigating whether visual differences between group members may act as a subtle cue for exclusion, by providing an external factor to which exclusion can be attributed. In this way, it is possible that the participants in Goodwin et al.'s (2010) Cyberball study who attributed exclusion to racism formed exclusionary expectations upon seeing the visual skin tone difference between themselves and the coplayers. Addressing this possibility requires understanding the cognitive mechanisms at play during Cyberball, which would offer insights into how visual differences, attributions, and expectations shape the cognitive appraisal of exclusion as it is experienced.

The P3b in Social Exclusion Paradigms

While self-report and behavioral measures have been useful for inferring the effects of social exclusion after it has occurred, the EEG technique has proved valuable in observing neural responses to social exclusion while it takes place. In particular, the event-related potential (ERP) technique has highlighted that the P3 component is sensitive to specific inclusionary and exclusionary events, and to the wider context of inclusionary and exclusionary interactions. This component can be subdivided into components that are thought to reflect different levels of attentional allocation; specifically, the P3a, which is generated in frontal regions, and the P3b, which is generated in temporo-parietal regions (Polich 1989; Kok 1997; Rusby et al. 2005; Polich 2007). It is theorized that the P3a is more linked to earlier processing of attended stimuli (Potts et al. 1996; McCarthy et al. 1997; Verbaten et al. 1997), whereas the P3b reflects later interactions between the allocation of attentional resources and memory operations (Knight 1996; Squire and Kandel 1999; Brázdil et al. 2001). In particular, the P3b is believed to mark a process of updating working memory representations (Polich 2007, though see Kessler 2016, 2019). While the P3b has often been elicited in response to unexpected or unlikely outcomes (e.g., Kopp et al. 2016), its presence in social exclusion paradigms such as Cyberball probably reflects context-related attentional changes (see Kiat et al. 2018), which relate to the assignment of subjective relevance to specific events. One of the earliest studies to investigate the P3 response during Cyberball was conducted by Gutz et al. (2011; but see also Crowley et al. 2009). Their study measured the P3a and P3b while participants engaged in a modified game of Cyberball in which partial exclusion occurred (16% of throws toward the participant, as opposed to 33% during inclusion). Partial, as opposed to full, exclusion is used to ensure that there are sufficient "events" in which participants receive the ball within the wider context of exclusion. They found that both components were sensitive to whether coplayer throws were inclusionary (directed to the participant) or exclusionary (directed to the other coplayer), but that this interacted with the wider context of inclusion or exclusion. Specifically, these components were larger in response to inclusionary throws during exclusionary periods of the game. In addition, this effect was attenuated if exclusion took place first, and larger if exclusion followed initial inclusion.
Based on research indicating that the P3 is modulated by stimulus probability in oddball paradigms (Donchin and Coles 1988), and that it marks the allocation of attentional resources and the updating of working memory representations (Polich 2007), Gutz et al. (2011) interpreted their results as reflecting a violation of expectations: when exclusion took place, throws toward the participant violated their representation of their participatory status as excluded individuals, resulting in enhanced P3a and P3b activation. Importantly, P3a enhancements were associated with affective processing of exclusion, while P3b enhancements were linked to the perceived intensity of the episode of exclusion. Since the P3 can be modified by stimulus probability, this raises the possibility that it is not related to any form of social processing, and that its enhancement in response to inclusionary throws within exclusionary contexts is simply driven by the reduced probability of these inclusionary events: receiving the ball during exclusion is less likely to occur, thus rendering it an oddball event (Donchin and Coles 1988). Importantly, Weschke and Niedeggen (2015) ruled out the possibility that P3b enhancements during Cyberball were due to probability, showing that they were in fact sensitive to expectations about social participation. In an elegant design, they independently manipulated the probability of ball throws and the social expectation of receiving the ball. All participants underwent a typical period of inclusion with two coplayers, followed by a period in which the probability of receiving the ball decreased from 50% to 20%. However, one group played this second period with five coplayers instead of two. For this group, receiving the ball only 20% of the time was in line with their expectations regarding the social nature of the game. Indeed, these participants did not show any P3b enhancements when receiving the ball, despite the low likelihood of this event. As such, Weschke and Niedeggen proposed that the P3b effect observed within Cyberball paradigms is primarily driven by a cognitive process that is sensitive to expectancy violations, and in particular to high-level social expectations related to exclusion. Given the link that Gutz et al. (2011) propose between the P3 and expectations, and the fact that this can be measured within an ERP Cyberball paradigm, the question arises as to whether the formation of exclusionary expectations caused by external factors (rather than the actual experience of inclusion or exclusion) can influence the P3 response during Cyberball. Though this has not yet been studied in detail, some research provides convincing evidence that this is the case. For example, Gutz et al. (2015) ran a similar Cyberball study on participants with borderline personality disorder (BPD), who tend to show interpersonal dysfunction and are thought to engage in biased processing of social threat information (Clark and Wells 1995; Arntz et al. 1999). While healthy controls showed the established pattern of enhanced P3b responses to inclusionary throws during exclusion (but not inclusion), BPD participants showed this pattern during both inclusion and exclusion, suggesting that the representation of participatory status formed by BPD participants was one of exclusion, regardless of actual experience.
This was mirrored in self-report data, which highlighted increased threatening of needs during inclusion and a lower perceived proportion of throws received; that is, even when BPD participants received the ball as often as their coplayers, they reported receiving it less often. This study demonstrates that different psychological states can result in negatively biased perceptions of social interactions, which in turn influence P3b activation patterns. The relationship between exclusionary expectations and the P3b has also been demonstrated outside of the Cyberball paradigm, and in the context of stigmatized group status. Kiat et al. (2017) employed a simple task in which White and Black participants made choices that were either neutral ("Dog" vs. "Cat") or stereotyped with regard to Black American culture ("Hip Hop" vs. "Rock and Roll"). Following their choice, an image was presented in which their avatar was seated at a lunchroom table either with their "best friends" (inclusion) or alone (exclusion). ERP results revealed an enhanced P3b response to inclusion images only after Black participants made stereotyped choices, suggesting that these choices provided a cue for attributing exclusion to their race. Since this resulted in exclusionary expectations, subsequent images of inclusion presented a discrepancy with participants' representation of their participatory status, possibly triggering mechanisms involved in updating these representations in memory, as indexed by the P3b component.

The Current Study

Taken together, these ERP studies suggest that subtle cues can enhance expectations of exclusion and result in modified neural responses to the experience of exclusion, and inclusion, as it takes place in real time. Considering previous self-report Cyberball studies such as Goodwin et al. (2010), in which attributing exclusion to visual group differences altered the effects of exclusion, it is possible that such visual differences would also enhance exclusionary expectations, which could be measured using the P3b. To address this, we employed a modified Cyberball paradigm in which participants were represented as avatars with white or brown skin tones and believed they were playing with other players, who were in fact computer-controlled and were represented with avatars whose skin tone either matched or differed from the participant's. ERPs were measured in response to inclusionary and exclusionary throws, during wider periods of inclusion and exclusion, with a focus on the P3a and P3b components. After the game was complete, we assessed fundamental needs through self-report measures. We predicted that participants made to appear visually different (in terms of skin tone) from their coplayers would show enhanced P3b responses to inclusionary throws relative to exclusionary throws during both periods of inclusion and exclusion, while those made to appear similar to their coplayers would show this pattern only during exclusion. This would suggest that skin tone differences act as a subtle, task-irrelevant cue that creates exclusionary expectations. Since previous research suggests that this would specifically influence processes related to subjective relevance and working memory representations (Gutz et al. 2015; Kiat et al. 2017), we predicted that skin tone differences would not modulate the pattern of P3a activation. With regard to the psychological outcomes resulting from exclusion, previous findings are not straightforward. While behavioral (Crocker et al.
1993; Crocker 1999; Major et al. 2003) and neuroimaging research (Masten et al. 2011) suggests that attributing exclusion to factors such as skin tone can act as a protective buffer, reducing reactivity and thereby limiting its harmful effects, other studies have demonstrated clearly negative outcomes, including anger (Mendes et al. 2008) and slowed recovery of needs fulfillment (Goodwin et al. 2010). Overall, there is a consensus that intergroup rejection evokes external negative reactions such as anger, while intragroup rejection results in internalized reactions such as self-blame, which has implications for self-esteem (see Crocker and Major 1989). In view of these considerations, we predicted that needs fulfillment, as measured immediately after the Cyberball game, would be less harmed by exclusion from different skin tone coplayers relative to same skin tone coplayers.

Participants

Forty-nine McMaster University students were recruited for the experiment in exchange for course credits. Because McMaster University contains large populations of White and South Asian individuals, these groups were the focus of our skin tone manipulation. Prescreening ensured that only participants who self-reported as Caucasian or South Asian were recruited. One participant was excluded from analysis due to excessively noisy EEG activity (more than 15% of trials removed during artifact rejection). The remaining 48 participants consisted of 28 White-skinned (20 female and 8 male) and 20 Brown-skinned (12 female and 8 male) individuals aged between 18 and 21 (mean age 19.13). Sample size was determined based on a simulation-based power analysis; this sample size was estimated to achieve 81% power to observe a three-way interaction between the factors possession, social context, and skin tone with a "large" effect size (ηp²) of 0.14. Informed consent was provided by all participants in accordance with the Declaration of Helsinki (1991, p. 1194), and they were debriefed at the end of the experiment, at which point the true nature of the study was revealed. Ethical approval was granted by the McMaster Research Ethics Board (MREB, certificate #0669).

Procedure

Upon arrival, participants were informed that the experiment was investigating the neural correlates of visual imagination during interactions, and that two other participants had also recently arrived and would be tested simultaneously in separate testing rooms. In fact, there were no other participants being tested. After providing informed consent, participants were given the Vividness of Visual Imagery Questionnaire (VVIQ, Marks 1973); this has been administered in several other studies using the Cyberball paradigm (e.g., Gutz et al. 2011, 2015) and acts to provide a false cover story to prevent participants' awareness of the true nature of the study. During completion of the VVIQ, the experimenter entered the participant's age, name, and gender (male or female) as provided by the participant, and discreetly recorded the participant's skin tone (White or Brown) and hair color (blonde, brown, black, or red). Participants were randomly and discreetly assigned to one of two skin tone conditions (same or different). The experimental program was run in MATLAB 9.2 (R2017a) (Mathworks, Inc., MATLAB 2017), using the Cogent 2000 Toolbox (www.vislab.ucl.ac.uk/Cogent/) to display stimuli on a gray background.
The following instructions were provided to participants on the screen: "In this experiment, you will be playing an online ball game with two other participants. At different times, you will be asked to imagine playing the ball game in a given setting. We have generated a simple avatar to represent you during this game. This is you!" Alongside these instructions appeared a blank face that was selected from a set of 16 images composed of each combination of gender, skin tone, and hair color. The selected face matched the properties of the participant as closely as possible. Below this face, additional text appeared on screen: "The other two players will have avatars representing them. We want you to imagine really playing the ball game with these people." Participants were reminded to minimize eye movements and fixate on a point in the center of the screen throughout the experiment. Following the instructions, a waiting screen was displayed in which the participant was told how many players were ready; this number was initially selected at random between 1 and 3. After a random interval of 5-30 s, the number was increased until all three players were ostensibly ready. A practice session then began, with three face images on screen: the participant's avatar (as presented during the instructions) appeared in the bottom-center of the screen, while the two coplayer avatars appeared in the top-left and top-right corners. These were randomly generated from the same set of images as the participant's avatar, on the basis of their details and their assigned condition. Specifically, the gender of all avatars was always the same (3 males or 3 females), and the skin tone of the coplayer avatars matched that of each other and either matched or differed from that of the participant, depending on their skin tone condition assignment. Coplayers were assigned names, which appeared under their avatars, based on the participant's ethnicity, gender, and skin tone condition: White-skinned coplayers were named "Amy" and "Claire" for females and "Jake" and "Connor" for males, while Brown-skinned coplayers were named "Miryam" and "Prisha" for females and "Vihaan" and "Aarav" for males. Participants did not see their own name on the screen, but were told that the other players would see their name. In addition, a black-and-white soccer ball was drawn on screen. Initially, the ball appeared at random next to one of the three players. From this point onwards, the program waited indefinitely for the participant to press either the left or right arrow key on the keyboard whenever the ball was adjacent to the participant. After a key response, the ball was drawn slightly larger at the center of the screen for 500 ms to give the impression of rising into the air, and was then drawn at the original size adjacent to the left or right coplayer, depending on the key press. Once the ball was in the possession of a coplayer, following a random delay of 500-2500 ms, the ball was "thrown" (using animations identical to the participant's throws) either to the participant or to the other coplayer. An initial practice phase included 10 coplayer throws, with 5 directed to the player. After the practice, new instructions appeared stating that participants would be presented with an image of a scene at regular intervals throughout the task.
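The throw mechanics just described amount to a simple event loop. The following Python sketch is an illustrative reimplementation (the actual experiment ran in MATLAB/Cogent, and all names here are hypothetical); the target of each coplayer throw is drawn from a trial list of the kind described in the next paragraphs:

import random
import time

def run_coplayer_throw(holder, trial, draw):
    """Animate one coplayer throw.

    holder: "left" or "right" coplayer currently holding the ball
    trial:  "self" (throw to the participant) or "other"
    draw:   callback that renders the ball at a named position
    """
    time.sleep(random.uniform(0.5, 2.5))   # random 500-2500 ms delay
    draw("center_large")                   # ball 'rises' for 500 ms
    time.sleep(0.5)
    if trial == "self":
        target = "participant"
    else:
        target = "right" if holder == "left" else "left"
    draw(target)                           # ball lands at its target
    return target

# e.g., run_coplayer_throw("left", "self", print) animates a throw
# from the left coplayer to the participant.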
Following these instructions, the same waiting screen appeared as before; once all players were supposedly ready, the main task began with an image of a field or a gymnasium, alongside the text "While playing the next ball game, try to imagine playing it here." After 10 s, the ball game began exactly as it did during the practice session. However, the program now used a predetermined trial specification to determine whether each coplayer throw would be directed at the player ("self" event) or at the other coplayer ("other" event). Two trial specifications were used, corresponding to periods of inclusion versus exclusion. A period of inclusion consisted of 60 coplayer throws, with 30 (50%) directed at the participant; a period of exclusion consisted of 75 coplayer throws, with 15 (20%) directed at the participant. Because each throw directed at the participant led to an additional throw by the participant, both trial specifications resulted in 90 total throws, lasting ∼2.5 min. The order of these periods was counterbalanced across participants (Fig. 1). Each period was separated into four blocks, each using the same trial specification but shuffling the order of trials each time. During exclusion blocks, shuffling was constrained so that "self" events never repeated sequentially, to avoid the illusion of re-inclusion (see the sketch after this paragraph). The image shown before each block alternated between the field and gymnasium images. In total, the entire game comprised 120 self events and 120 other events during inclusion, and 60 self events and 240 other events during exclusion, for a total of 540 events. Each period lasted ∼10 min, with a 60-s break between periods. Depending on their order condition assignment, participants underwent either inclusion or exclusion first. EEG trigger codes were used to mark the moment that each coplayer throw reached its target. Because the ball appeared as static images, this moment was the earliest point in time at which participants could be aware of whether the ball had been thrown toward them ("self" event) or toward the other coplayer ("other" event). Trigger codes distinguished between the two within-subjects factors: separate codes were used for self and other events (possession factor), and for inclusion and exclusion periods (social context factor), resulting in four different event conditions. Upon completion of the Cyberball game, the program closed and participants were given two instances of the Need-Threat Questionnaire (NTQ, Van Beest and Williams 2006); each corresponded to a separate period of the game (referred to as the "first half" and "second half") and asked participants to consider how they felt during that period when responding. After the EEG cap was removed, they were given the Rejection Sensitivity Questionnaire (RSQ, Downey and Feldman 1996). Due to a change in protocol partway through the period of data collection, half of the participants (those collected during the latter half of data collection, N = 24) were given three additional questions to assess the extent to which exclusion was attributed to racial prejudice. These were identical to the attribution questions used by Goodwin et al. (2010), asking participants to rate from 1 to 5 the extent to which they believed (1) they had been treated as they were due to their ethnicity, (2) they had been discriminated against, and (3) that the coplayers were racist.
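The constrained shuffle referenced above (per-block trial lists, with no sequentially repeated "self" events during exclusion) can be expressed compactly; this Python sketch is our own illustration of the logic, not the authors' MATLAB code, and the function name is hypothetical:

import random

def make_block(n_self, n_other, no_self_runs=False):
    """Return a shuffled list of 'self'/'other' coplayer throws.

    If no_self_runs is True (exclusion blocks), reshuffle until no two
    'self' events are adjacent, avoiding the illusion of re-inclusion.
    """
    trials = ["self"] * n_self + ["other"] * n_other
    while True:
        random.shuffle(trials)
        if not no_self_runs:
            return trials
        if all(not (a == b == "self")
               for a, b in zip(trials, trials[1:])):
            return trials

# One inclusion block: 60 coplayer throws, 50% directed at the player.
inclusion_block = make_block(30, 30)
# One exclusion block: 75 coplayer throws, 20% directed at the player,
# with no back-to-back 'self' events.
exclusion_block = make_block(15, 60, no_self_runs=True)

With 15 "self" events spread among 75 throws, valid orderings are plentiful, so the rejection-sampling loop terminates after only a few shuffles on average.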
Finally, all participants were given a debrief form along with the opportunity to withdraw after discovering the true nature of the study.

EEG Data Collection

EEG was recorded using a 64-channel Neuroscan Quik-Cap with a 36-channel montage (FP1, FP2, F7, F3, Fz, F4, F8, FT7, FC3, FCz, FC4, FT8, T7, C3, Cz, C4, T8, TP7, CP3, CPz, CP4, TP8, P7, P3, Pz, P4, P8, PO7, PO3, PO4, PO8, O1, Oz, O2). The ground electrode was positioned between FPz and Fz, and the reference electrode was positioned between Cz and CPz. Impedances were kept below 10 kΩ prior to data collection. Data were referenced online to the reference electrode and sampled at 1000 Hz. After data collection, data were transferred to the EEGLab plugin (Delorme and Makeig 2004), re-referenced offline to the average of two electrodes placed on the mastoids, and initially bandpass filtered with an FIR filter (using EEGLab's "pop_eegfiltnew()" function) between 1 and 20 Hz for the purposes of independent component analysis (ICA). Data were then epoched based on each of the four event conditions using the ERPLab plugin (Lopez-Calderon and Luck 2014). ICA was then run on each dataset using EEGLab's SOBI function (second-order blind identification; Sahonero-Alvarez and Calderon 2017). The derived ICA weights were then transferred to the dataset using a more conservative bandpass filter of 0.1-40 Hz. The MARA plugin (multiple artifact rejection algorithm; Winkler et al. 2011, 2014) was then used to automatically classify and subsequently remove components classified as artifacts. Finally, all remaining trials containing ±100 μV waveforms were removed. In total, no more than 6.67% of trials were removed per participant. Using the ERPLab plugin, subject ERPs were created by averaging epochs within each event condition. To ensure a comparable signal-to-noise ratio (SNR) across event conditions, trials from all conditions except self-exclusion were randomly removed until 60 events remained. While mean amplitude analysis is not considered to benefit substantially from equal SNR (e.g., Luck 2005, 2012; Clayson, Baldwin, and Larson 2013), we chose this method to facilitate other forms of ERP analysis if required. To further improve comparability, "self" events that were immediately preceded by other "self" events (which was possible only during the inclusion period) were removed from analysis. To determine suitable electrode sites for mean amplitude analysis, grand average positive peak latencies (averaged across all events and all participants) were detected at the 15 central electrode sites (F3, Fz, F4, FC3, FCz, FC4, C3, Cz, C4, CP3, CPz, CP4, P3, Pz, P4) between 200 and 300 ms poststimulus for the P3a, and between 300 and 400 ms for the P3b, and mean amplitudes were measured across time windows extending 40 ms before and after these latencies. The highest mean amplitude extending from the peak in the 200-300 ms time window was observed at electrode Cz (6.55 μV), and in the 300-400 ms time window at electrode CPz (5.48 μV). For these two electrode sites, positive peak latencies were detected separately for each combination of possession, social context, and skin tone, and mean amplitudes were again measured extending ±40 ms from these latencies. The same analyses were also conducted at electrodes FCz (for the P3a) and Pz (for the P3b), as these sites have often been used to assess P3a/P3b activity (Themanson et al. 2015).
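The peak-defined measurement just described is straightforward to express in code. A minimal NumPy sketch of the procedure (the array layout and function name are our own assumptions, not the authors' ERPLab pipeline):

import numpy as np

def peak_defined_mean_amplitude(erp, times, t_min, t_max, half_win=0.04):
    """Mean amplitude in a window centred on the positive peak.

    erp:    1-D array, ERP waveform at one electrode (e.g., in uV)
    times:  1-D array of sample times in seconds (0 = stimulus onset)
    t_min, t_max: search window for the positive peak, e.g. 0.3-0.4 s
                  for the P3b; half_win: +/-40 ms around the peak.
    """
    search = (times >= t_min) & (times <= t_max)
    peak_t = times[search][np.argmax(erp[search])]          # peak latency
    win = (times >= peak_t - half_win) & (times <= peak_t + half_win)
    return erp[win].mean()

# e.g., the P3b measure at CPz for one condition average:
# p3b = peak_defined_mean_amplitude(cpz_erp, t, 0.3, 0.4)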
In addition to peak-defined time windows, we calculated mean amplitudes using fixed time windows of 230-310 ms for the P3a and 310-390 ms for the P3b, as these have also been employed in previous research (Gutz et al. 2011, 2015). All of these analyses produced virtually identical results. P3a and P3b mean amplitudes were separately entered into a 2 × 2 × 2 repeated-measures ANOVA with the within-subjects factors possession (self, other) and social context (inclusion, exclusion), and the between-subjects factor skin tone (same, different). Follow-up t-tests were conducted to assess interactions, with the Holm-Bonferroni correction applied where multiple familywise comparisons were made.

Questionnaire Results

To assess participants' sense of threatened needs, a 2 × 2 repeated-measures ANOVA was run with the factors social context and skin tone, separately for three measures obtained from the NTQ: perceived proportion of received throws, reported ostracism, and need-threat. For all three measures, a main effect of social context was observed (all F(1,44) > 94.19, all P < 0.0001, all ηp² > 0.68), confirming that exclusion elicited a lower perceived proportion of throws, higher reported ostracism, and increased threatening of needs.

Need Threat

For need-threat scores, the same interaction between social context and skin tone was observed (F(1,44)).

Rejection Sensitivity

Finally, an independent-samples t-test on rejection sensitivity, as measured by scores on the RSQ, showed no difference between groups (mean difference = 0.33, t(46) = 0.40, P = 0.69).

Supplementary Analysis: Order Effects

Because we counterbalanced the order in which participants experienced inclusion and exclusion, we were able to compare the pattern of results depending on this order. This was done by including the factor order (included first, excluded first) in the ANOVA model, resulting in a 2 × 2 × 2 × 2 mixed-design ANOVA. However, since our power analysis did not account for any interactions with order, we treat this as an exploratory analysis. For the P3a, possession interacted with order (F(1,44) = 17.24, P < 0.001, ηp² = 0.28). Follow-up t-tests indicated no difference between self and other events for those in the included first condition (mean difference = 0.46 μV, t(44) = 0.76, P = 0.45), but significantly larger responses to self (8.62 μV) versus other (4.55 μV) events for those in the excluded first condition (mean difference = 4.07 μV, t(44) = 6.63, P < 0.0001). There were no other main effects or interactions involving the factor order (all F(1,44) < 2.85, all P > 0.09). The questionnaire data were also analyzed with the addition of the factor order. This revealed a lower overall perceived proportion of throws for those in the excluded first group (19.66%) relative to the included first group (24.86%; mean difference = 5.21%, F(1,44) = 8.27, P < 0.01, ηp² = 0.16; Fig. 3).

Discussion

This study aimed to determine whether perceived racial differences between Cyberball players, as cued by the skin tone of avatars, would modulate expectations of exclusion, as indexed using the P3b component. In particular, we predicted that participants whose avatars had a different skin tone from their coplayers would maintain an expectation of exclusion even while experiencing inclusion. We analyzed P3b mean amplitudes in response to ball throws that took place within periods of inclusion and exclusion.
During both periods, these "visual out-group" participants elicited an enhanced P3b when receiving the ball, suggesting that receiving the ball violated their representation of their own participatory status (Gutz et al. 2015). We suggest that this reflects a context-related change in attentional processing, which can be interpreted as the assignment of subjective relevance to this event (Kiat et al. 2018). The mismatch between these participants' perceived exclusionary status and the inclusionary nature of the game suggests that they expected to be excluded during inclusion, while visual in-group participants did not. Since these groups differed only in terms of the skin tone of the coplayers relative to the participant, and since skin tone was not made explicitly salient or relevant during the game, this group difference suggests that skin tone acts as a subtle cue that activates expectations of exclusion. The lack of any group differences in the pattern of P3a activity highlights that skin tone differences specifically influence cognitive processes related to subjective relevance and the updating of working memory representations, as opposed to changes in the orientation of focal attention (Polich 2007).

Figure 2. (A) Grand-averaged ERP waveforms elicited at electrode CPz by "self" and "other" events, during periods of inclusion and exclusion, shown separately for participants assigned to the same and different conditions. Stars represent peak latencies and mean amplitudes for each trial condition, with horizontal bars representing the peak-defined time window used for analysis of P3b mean amplitudes (±40 ms from peak latency). (B) P3b mean amplitudes in each condition (error bars represent SEM). "Self" versus "other" events are compared separately for inclusion and exclusion, and separately for participants in each skin tone condition. In the different condition, participants showed significantly larger P3b amplitudes to self versus other events during inclusion as well as exclusion, qualifying a three-way interaction between social context, possession, and skin tone.

Comparisons of P3b amplitudes in response to self and other ball-possession events were used as a marker for the expectation of exclusion. As described already, several previous ERP studies using the Cyberball paradigm (e.g., Gutz et al. 2011; Themanson et al. 2013; White et al. 2013) have shown that during a period of actual exclusion, but not inclusion, participants do indeed show larger P3b responses to self events, and this has been attributed to an expectancy-based account of social participation (Weschke and Niedeggen 2015; Schuck et al. 2018). The current study supports these findings, and provides further support for more recent findings that the P3b response during Cyberball can be modulated without any changes in actual experience. Just as in Gutz et al. (2015), we found different patterns of P3b activation between groups of participants who both underwent the same periods of inclusion and exclusion. While Gutz et al. (2015) demonstrated this effect in relation to clinical psychological factors, more recent work by Kiat et al. (2017) has revealed similar P3b pattern modulations within racial minority participants, driven by subtle social cues in the form of racially stereotyped choices. However, the current study is the first to confirm that this modulation can take place during Cyberball as a sole result of task-irrelevant visual cues, and between two groups who do not differ in trait rejection sensitivity.
An alternative interpretation of differences in P3b amplitude between self and other events is that these events vary in their level of self-relevance and in the extent to which they require an upcoming decision (see Kawamoto, Nittono, and Ura 2013). Self events indicate that the participant is involved in the observed event and that a decision is required regarding the next throw, thus necessitating more attention. Other events do neither of these things and do not place the same demands on attention. However, this interpretation does not explain the absence of a significant P3b difference between self and other events during inclusion in the same skin tone condition. Furthermore, differences in the pattern of P3b results across skin tone groups cannot be explained without acknowledging the influence of the avatars' colors. Therefore, we suggest that some degree of attentional processing in response to self and other events is modified either by skin tone differences or by the social cues that these differences elicit. Why might skin tone cue exclusionary outcome expectations in this way? Considering the similarity between this paradigm and that of Goodwin et al. (2010), in which coplayer skin tone was also manipulated, it is likely that individuals search for information to which they can attribute exclusion, and this attribution is believed to account for many of the observed changes in psychological responses to exclusion (Crocker and Major 1989; Mendes et al. 2008). In fact, Goodwin et al. provide convincing evidence that attributing exclusion to prejudice is strongly related to subsequent psychological recovery. However, most of these conclusions are drawn on the basis of self-report measures taken after exclusion has occurred. For this reason, it is difficult to determine whether attributions take place after the experience of exclusion, retrospectively modifying the cognitive appraisal of this experience, or whether the cues that are known to drive these attributions (such as skin tone differences) also influence the cognitive processing of exclusion as it occurs. The ERP technique utilized in the current study sheds light on these processes and reveals that, in addition to facilitating retrospective attributions of exclusion to prejudice, subtle visual cues prospectively modify the formation of social expectations as interactions take place in real time. Assuming that skin tone differences modulate exclusionary expectations in this way, it might be expected that the order of inclusionary and exclusionary periods would produce a similar effect. Some previous research raises the possibility that the ongoing experience of inclusion and exclusion shapes expectations of upcoming exclusion; for example, exclusionary expectations may increase after having just experienced exclusion relative to having just experienced inclusion. However, ERP evidence for such order effects is not clear: while Gutz et al. (2011) found that the exclusion effect on the P3a was significant only when participants were included first, and not when excluded first, no such interaction was observed for the P3b, and to our knowledge no other ERP studies have demonstrated systematic order effects on the P3b in Cyberball paradigms; instead, the majority of ERP studies since Gutz et al. have opted to maintain a fixed order (e.g., Themanson et al. 2015; Niedeggen et al. 2019). As demonstrated by our exploratory analysis of order effects, the P3b exclusion effect trended toward interacting with order.
Since our study manipulated order only as a counterbalancing measure and was thus not designed a priori to investigate its influence, we interpret this finding only as motivation for future research: first, to continue systematically investigating the influence of temporal order on neurophysiological and psychological responses to inclusion and exclusion, and second, to further explore the relationship between order and other factors (such as visual group differences) that may cue exclusionary expectations. Another assumption following from an expectancy-based account of our results is that responses should change dynamically within a continuous period of Cyberball. Specifically, differences in P3b amplitude between self and other events should reduce as the game continues. This has indeed been observed in recent studies (Schuck et al. 2018; Niedeggen et al. 2019). To investigate this, we ran a separate ANOVA that mirrored the main three-way ANOVA but also included the within-subjects factor half, comprising two levels (first half and second half), with ERP events divided chronologically within the inclusion and exclusion periods. The main effect of half (F(1,44) = 13.99, P < 0.001, ηp² = 0.24) interacted with social context (F(1,44) = 5.66, P = 0.02, ηp² = 0.11) and with possession (F(1,44) = 7.91, P < 0.01, ηp² = 0.15); the three-way interaction between these factors was not significant (F(1,44) = 0.07, P = 0.80). This analysis revealed a general reduction of P3b amplitudes over time, as well as a reduction in the difference between P3b responses to self versus other events. However, the possession × social context interaction, which marked the exclusion effect seen in the main model, did not change throughout the course of each game. This finding is not in line with previous studies, raising questions about the reliability of adaptation effects in Cyberball and highlighting the need for more research to assess the conditions under which expectations might change dynamically. The current study may constitute an example of visual cues creating a priori expectations that are robust in the face of current experience. The link between skin tone and attribution drawn by Goodwin et al. (2010) mirrors the link between skin tone and the expectation of exclusion observed in the current study. To assess this more closely, we added an additional set of questions halfway through data collection (24 participants), for participants to respond to at the very end of the experimental session. An independent-samples t-test revealed that attributions of exclusion to racism were significantly higher for those in the different condition (mean difference = 2.14, t(22) = 7.21, P < 0.0001). While this supports our claim that skin tone differences may act as a common cause of both attributions for exclusion and the formation of expectations of exclusion, it should be treated as an exploratory finding that warrants more systematic investigation. In particular, it will be important to investigate how direct the link between attributions and expectation formation is; as there was very little variation among responses in the current study, it was not feasible to directly correlate attribution scores with the magnitude of the P3b effect. In addition to the ERP data, the questionnaire data revealed a complex but intriguing set of effects of skin tone difference on perceived proportion of throws received, reported ostracism, and threatening of needs.
Specifically, during inclusion, visual out-group members reported receiving the ball less often than visual in-group members did, as well as showing a marginal increase in reported ostracism and an increased threatening of needs. These results are in line with previous claims that members of stigmatized groups sometimes exhibit heightened threat vigilance and rejection sensitivity, and may hold an expectation of being excluded (Major et al. 2003; Clark et al. 2006; Carter et al. 2013). Intriguingly, responses during exclusion showed the opposite pattern of results: visual out-group members reported significantly less ostracism and reduced need threat relative to visual in-group members. This is consistent with other research suggesting the presence of cognitive coping mechanisms that can act as a protective buffer against the harmful effects of exclusion (Crocker and Major 1989; Crocker et al. 1991; Cohen et al. 1999). On the basis of our ERP results, one such mechanism may be the formation of exclusionary expectations. Ultimately, our questionnaire data add to the literature highlighting the complex psychological impact of intergroup exclusion, and suggest that the associated cognitive coping mechanisms may attenuate negative outcomes during actual exclusion, but sustain negative outcomes outside the context of exclusion (Clark et al. 2006; Hawkley and Cacioppo 2011; Hicken et al. 2013, 2014). While much research highlights the differences between reflexive needs, which tend to be affected equally regardless of factors such as group membership, and reflective needs, which often show reduced recovery after outgroup exclusion (see Wirth and Williams 2009; Goodwin et al. 2010), our study is difficult to interpret within the context of reflexive and reflective needs, since only one questionnaire was taken for each period of inclusion/exclusion. For this reason, we make no claims about the differential impact of exclusion on reflexive versus reflective needs, and instead take our results to suggest that, in general, participants made to appear visually different were less harmed by exclusion but more negatively affected by inclusion. Importantly, our results demonstrate that these effects can be driven by immediate context, rather than purely by prolonged lived experience. Because the relative proportion of White- and Brown-skinned participants was kept constant across skin tone conditions (14 and 10, respectively, per condition), our study reveals that even members of nonmarginalized, racial majority groups can experience heightened threat vigilance purely on the basis of observed differences between their own skin tone and that of their interaction partners. These subtle, task-irrelevant visual cues alone are sufficient to trigger changes in the psychological response to exclusion. To what extent are the results of this study caused by skin tone and its connotations for racial group membership? One possibility that cannot be ruled out is that the observed pattern of P3b results was driven by low-level visual differences entirely unrelated to high-level properties such as group membership. Another possibility is that, while skin tone differences created general perceptions of ingroup/outgroup dynamics, these may not necessarily be related to racial prejudice itself. Countering these possibilities, there is some evidence that the type of identity difference between Cyberball players can influence the psychological responses to exclusion.
Wirth and Williams (2009) manipulated either the temporary group membership of participants versus coplayers via avatar colors (green vs. blue), or their permanent group membership via gender. By comparing reflexive and reflective need fulfillment, the authors showed that recovery from the initial harm of exclusion was reduced when participants were identified by permanent versus temporary group membership, indicating that the effect of these visual differences was at least partly due to the higher-level social connotations that they evoked. However, research investigating the P3b component during Cyberball has not yet delved into this question, and it remains to be seen whether the P3b modulations observed in the current study could be driven by other forms of visual difference between players. Elsewhere in the literature, it is known that minimal group differences can result in changes in behavior and perception toward arbitrarily defined ingroup and outgroup members (Van Bavel et al. 2008), and can even override the effects of race (Van Bavel and Cunningham 2009). To investigate the strength of minimal groups in influencing exclusionary expectation formation within an ERP Cyberball paradigm, manipulating shirt color would provide a cue that is as visually salient as skin tone but without any obvious connotations for permanent group membership such as ethnicity.

In a similar vein, the question arises as to whether group differences must be visible in order to shape expectations about upcoming social interactions. Considering that these differences invoke high-level social concepts (e.g., Mendes et al. 2008; Wirth and Williams 2009), it would be logical to assume that visibility is not a requirement; however, virtually no research (to our knowledge) has directly examined exclusion by nonvisible outgroups, which exist in the real world in countless forms (political affiliation, religious beliefs, socio-economic status, and psychological disorders, to name only a few). A fruitful direction for future research would be to make group differences salient using nonvisual cues such as verbal descriptions.

In conclusion, the current study highlights the power of a simple, task-irrelevant visual cue in influencing the experience and psychological impact of social interactions. In line with the expectancy-based account of the P3b in Cyberball (Kiat et al. 2017, 2018), we suggest that visual skin tone differences between players and interaction partners can activate the expectation to be excluded in preparation for upcoming social interactions. Such expectations of exclusion may be linked to increased vigilance and a process that buffers the effects of exclusion, thus modulating the impact on psychological wellbeing; specifically, vigilance and associated processes may reduce the short-term harm of actual exclusion, but impair need fulfillment outside of exclusionary contexts. Ultimately, this study adds to the literature demonstrating the impact of high-level social information on real-time neural responses to inclusion and exclusion.

Notes

Conflict of Interest: None declared.

Funding

This work was supported by the Canada Foundation for Innovation, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada.
The L^1-norm of exponential sums in Z^d

Let A be a finite set of integers and F_A its exponential sum. McGehee, Pigno & Smith and Konyagin have independently proved that the L^1-norm of F_A is at least c log|A| for some absolute constant c. The lower bound has the correct order of magnitude and was first conjectured by Littlewood. In this paper we present lower bounds on the L^1-norm of exponential sums of sets in the d-dimensional grid Z^d. We show that the L^1-norm of F_A is considerably larger than log|A| when A is a subset of Z^d with multidimensional structure. We furthermore prove similar lower bounds for sets in Z which, in a technical sense, are multidimensional, and discuss their connection to an inverse result on the theorem of McGehee, Pigno & Smith and Konyagin.

Introduction

We begin with a notational remark. Throughout the paper expressions of the form Q ≤ C are taken to mean that the quantity Q is less than an appropriately chosen absolute constant C > 1. We will therefore write counter-intuitive statements like 2C ≤ C. When the constant is less than 1 a lower case c is used.

For finite A ⊂ Z^d the exponential sum of A is

F_A(x) = Σ_{a ∈ A} e(a · x),

where · is the usual dot product in R^d, e(t) = exp(2πit) and x lies in the d-dimensional torus T^d. The L^1-norm of F_A is given by

‖F_A‖_1 = ∫_{T^d} |F_A(x)| dx.

We will also write ⟨f, g⟩ = ∫_{T^d} f(x) conj(g(x)) dx for the inner product of two functions f, g : T^d → C.

Theorem 1.1 (McGehee-Pigno-Smith, Konyagin). Let A be a finite set of integers. Then

‖F_A‖_1 ≥ c log|A|.

Taking A to be a symmetric arithmetic progression about zero, and hence F_A the Dirichlet kernel, shows that the lower bound is of the correct order of magnitude [7]. The first proof works equally well when A ⊂ Z^d. The order of magnitude of the lower bound is attained when A is an arithmetic progression in Z^d. On the other hand, if A is the d-dimensional cube {(x_1, ..., x_d) : 1 ≤ x_i ≤ N for all i} ⊂ Z^d, then

‖F_A‖_1 = ‖F_{{1,...,N}}‖_1^d ≥ (c log N)^d.

It is therefore natural to ask whether a similar lower bound on ‖F_A‖_1 holds when A has a genuinely multidimensional structure.

We answer this question in the affirmative, not only for sets in Z^d, but also for sets in Z. Our results present partial progress towards answering a question of W.T. Gowers on the L^1-norm of exponential sums in Z^2, which will be stated below. They also help characterise sets of integers A for which ‖F_A‖_1 is nearly minimal.

The first step is to quantify what we mean by 'genuinely multidimensional structure'. The most typical example that comes to mind is that of the d-dimensional cube, where, as we have seen, F_A splits as a product of one-dimensional exponential sums. This exact identity no longer holds when A is tweaked and taken to be {(a_1 + x_1, ..., a_d + x_d) : 1 ≤ x_i ≤ N for all i} for fixed integers a_1, ..., a_d. We study ‖F_A‖_1 for sets that have a similar structure and show that in this case ‖F_A‖_1 ≥ log^{cd} |A|. To keep the notation simple, here and most importantly in the proofs that follow, we will from now on set d = 2 or 3. Our methods can be generalised in a straightforward manner for d > 3. Considering the general case would make what already is a notation-heavy argument even more technical without adding anything to the method.

Let us now introduce some terminology, which will be helpful in pinning down an exact meaning for 'multidimensional structure'. We call A ⊆ Z^2 a genuinely 2-dimensional set if its rows are either empty or large. We call A ⊆ Z^3 a genuinely 3-dimensional set if its planar slices are either empty or genuinely 2-dimensional sets.
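Before turning to the results, it may help to spell out the cube computation referred to above. The following short derivation is a standard sketch (via Fubini's theorem and the classical bound ‖D_N‖_1 ≥ c log N for the Dirichlet kernel), added here for orientation; it is not quoted from the original text. Writing A = {1, ..., N}^d,

‖F_A‖_1 = ∫_{T^d} ∏_{i=1}^{d} |Σ_{x=1}^{N} e(x t_i)| dt_1 ⋯ dt_d = ∏_{i=1}^{d} ∫_{T} |Σ_{x=1}^{N} e(x t)| dt = ‖F_{{1,...,N}}‖_1^d ≥ (c log N)^d,

and since |A| = N^d the right hand side is of order (log|A|/d)^d, far larger than log|A| for any fixed d ≥ 2.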
The first of our results asserts that if A is genuinely 2-dimensional then ‖F_A‖_1 is considerably larger than log|A|.

Theorem 1.2. Let A ⊂ Z^2 be finite. Suppose that A consists of at least r rows of size at least s. Then

‖F_A‖_1 ≥ c log s (log r / log log r)^{1/2}.

The stated lower bound is probably not best possible. Gowers has asked whether ‖F_A‖_1 ≥ c log r log s holds. Theorem 1.2 only gives ‖F_A‖_1 ≥ log s log^{1/2−ε} r for all ε > 0 and sufficiently large A.

The method of proof of Theorem 1.2 can also be applied to subsets of Z. To define 'multidimensional structure' in the integers we turn to a notion often used in additive problems.

Definition. Let A and B be sets in two additive groups. A map φ : A → B is a Freiman isomorphism of degree k if the relation

φ(a_1) + ⋯ + φ(a_k) = φ(a_{k+1}) + ⋯ + φ(a_{2k}) holds precisely when a_1 + ⋯ + a_k = a_{k+1} + ⋯ + a_{2k}

holds, for any choice of a_1, ..., a_{2k} ∈ A. We say A is Freiman isomorphic of degree k to B.

Our second main result asserts that if A ⊂ Z is Freiman isomorphic to a 3-dimensional set in Z^3, then ‖F_A‖_1 is considerably larger than log|A|.

Theorem 1.3. Let A ⊂ Z^3 be finite. Suppose that A consists of at least p planar slices each in turn consisting of at least r rows of size at least s. If B ⊂ Z is Freiman isomorphic of degree k to A, then

‖F_B‖_1 ≥ (log p log r log s)^{1/2−ε} for all ε > 0 and sufficiently large A,

provided that k = 62 log r log s log p.

A helpful, if imprecise, way to rephrase the above is that ‖F_B‖_1 ≥ log^{3/2−ε}|B| for all ε > 0 whenever B ⊂ Z is isomorphic to a genuinely 3-dimensional set in Z^3 and is sufficiently large. As a consequence we see that any sufficiently large set A where ‖F_A‖_1 ≤ C log|A| cannot have this particular 3-dimensional structure.

The lower bound in Theorem 1.3 is probably not best possible. Moreover, one suspects that the conclusion holds for smaller values of k. It is furthermore likely that if A is Freiman isomorphic to a 2-dimensional set in Z^2, then ‖F_A‖_1 ≥ log^{1+η}|A| for some absolute 0 < η ≤ 1. The method we present is not strong enough to prove this.

The remaining sections are organised as follows. In Sect. 2 we prove a lemma that is central to the proof of both theorems. The lemma is a generalisation of a method developed by P.J. Cohen [2] to tackle Littlewood's conjecture, which was later refined by H. Davenport [3] and S.K. Pichorides [9]. In Sect. 3 we prove Theorem 1.2. In Sect. 4 we prove Theorem 1.3. Finally, in Sect. 5 we discuss how an inverse result for Theorem 1.1 may look and compare the suggested structure with that which comes out of Theorem 1.3.

A method of Cohen, Davenport and Pichorides

To prove Theorem 1.2 and Theorem 1.3 we will rely on a combination of techniques developed to tackle Littlewood's conjecture by Cohen [2], Davenport [3], Pichorides [9] and McGehee, Pigno & Smith [8]. The four aforementioned papers on the Littlewood conjecture concentrate on constructing a test function g that satisfies two properties: ‖g‖_∞ ≤ 1 and ⟨g, F_A⟩ ≥ log^α |A| for some absolute constant α. This immediately gives

‖F_A‖_1 ≥ ⟨g, F_A⟩ ≥ log^α |A|.

Our strategy to prove Theorem 1.2 is as follows. For simplicity let us assume that A consists of r rows A_1, ..., A_r of size at least s, where A_i ⊂ {(x, n_i) : x ∈ Z} for some integers n_1, ..., n_r. Let Φ_{n_i} be the McGehee-Pigno-Smith test function for the exponential sum F_{A_i}, that is, the function constructed by McGehee, Pigno and Smith that satisfies the two properties listed above for α = 1. We will combine these to produce a better test function for A. This will be done by mirroring the method of Cohen, Davenport and Pichorides.

Cohen combined the exponentials {e(nx) : n ∈ A} and obtained a test function which yields the value α = 1/8 − ε for all ε > 0.
Davenport improved this to α = 1/4 − ε and Pichorides to α = 1/2 − ε. The three arguments are rather similar. A closer look at the underlying method reveals that one can get the same result even when relaxing the most commonly used properties of exponentials to:

• |e(nx)| ≤ 1 for all n and x.

Our strategy is to replace the exponentials in the existing proofs by the Φ_{n_i}, which satisfy the first condition. The support of the Fourier transform of Φ_{n_i} lies in the line that contains A_i and therefore the Φ_{n_i} also satisfy the following new versions of the latter two conditions.

• Let k and l be positive integers.

As we will shortly see, every step can still be carried out and we thus obtain Theorem 1.2. One way to describe this process is to say that we employ the McGehee-Pigno-Smith method in one dimension and the Cohen-Davenport-Pichorides method in the other.

The Cohen-Davenport-Pichorides method is applicable when one considers Freiman isomorphisms. We will thus employ it in all three dimensions to prove Theorem 1.3. The details can be found in the two upcoming sections.

We begin with a technical result that is the main building block of the two proofs.

Lemma 2.1. Let R and d be positive integers, K a positive real number and F : T^d → C an integrable function. Suppose there are positive integers n_1, ..., n_R and a collection of integrable functions Φ_{n_1}, ..., Φ_{n_R} satisfying conditions (A), (B) and (C). Then there is a test function g such that

(i) ‖g‖_∞ ≤ 1;

(ii) g is a linear combination of functions of the form Φ_{n_{i_0}} ∏_{j=1}^{α} Φ_{n_{i_j}} conj(Φ_{n_{l_j}}) with α ≤ k;

(iii) ⟨g, F⟩ ≥ cK (log R / log log R)^{1/2}.

In particular (i) and (iii) imply

‖F‖_1 ≥ cK (log R / log log R)^{1/2}.

The reader can think of the Φ_n as exponentials in order to gain some intuition. We will need two lemmata. The first is Lemma 1 of [9].

Lemma 2.3 (Davenport-Pichorides). Let E and S be sets of positive integers. For p ∈ S let N(p) be the number of elements of E that are greater than p. Let t be a positive integer and suppose that t satisfies the counting condition below; then suitable m_1, ..., m_t can be chosen. Furthermore m_α = n_{q(α)}, where q(α) ≤ α 4^{Σ_{p∈S} N(p)}.

We now turn to proving Lemma 2.1.

Proof of Lemma 2.1. The proof is based on iteration. We will construct functions g_1, g_2, ... that satisfy (i) and modified versions of (ii) and (iii), denoted (ii′) and (iii′), involving damping factors of the form (1 − 1/t)^n for some t ≥ 100 to be chosen later. We set g_1 = Φ_{n_1}, which satisfies (i), (ii′) and (iii′) as the sum is empty. We now inductively define g_{i+1}, for some m_1, ..., m_t carefully chosen from {n_1, ..., n_R} in such a way that the inner product of the middle part with F is zero. For the time being we assume this can be done. We need to check that g_{i+1} satisfies (i), (ii′) and (iii′).

For (i) we apply Lemma 2.2 to a suitable auxiliary function defined for each v. We observe that the conditions of Lemma 2.2 are satisfied and so ‖g_{i+1}‖_∞ ≤ 1, the last inequality coming from Lemma 2.2. Thus g_{i+1} satisfies (i). By definition g_{i+1} satisfies (ii′) and so we are left with (iii′), which follows from our assumption on the middle part of g_{i+1}. Once n becomes considerably bigger than t, the terms (1 − 1/t)^n ≤ exp(−n/t) become exponentially small and so add very little to the sum. We therefore iterate the process only t times and set g = g_t. It follows that the k appearing in (ii) can be taken to be 2t, subject only to being able to repeat the iteration t times.

Our final task then becomes to prove that the m_i can indeed be chosen t times, and to get the largest possible value of t. This will be done by applying Lemma 2.3. We start by labelling m_1^{(i)}, ..., m_t^{(i)} the elements of {n_1, ..., n_R}
chosen in the ith iteration, and recursively define sets S_0, S_1, S_2, .... Applying Lemma 2.3 with S = S_{i−1} we see that the m_j^{(i)} can be chosen provided a counting condition holds; the sum on its left hand side is estimated using the final conclusion of Lemma 2.3.

The proof of Theorem 1.2 then follows along the lines described in Sect. 2, by applying Lemma 2.1 to the McGehee-Pigno-Smith test functions Φ_{n_1}, Φ_{n_2}, ..., where A_1, A_2, ... are the rows of A.

Multidimensional sets in Z

We repeat the same process to prove Theorem 1.3. We can no longer use the McGehee-Pigno-Smith test functions, as their support is both very large and very difficult to analyse; it is furthermore unlikely that condition (C) in Lemma 2.1 holds for them. Instead we use the Cohen-Davenport-Pichorides test functions, which are Freiman-isomorphism friendly because of conclusion (ii) in Lemma 2.1. In what follows, for a set of integers S and a positive integer α we write αS = {s_1 + ⋯ + s_α : s_1, ..., s_α ∈ S}.

Proof of Theorem 1.3. Translate A if necessary so that all three coordinates of its elements are positive. Let θ be the Freiman isomorphism between A and B and e_1, e_2, e_3 the standard basis of Z^3. Suppose that A_1, A_2, ... are the planar slices of A. For any i let a_i be the integer such that A_i ⊂ {u ∈ Z^3 : u · e_3 = a_i}. We construct a test function for F_B = F_{θ(A)} by three successive applications of Lemma 2.1.

We begin by applying Lemma 2.1 to get a test function for F_{θ(A_{ij})} for all pairs of indices {i, j} for which the row A_{ij} of A_i is non-empty. Let b_{ij}^{(1)}, b_{ij}^{(2)}, ... be the elements of θ(A_{ij}). We set n_l = b_{ij}^{(l)} and Φ_{n_l} = e(b_{ij}^{(l)} x) in Lemma 2.1. The Φ_{n_l} satisfy conditions (A), (B) with K = 1 and (C). Applying Lemma 2.1 we get a test function f_{ij} which satisfies ‖f_{ij}‖_∞ ≤ 1 and ⟨f_{ij}, F_{θ(A_{ij})}⟩ ≥ c log^{1/2−ε} s. Any additive relation that could make the relevant inner products non-zero pulls back to A, as θ is a Freiman isomorphism of degree k and α ≤ k. This is impossible, as the right hand side is supported on the line {u ∈ Z^3 : u · e_3 = a_i, u · e_2 = b_{ij}}, while the left hand side is not. Hence the inner product vanishes.

Next we combine the f_{ij} to get a test function for F_{θ(A_i)}. We set n_j = b_{ij} and Φ_{n_j} = f_{ij} in Lemma 2.1. The f_{ij} satisfy condition (A) and, as we saw above, (B) with K ≥ c log^{1/2−ε} s. To check condition (C), note that the Fourier transform of a product of the form appearing in conclusion (ii) is supported on (α + 1)θ(A_i) − αθ(A_i) for α ≤ 2 log s. Thus the inner product with F_{θ(A_i)} is zero unless θ(A_i) intersects the above sum-difference set. Note that l ≤ 2 log r and that θ is a Freiman isomorphism of sufficiently large degree for this to happen only when the sum in question equals some b_j. By Lemma 2.1 we get a test function f_i that satisfies ‖f_i‖_∞ ≤ 1 and ⟨f_i, F_{θ(A_i)}⟩ ≥ c (log s log r)^{1/2−ε}. The support of the Fourier transform of f_i lies in (γ + 1)θ(A_i) − γθ(A_i) for some γ ≤ 12 log r log s: the support of f_{ij} lies in (α + 1)θ(A_i) − αθ(A_i) for α ≤ 2 log s, we have to consider expressions involving at most l of these factors, and so γ can be taken to be 2lα + l + α. Any relation making the corresponding inner products non-zero is impossible, as the right hand side lies on the plane {u ∈ Z^3 : u · e_3 = a_i}, while the left hand side does not. Hence the inner product vanishes.

Finally we combine the f_i to get a test function for F_{θ(A)}. We let n_i = a_i and Φ_{n_i} = f_i. The f_i satisfy condition (A) and, as we saw above, (B) with K ≥ c (log s log r)^{1/2−ε} in the statement of Lemma 2.1. To check condition (C), note that the Fourier transform of a product of the form appearing in conclusion (ii) is supported on a sum-difference set with parameter γ ≤ 12 log s log r. The inner product with F_{θ(A)} is zero unless θ(A) intersects the above sum-difference set, which is a subset of (δ + 1)θ(A) − δθ(A) for δ = 2lγ + l + γ ≤ 62 log p log r log s. θ is a Freiman isomorphism of degree k ≥ δ, so this happens only if the corresponding relation already holds in A.

Remark. One can extend this result to higher dimensions.

Additive structure when ‖F_A‖_1 is small

In this final section we discuss the following question. Suppose ‖F_A‖_1 ≤ C log|A| for A ⊂ Z. Is there a particular structure A must have? We suggest a plausible structure and compare it with that implied by Theorem 1.3.

Determining the precise value of ‖F_A‖_1 for a given A is hard. The Cauchy-Schwarz inequality shows that the L^1-norm is certainly bounded above by the L^2-norm, ‖F_A‖_2 = |A|^{1/2}. This order of magnitude is attained when A is the lacunary sequence {2^i : 1 ≤ i ≤ N}. By an averaging argument one gets much denser random subsets of {1, 2, ..., N} with ‖F_A‖_1 ≥ cN^{1/2}. In general, sets with random-like properties are expected to give rise to exponential sums with large L^1-norm. For example, if A is the set of the first N primes, then ‖F_A‖_1 ≥ N^{1/2−ε} for all ε > 0 [10], and if A is the intersection of the support of the Möbius function with
{1, 2, ..., N}, then the L^1-norm of F_A is likewise at least a power of N.

At the other end of the spectrum we have structured sets. If A is the union of k arithmetic progressions P_1, ..., P_k, then by the triangle inequality

‖F_A‖_1 ≤ Σ_{i=1}^{k} ‖F_{P_i}‖_1 ≤ Ck log|A|.

Note however that not the whole of A needs to be structured. We can for example remove a subset X with C log^2 N elements from {1, 2, ..., N} and still have

‖F_A‖_1 ≤ ‖F_{{1,...,N}}‖_1 + ‖F_X‖_2 ≤ C log N.

One can instead add a much larger set X. For example X can be a 2-dimensional arithmetic progression disjoint from {1, ..., N}. If X is Freiman 2-isomorphic to {1, ..., L} × {1, ..., L}, where L = exp(log^{1/2} N), then ‖F_X‖_1 ≤ C log^2 L = C log N, and so the L^1-norm remains of order log N.

Establishing a concrete relation between ‖F_A‖_1 and the additive structure of A has not been possible so far. Even the simplest inverse theorem for sets A where ‖F_A‖_1 is close to being minimal has been elusive. The following question arose in conversations with B.J. Green and is in accordance with a theorem of Green and T. Sanders on idempotent measures [4].

Question 5.1. Does there exist an absolute constant 1/2 ≤ η < 1 and a function g : R^+ → R^+ with the following property? Let A ⊂ Z be a finite set and K a positive constant. Suppose ‖F_A‖_1 ≤ K log|A|. Then there exists a set X ⊂ Z of size at most exp(‖F_A‖_1^η), g(K) arithmetic progressions P_1, ..., P_{g(K)} and ε_1, ..., ε_{g(K)} ∈ {+1, −1} such that the indicator function of A coincides with Σ_j ε_j 1_{P_j} outside X.

The range of η comes from the example discussed above and Theorem 1.1. Taking A to be a 2-dimensional arithmetic progression Freiman 2-isomorphic to {1, ..., N} × {1, ..., N} suggests that g(K) has to be exponential in K.

The results in this paper point in a slightly different direction. We have established that no sufficiently large set of integers A whose exponential sum has L^1-norm at most C log|A| can be Freiman isomorphic to a genuinely three-dimensional set in Z^3. This puts a constraint on sets where ‖F_A‖_1 is close to being minimal. Unfortunately it is not the case that such sets mainly consist of a few long arithmetic progressions and a small set. The notion of dimensionality we have relied on is too restrictive to lead to such a conclusion. Take for example the lacunary sequence A = {x_i = 2^i : 1 ≤ i ≤ N}. Its elements satisfy the recurrence relation x_{i+1} = x_i + 2(x_i − x_{i−1}). It follows that its image under a Freiman isomorphism θ of degree 3 also satisfies this relation. The y-coordinate of the elements of θ(A) is either constant (when θ(x_1) · e_2 = θ(x_2) · e_2) or distinct for all i. In other words either θ(A) is contained in a single row or it consists of |A| singleton rows. In either case θ(A) is not a genuinely 3-dimensional set. Yet any subset Y ⊂ A cannot be decomposed into fewer than |Y|/2 arithmetic progressions, as A contains at most two consecutive elements of any arithmetic progression.

Lacunary sequences are very sparse, but the situation doesn't change when we consider dense sets, as the following example demonstrates. Let L be a large integer and P the first prime such that Σ_{p ∈ P} p^{-1} ≥ 1/2, where P is the set of primes between L and P. Now let A = ∪_{p ∈ P} A_p, where A_p consists of all numbers in {1, ..., N} that are congruent to 1 mod p. A has large density in {1, ..., N}. To check this, observe that |A_p| = N/p and |A_p ∩ A_q| = N/pq. Hence, by inclusion-exclusion,

|A| ≥ Σ_{p ∈ P} |A_p| − Σ_{p<q} |A_p ∩ A_q| ≥ N ( Σ_{p ∈ P} 1/p − (Σ_{p ∈ P} 1/p)^2 / 2 ),

which in turn implies that |A| ≥ cN.

Next we consider the image of A under a Freiman isomorphism of degree two. Freiman isomorphisms map arithmetic progressions in Z into lines in Z^3 and hence θ(A) must be supported on a collection of lines {θ(A_p) : p ∈ P}. For every pair of indices p ≠ q, |θ(A_p) ∩ θ(A_q)| = N/pq > 2 and so the two lines must in fact be identical. Thus the image of A under any Freiman isomorphism lies in a single line in Z^3. As a consequence θ(A) either lies in a single row or in |A| different rows.
Implementation of an acute DVT ambulatory care pathway in a large urban centre: current challenges and future opportunities

Background: Ambulatory management of isolated acute deep venous thrombosis (DVT) is the recommended standard of care in selected populations. However, in practice a significant number of patients continue to be managed as in-patients.

Objectives: In this study we aimed to evaluate acute DVT treatment pathways in our emergency department (ED) in practice and to identify barriers to outpatient management.

Methods: This study was a cross-sectional analysis of prospectively collected data pertaining to consecutive patients presenting to the ED of a large, city-centre, academic teaching hospital over a 46-week period who were diagnosed with DVT.

Results: Implementation of an outpatient care pathway led to the majority of patients presenting with DVT in our institution being treated without hospital admission. Forty percent (31/78) of patients with DVT were treated with a direct oral anticoagulant (DOAC) as an outpatient, in line with international best practice guidelines.

Conclusion: The study provides a clear picture of the clinical profile and management of patients in clinical practice. Due to the lack of resources and supported infrastructure it is difficult to implement outpatient venous thromboembolism (VTE) management effectively to its full potential. Directing resources towards strategies which facilitate outpatient DVT treatment among vulnerable patient groups could represent a means of reducing hospital admissions for DVT in urban centres. Our study highlights the successes and clinical limitations of the outpatient treatment model, which should become standard as part of wider VTE care.

Introduction

Venous thromboembolism (VTE) comprises deep vein thrombosis (DVT) and pulmonary embolism (PE) and is a major contributor to the global disease burden, affecting millions of individuals worldwide every year [1,2]. The incidence of DVT in Europe has recently been reported as 70-149 cases per 100,000 person-years [3]. The majority of cases are diagnosed, and initial treatment is commenced, in the emergency department (ED) [4,5]. It has been established that ambulatory management of isolated DVT, without inpatient admission, is safe and feasible in appropriately selected populations [6-9]. Currently published data suggest that ambulatory care with low molecular weight heparin (LMWH) is safe for selected patients with acute DVT [7]. In recent years, direct oral anticoagulants (DOACs) have been compared with warfarin in randomized phase 3 trials and are now suggested as first-line treatment for VTE over vitamin K antagonists for most patients [10]. Moreover, DOACs are associated with additional benefits, in particular for patient management in the outpatient setting, including the lack of a requirement for monitoring of anticoagulant effect and a lower risk of major haemorrhage, including intracranial haemorrhage [11,12]. International guidelines from the European Society of Cardiology recommend the application of specific clinical criteria to identify patients with DVT suitable for outpatient or ambulatory management [13]. However, despite this available evidence, in practice a significant number of patients (> 50%) continue to be managed as inpatients, including those potentially suitable for out-of-hospital management [4,12,14].
In this study we aimed to evaluate acute DVT treatment pathways in our ED, to examine the implementation of these guidelines in practice and to identify barriers to outpatient management.

Methods

This study was a cross-sectional analysis of prospectively collected data pertaining to consecutive patients presenting to the emergency department (ED) of a large, city-centre, academic teaching hospital in Dublin city centre (Mater Misericordiae University Hospital; MMUH) between October 2014 and September 2015 who were diagnosed with DVT. Patients presenting to the MMUH ED and diagnosed with DVT are managed according to a structured care pathway (under the governance of the multidisciplinary MMUH VTE Working Group). This pathway provides guidance for the selection of patients suitable for outpatient DVT management (a schematic sketch of this triage logic is given after this section). Clinical pre-test probability was assessed using the two-level modified Wells score. The Wells score assigns points to clinical variables including active cancer treatment, paralysis, recent surgery, and calf swelling and tenderness in order to predict the likelihood of DVT [13]. In cases where DVT was unlikely (modified Wells score ≤ 1), D-dimer testing was carried out and a negative result ruled out DVT. Patients with 'likely' DVT (modified Wells score > 1) proceeded directly to imaging with compression ultrasonography (CUS). Patients with a confirmed diagnosis of DVT were managed as outpatients provided they did not meet the exclusion criteria listed in local guidelines: alcohol dependence; signs or symptoms suggestive of pulmonary embolus (PE) or confirmed PE; age < 18; patients already on anticoagulation at the time of diagnosis; pregnancy; significant issues with compliance, cognition, mobility or communication; active or significant risk of bleeding; or comorbidities requiring medical or surgical admission (which at the time of the study included active malignancy, severe liver and renal impairment, and bleeding disorders). Suitable patients preferring a long-term once-daily option rather than twice-daily therapy were prescribed rivaroxaban 15 mg twice daily for three weeks, followed by 20 mg once daily, for three months' total duration unless an indication to continue therapy existed upon review, including patients with unprovoked DVT or a persisting provoking factor. All other patients were prescribed therapeutic-dose LMWH; tinzaparin 175 units/kilogram once daily was the LMWH of choice at our centre. Patients were reviewed by the VTE clinical nurse specialist on the same day where possible. Patients for whom a clear date of discontinuation after three months was not suitable were followed up in the MMUH thrombosis clinic.

In this study, data were prospectively recorded at the time of diagnosis via the ED in a local hospital database. Variables recorded were age, Wells pre-test probability score, D-dimer, compression ultrasound result, treatment prescribed, presence of provoking factors, computed tomography pulmonary angiogram (CTPA) result (if also requested based upon symptomatology), previous VTE history, co-medications, early bleeding complications (specifically evaluated at 3 weeks post-diagnosis) and whether outpatient management was feasible. Follow-up for sonographic evidence of residual thrombosis was not documented as part of this study. Patients treated as per the outpatient VTE pathway were contacted by telephone by the VTE clinical nurse specialist at three days and three weeks post-DVT to assess for early bleeding complications.
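To make the selection logic described above easier to follow, here is a minimal illustrative sketch of the triage pathway in Python. It is not the hospital's actual software: the function and label names are invented, and the threshold and exclusion criteria are transcribed directly from the text above.

```python
# Illustrative sketch of the DVT ambulatory-care triage described above.
# Not clinical software: names are invented; criteria mirror the text.

UNLIKELY_WELLS_MAX = 1  # two-level modified Wells score threshold

EXCLUSION_CRITERIA = {
    "alcohol_dependence", "suspected_or_confirmed_pe", "age_under_18",
    "already_anticoagulated", "pregnancy",
    "compliance_cognition_mobility_or_communication_issues",
    "active_or_significant_bleeding_risk", "comorbidity_requiring_admission",
}

def triage(wells_score, d_dimer_negative=None, cus_positive=None, exclusions=frozenset()):
    """Return a management decision for a patient with suspected DVT."""
    if wells_score <= UNLIKELY_WELLS_MAX:
        # 'Unlikely' DVT: D-dimer first; a negative result rules DVT out.
        if d_dimer_negative:
            return "DVT ruled out (negative D-dimer)"
        if not cus_positive:
            return "DVT ruled out (negative ultrasound)"
    else:
        # 'Likely' DVT: proceed directly to compression ultrasonography.
        if not cus_positive:
            return "DVT ruled out (negative ultrasound)"
    # Confirmed DVT: outpatient pathway unless any exclusion criterion applies.
    if exclusions & EXCLUSION_CRITERIA:
        return "admit for inpatient treatment"
    return "outpatient anticoagulation (rivaroxaban or tinzaparin)"

print(triage(wells_score=2, cus_positive=True))  # -> outpatient anticoagulation
```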
This group of patients was scheduled for follow-up at the coagulation outpatient clinic with the consultant haematologist three months post-DVT, at which time they were specifically asked about bleeding. There was no standardised methodology for data collection regarding early bleeding complications for patients who were admitted to hospital for treatment; the study relied on patients self-reporting episodes of bleeding in this cohort during the three-month follow-up. We used Microsoft Excel for data entry and descriptive statistics to analyse our data. The categorical variables were reported as counts and percentages for our groups of interest: inpatients, outpatients and the total group. For the continuous variables, we calculated the mean accompanied by the standard deviation (SD) and the median accompanied by the interquartile range (IQR) for each group.

Results

Fifty-one thousand five hundred and forty-four patients presented to the MMUH ED during the study period. Of these, 400 patients were investigated for DVT by the VTE clinical nurse specialist, and 78 (19%) had a confirmed diagnosis of DVT on ultrasound. Of the total number of those investigated, the diagnosis was ruled out with the Wells score and D-dimer in 63 (16%), while 259 (65%) had negative sonography. In 54% (42/78) of cases, there was documentation of the clinical pre-test probability using the two-level modified Wells score. In cases with documentation of the Wells score, 19% (8/42) had an unlikely probability and 81% (34/42) a likely probability of DVT. The Wells score was not documented in 46% of patients at presentation (36/78). For 58% (21/36) of these patients this was an appropriate omission of the score, as the individuals were post-partum or persons who inject drugs (PWID). 58% (45/78) of patients had a D-dimer level measured in the ED. In 51% (17/33) of patients without a recorded D-dimer, the test was not indicated due to intravenous drug use. A D-dimer was ordered erroneously in three patients who presented post-partum. Two (3%; 2/78) patients presented with upper limb DVT, with the remainder of DVTs presenting at the level of the external iliac vein and below. 19% (15/78) of patients with DVTs had a coexisting PE at the time of presentation. Sixty-eight percent (53/78) of patients were managed as outpatients while 32% (25/78) were admitted for treatment. 58% (31/53) of those treated as outpatients were treated with rivaroxaban, while the remaining 42% (22/53) received LMWH. Bleeding complications were recorded in 9% (5/53) of those treated as outpatients versus 4% (1/25) of patients initially treated as inpatients. Reported bleeding complications included positive faecal occult blood, haemoptysis, haematuria, menorrhagia and bleeding from a groin abscess. There were a total of six cases of clinically relevant non-major bleeding, with no episodes of major bleeding recorded in either group, assessed according to the International Society on Thrombosis and Haemostasis definitions [15]. In accordance with the outpatient DVT protocol, patients enrolled in the outpatient pathway and commenced on rivaroxaban received a follow-up phone call at three days and three weeks from the VTE clinical nurse specialist (CNS) to assess bleeding complications. 47% (37/78) of patients in our study were followed up in this way. There were insufficient resources to formally follow up every patient included in the study with CNS-led phone calls.
Patients admitted to hospital for initial therapy were followed up by the admitting medical team, the outcomes of which were outside the scope of our data collection in certain cases. Patients anticoagulated with LMWH had contact with healthcare professionals in the majority of cases, so follow-up phone calls were not prioritised for this group for pragmatic reasons.

Discussion

Implementation of an outpatient care pathway led to the majority of patients presenting with DVT in our institution being treated without hospital admission. Sixty-eight percent of all patients were treated as outpatients during the study period. Forty percent of patients with DVT were treated with a DOAC as an outpatient, in line with international best practice guidelines. A further 28% were treated in the community with LMWH; 77% of those were PWID. There is a high prevalence of intravenous drug use among patients within the catchment of our inner-city, urban centre. LMWH is currently recommended for pragmatic reasons for these patients, as most Dublin drug treatment centres are familiar and most comfortable with direct administration of LMWH for these clients. With the development of pathways and infrastructure in inclusion health in Dublin city centre, this may change in the future. Inpatients were treated with LMWH (72%) or a DOAC (16%), with the remaining three cases undocumented.

The most prominent barrier to ambulatory management in our cohort appears to be DVT with associated PE. Consequently, the development of safe pathways for outpatient management of PE has been prioritised by the hospital VTE Committee. The implementation of validated risk assessment tools in the ED may identify low-risk patients with PE suitable for outpatient management. The introduction of eligibility criteria, and coordination with the outpatient department to ensure appropriate interval follow-up for this cohort, could facilitate safe outpatient management of specific patients presenting with PE [16]. Almost 20% of patients presented with DVT and associated PE, similar to figures cited in the international literature [11].

While the prevalence of intravenous drug use was stable between the inpatient and outpatient groups (40% versus 42%), concomitant medical and psychosocial issues were documented in the admitted group. The development of inclusion health pathways, in collaboration with medical social workers dedicated to the care and management of patients with social challenges including homelessness and drug and alcohol addictions, has been prioritised as a key outcome of this study. In order to meet this objective, the hospital VTE Committee will also work closely with colleagues in primary care who provide dedicated support to these vulnerable patients, and with colleagues in other Dublin city centre hospitals, as patients with inclusion health needs frequently attend several hospitals due to their circumstances [17,18].

In 34% of cases presenting to the ED where a Wells score was indicated (excluding PWID and post-partum cases), the record of the score was inappropriately omitted from the patient documents. This is a crucial step in the evaluation of clinically suspected DVT and guides further investigation. This reveals non-compliance with the recommended structured care pathway for DVT in the ED and is not in line with best practice as outlined by international guidelines [13]. Patients with a complete DVT work-up, including documentation of the Wells score, were more likely to be treated as outpatients.
While D-dimer was measured in 58% of cases, deviation from clinical practice guidelines occurred in only one case, when the test was inappropriately requested despite a documented high clinical probability of DVT as established by the Wells score [13]. The presence of a provoking factor and an unusual site of DVT, such as upper limb thrombi, were both factors associated with admission to hospital.

The data in this study were collected consecutively and prospectively. All radiologically proven cases of DVT were recorded in the data set by the research team. This approach aimed to minimise bias due to low response and to prevent misclassification due to recall bias, as can be seen in cross-sectional studies. The main limitation of the study was its observational design. Physicians in the ED were unaware that data collection was taking place and therefore did not clearly document the clinical criteria supporting admission. Patients with suspected DVT were seen directly by the VTE CNS during office hours under the supervision of the Emergency Medicine (EM) consultant staff, but outside of these hours patients were assessed by junior and senior EM doctors, which likely accounts for variance in assessment and documentation. Contraindications to outpatient management were inferred from studying each patient's clinical data in the context of the structured care pathway. This study reports the prevalence of DVT presenting to the ED and describes the subsequent management pathway; however, it is not possible to fully elucidate the patient factors impacting clinical decision making and treatment choice. Cause-and-effect relationships and associations are difficult to interpret. There were missing data in our study: incomplete data entry was noted most particularly when recording early bleeding complications, and accurate records of follow-up complications are not available to us for all cases. This study did not set out primarily to assess bleeding complications. We are limited in our ability to comment on bleeding complications between the two treatment pathways and between the different anticoagulant agents due to the lack of standardisation in our methods for assessing bleeding complications between the two groups. However, despite these limitations, the study provides a clear picture of the clinical profile and management of patients in clinical practice.

Conclusion

The results of this study are relevant throughout the world today. Clinical trials have long since established that outpatient DVT management is safe and effective, and there is now an emerging evidence base for the outpatient treatment of pulmonary embolism [19]. However, despite the progress in the field of research, it is clear from these data that there is scope for further development of the outpatient DVT pathway in clinical practice. The findings of this study are consistent with international data [4,12,14]. Directing resources towards strategies which facilitate outpatient DVT treatment among vulnerable patient groups could represent a means of reducing hospital admissions for DVT in urban centres and, ultimately, lead to healthcare savings. The majority of hospitals in Ireland do not have a permanent VTE CNS in the ED. Due to the lack of resources and supported infrastructure it is difficult to implement outpatient VTE management effectively to its full potential. Our study highlights the success of this model, which should become standard as part of wider VTE care in Ireland.
Inability of a graph neural network heuristic to outperform greedy algorithms in solving combinatorial optimization problems like Max-Cut

In Nature Machine Intelligence 4, 367 (2022), Schuetz et al provide a scheme to employ graph neural networks (GNN) as a heuristic to solve a variety of classical, NP-hard combinatorial optimization problems. It describes how the network is trained on sample instances, and the resulting GNN heuristic is evaluated applying widely used techniques to determine its ability to succeed. Clearly, the idea of harnessing the powerful abilities of such networks to "learn" the intricacies of complex, multimodal energy landscapes in such a hands-off approach seems enticing. And based on the observed performance, the heuristic promises to be highly scalable, with a computational cost linear in the input size n, although there is likely a significant overhead in the pre-factor due to the GNN itself. However, closer inspection shows that the reported results for this GNN are only minutely better than those for gradient descent and get outperformed by a greedy algorithm, for example, for Max-Cut. The discussion also highlights what I believe are some common misconceptions in the evaluations of heuristics.

In Ref. [1], Schuetz et al provide a scheme to employ graph neural networks (GNN) as a heuristic to solve a variety of classical, NP-hard combinatorial optimization problems. It describes how the network is trained on sample instances, and the resulting GNN heuristic is evaluated applying widely used techniques to determine its ability to succeed. Clearly, the idea of harnessing the powerful abilities of such networks to "learn" the intricacies of complex, multimodal energy landscapes in such a hands-off approach seems enticing. And based on the observed performance, the heuristic promises to be highly scalable, with a computational cost linear in the input size n, although there is likely a significant overhead in the pre-factor due to the GNN itself. However, closer inspection shows that the reported results for this GNN are only minutely better than those for gradient descent and get outperformed by a greedy algorithm, for example, for Max-Cut. The discussion also highlights what I believe are some common misconceptions in the evaluations of heuristics.

Among the variety of QUBO problems that Ref. [1] considers in its numerical evaluation of the GNN, I want to focus the discussion here on Max-Cut. As explained in the context of Eq. (7), it is derived from an Ising spin-glass Hamiltonian on a d-regular random graph [2] for d = 3. (In the physics literature, for historical reasons such a graph is often referred to as a Bethe lattice [3,4].) Minimizing the energy of the Hamiltonian, H, maximizes the cut size, cut = −H. The cut results for the GNN (for both d = 3 and 5) are presented in Fig. 4 of Ref. [1], where they find cut ∼ γ_3 n with γ_3 ≈ 1.28 via an asymptotic fit to the GNN data obtained from averaging over randomly generated instances of the problem for a progression of different problem sizes n. In Fig. 1(a) here, I have recreated their Fig. 4, based on the value of γ_3 reported for the GNN (blue line). As in Ref. [1], I have also included what they describe as a rigorous upper bound, cut_ub (black-dashed line), which derives from an exact result obtained for d = ∞ [5]. While the GNN results appear impressively close to that upper bound, including two other sets of data puts them in a different perspective.
The first set I obtained at significant computational cost (∼ n^3) with another heuristic ("extremal optimization", EO) long ago in Ref. [4] (black circles). The second set is achieved by a simple gradient descent (GD, maroon squares). GD sequentially looks at randomly selected (Boolean) variables x_i among those whose flip (x_i → ¬x_i) will improve the cost function. (Such "unstable" variables are easy to track.) After only ∼ 0.4n such flips, typically no further improvements were possible and GD converged; very scalable and fast (done overnight on a laptop, averaging over 10^3-10^5 instances at each n, up to n = 10^5). Presented in the form of Fig. 1(a), the results all look rather good, although it is already noticeable that the results for GD are barely distinguishable from those of the elaborate GNN heuristic.

To discern further details, it is essential to present the data in a form that, at least, eliminates some of its trivial aspects. For example, as Schuetz et al themselves reference, the ratio cut/n ∼ γ converges to a stable limit with γ ∼ d/4 + P*√(d/4) + o(√d) + o(n^0) for n, d → ∞ [6], where P* = 0.7632... [5]. In fact, for better comparison with Refs. [3,4], we focus on the average ground-state energy density of the Hamiltonian in their Eq. (7) at n = ∞, which is related to γ via

e_3^{n=∞}/√3 = −2(γ_3 − 3/4)/√3.

(The awkward denominator is owed to the fact that P* = lim_{d→∞} e_d/√d. Also, energy provides a fair reference point to assess relative error, because a purely random assignment of variables results in an energy of zero, the ultimate null model. Such a reference point is lacking for the errors quoted in Tab. 1 of Ref. [1], for example.)

More revealing than merely dividing by n is the transformation of the data into an extrapolation plot [4,7]: since we care about the scalability of the algorithm in the asymptotic limit of large problem sizes n → ∞, which in the form of Fig. 1(a) is out of view, it is expedient to visualize the data plotted against an inverse of the problem size (i.e., 1/n or some power thereof [4,8,9]). Independent of the largest sizes n achieved in the data, this conveniently condenses the asymptotic behavior arbitrarily close to the y-intercept where 1/n → 0, albeit at the cost of sacrificing some data for smaller n. To this end, I propose to plot the data in the finite-size-corrections form,

e_3^n ∼ e_3^{n=∞} + const/n + ..., (n → ∞). (1)

In Fig. 1(b) we have plotted the same data from Fig. 1(a) according to Eq. (1) for d = 3 (modulo a trivial factor of 1/√3 for better comparison with P*). Stark differences between the data sets appear, since each set converges asymptotically to a stable but distinct limit at 1/n = 0. First, we note the addition of a well-known result from replica theory, a one-step replica symmetry-breaking (1-RSB) calculation [3,10] that is expected to yield the actual value for e_3^{n=∞} (and thus γ_3) to a precision of 10^{-4} (green line), a superior reference value to −P* (black-dashed line), which is valid only at d = ∞ although it appears sensible in the form of Fig. 1(a). The 1-RSB value is further corroborated by the fact that the EO data (black circles) from Ref. [4] smoothly extrapolate to the same limit within statistical errors. Finally, in the form of Fig. 1(b), it becomes apparent that the claimed GNN results (blue line) are systematically far (> 15% at any n) from optimal (1-RSB, green line) and hardly provide any improvement over pure gradient descent (GD, maroon squares).
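To make this baseline concrete, below is a minimal sketch of the zero-temperature gradient descent just described, together with the greedy "least-stable spin" extension discussed below, for Max-Cut on a random 3-regular graph. This is my own illustrative code, not taken from Refs. [1] or [4]: all names are invented, the tie-handling and flip budget are arbitrary choices, and a production version would track unstable spins incrementally rather than rescanning the whole graph.

```python
# Illustrative sketch (not from the papers): gradient descent and a greedy
# extension for Max-Cut on a random d-regular graph, with spins in {-1,+1}.
import random
import networkx as nx

def local_field(G, s, i):
    # Sum over neighboring spins; flipping i changes the energy by -2*s[i]*field,
    # so the cut improves exactly when s[i]*field > 0.
    return sum(s[j] for j in G.neighbors(i))

def cut_size(G, s):
    return sum(1 for u, v in G.edges if s[u] != s[v])

def gradient_descent(G, s):
    # Flip randomly chosen strictly improving ("unstable") spins until none remain.
    # (Rescanning is O(n) per flip; an incremental list would make this linear.)
    while True:
        unstable = [i for i in G.nodes if s[i] * local_field(G, s, i) > 0]
        if not unstable:
            return s
        i = random.choice(unstable)
        s[i] = -s[i]

def greedy_search(G, s, budget):
    # Greedy extension: keep flipping one of the least-stable spins (largest
    # s[i]*field, even if no longer strictly unstable) and remember the best
    # configuration seen, letting the search hop between nearby local minima.
    best, best_cut = dict(s), cut_size(G, s)
    for _ in range(budget):
        i = max(G.nodes, key=lambda j: s[j] * local_field(G, s, j))
        s[i] = -s[i]
        c = cut_size(G, s)
        if c > best_cut:
            best, best_cut = dict(s), c
    return best, best_cut

n, d = 1000, 3
G = nx.random_regular_graph(d, n, seed=1)
s = {i: random.choice((-1, 1)) for i in G.nodes}
s = gradient_descent(G, s)
s, cut = greedy_search(G, s, budget=5 * n)
print(cut / n)  # cut density gamma; Ref. [1]'s GNN fit gives gamma_3 ~ 1.28
```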
It appears that the GNN learns what is indeed most typical about the energy landscape: the vast prevalence of high-energy, poor-quality metastable solutions that gradient descent gets trapped in, missing the faint signature of exceedingly rare low-energy minima. In fact, extending GD by a subsequent 5n spin flips, say, each flip adjusting one among the least-stable spins (even if not always unstable), allows this greedy local search to explore several local minima, still at linear cost. The results of that simple algorithm, also shown in Fig. 1(b) (diamonds), already reduce the error to ≈ 6% across all sizes n, a considerable improvement on the GNN results in Ref. [1] and still better than an improved version, GraphSAGE, which the authors mention in their response (orange line).

In conclusion, the study in Ref. [1] exemplifies a number of common shortcomings found in the analysis of optimization heuristics (see also Ref. [7]): (1) reliance on rigorous but rather poor and often meaningless bounds, as provided by the Goemans-Williamson algorithm in this case, instead of using the much more relevant (albeit as-of-yet unproven) results from statistical physics, (2) use of an obscure presentation of the data, (3) lack of state-of-the-art comparisons across different areas of science, and (4) lack of benchmarking against trivial baseline models, such as the gradient descent or greedy search presented here. On such closer inspection, the proposed GNN heuristic does not provide much algorithmic advantage over that baseline. It is likely that these conclusions are not isolated to this specific example but would also hold for Max-Cut at d = 5 and for the other QUBO applications discussed in Ref. [1], as the concurrent comment by Angelini and Ricci-Tersenghi (arXiv:2206.13211) indicates.
Disability Policies in France: Changes and Tensions Between the Category-Based, Universalist and Personalized Approaches

Correspondence: Myriam Winance, 13 rue Jules Lagneau, F-57000 METZ, France. Tel: +33 3 87 38 98 26. Email: winance@vjf.cnrs.fr

Introduction

At an international level, over recent years disability policies have been marked by the gradual emergence of non-discrimination policies. At a European level, in 1997 the Treaty of Amsterdam included a non-discrimination clause in relation to disabled persons. In 1993, the United Nations set out the rules for equal opportunities for disabled persons, and they have just drafted an international agreement on the rights of disabled persons, which will enter into effect as soon as it has been ratified by 20 countries. Such major international orientations are gradually making their mark on what we will call, in this article, the ''social treatment'' of disability, i.e. a set of views, practices and policies relating to disability. For over a century, France, with its welfare state traditions, has had legislation covering victims of work-related accidents and then maimed soldiers, before this legislation was finally extended in 1975 to cover all disabled persons, when two laws were passed: the ''orientation law in favour of disabled persons'' and the law governing social and medico-social institutions. These two laws established a disability policy in France, and created the status of ''handicapped person'', by defining specific rights for such people and by leading to the development of specialist or common-law intervention devices. This entire policy is now undergoing in-depth changes. The 1975 law in favour of disabled persons has been replaced by a new law, of 11 February 2005, entitled the ''law on the equal rights and opportunities, the participation and the citizenship of disabled persons''. This law changes France's policy on disability inasmuch as it establishes new rights for disabled persons, changes the institutional intervention device and asserts the need for everyone to have access to everything. The drafting of the law required major negotiations between all actors involved in the problem: politicians, administrators, associations, scientists, etc. This process officially started in December 2002, and still continues today through the drafting of decrees governing the application of the law and through the implementation of the new device. By bringing all actors together at the negotiating table, this process of revising and organizing a new policy gave rise to fruitful debates on both the conceptions of disability and the different types of ''social treatment'' for disabled persons. The questions under examination were both theoretical and practical: what is disability? Must we, or can we, define disability within the framework of a law? What must or can the relationship be between ''society'' and ''disabled persons''? What might the objectives of a disability policy be? When referring to people who are sometimes very heavily disabled, what do the terms ''citizenship'' and ''social participation'' really mean? How can compensation and non-discrimination be combined? And so on. During these debates, the various actors were obliged to explain their standpoints – often the opposite of those of the other parties – on all of these questions. France's current disability policy is therefore the result of a political and administrative device that is loaded with history and which has evolved under the influence of the international situation.
In this article we will be making a socio-political analysis of France's disability policy. By looking back at the history of disability and at more recent debates surrounding the implementation of the new law, we will show that the field of disability is currently traversed by tensions between different types of treatment and different conceptions: a category-based conception, a universalist conception and an interactive and personalized conception. The first part of the article looks at the historical changes which led to the implementation of the disability policy in France in the second half of the 20th century. We will then describe the emergence of these three conceptions and their related modes of treatment. We will show that they are linked, that they in some way call upon one another, and finally, that they coexist without replacing one another – even if within this evolution we can point out certain major shifts. In the third part, we will examine the tensions caused by this coexistence within the new policy. We will concentrate on two significant examples: the integration into the law of a definition of disability, and the creation – through the law – of a new right, the right to compensation. We will demonstrate that this new disability policy is part of both continuity and severance. It integrates certain aspects acquired through history whilst at the same time making it possible to make changes for the people concerned.

The Historical Context, the Origins of Disability Policy in France

Infirmity as a Condition: between Assistance and Social Control

Throughout the Middle Ages, people with impairments – referred to as ''cripples'' – were considered to be part of a larger group, that of the ''deserving poor'' (Castel 1995, Stone 1984). This label covered anyone who was unable to work – not just the crippled, but the insane, old people, orphans, widows with a large family to provide for, etc. Because it was accepted that they were unable to provide for themselves through work, all these people could legitimately benefit from assistance and charity. There was also a certain hierarchy among the ''deserving poor''. There were two reasons why the crippled had a privileged status. On the one hand, their infirmities made their incapacity to work visibly and undeniably clear and independent of their wills. On the other hand, in the Christian tradition, the suffering poor are exalted because they symbolize Christ's suffering and allow the rich to gain salvation by being charitable. To this first criterion, that which allows the person to benefit from assistance (the recognized inability to work), a second can be added: domiciliation. In France, help was mainly provided by the Catholic Church, via the clergy and religious orders, to people within the parish. These two criteria – inability to work and domiciliation – allowed one to distinguish between those who were helped and those who had to work (Castel 1995). This way of thinking about and organizing assistance was common to all western countries in the Middle Ages. From the 14th century, following changes in work organization and social structure, the question of helping the ''deserving poor'' was examined in the light of another question crucial to the times – that of vagrancy and the mobility of the workforce. In most European countries the number of vagrants was increasing. Vagrants were causing problems in pre-industrial society.
On the one hand, they were physically able to work, but circumstances meant that they were not able to integrate into the traditional work structure. They were therefore forced to move around to find work elsewhere and, while waiting to find work, to beg. On the other hand, they represented a threat to traditional work organization; where or when there was a lack of workers, they could negotiate their salaries or refuse to do the work that was offered. Consequently a negative image of vagrants came about, as they were seen as dangerous and useless. As far as the authorities were concerned, it was no longer a case of simply helping the poor, but also and above all of controlling and repressing vagrants, and preventing them from travelling. For Stone (1984), the roots of the administrative category of ''disability'' are found in the necessity to maintain a clear distinction between a distributive system based on need and a distributive system based on work, and thus in the requirement, from the 14th century onwards, to introduce devices with which to distinguish between vagrants who were legitimate because they were unable to work, and those who were not legitimate and ought to work. (Stone analyses the construction of the disability category by returning to the history of the welfare state in three countries: England, Germany and the USA.) In England, the desire to control led to devices such as letters of authorization to travel (1388), which for able-bodied persons gave the reason for the journey in question and for disabled persons stated the type and duration of the disability; a system of identifying badges for legitimate beggars (1563); and the generalization of workhouses and forced labour (1834). In France, it strengthened the practices of hospitalization and confinement and, more particularly, the development of the ''general hospital''¹.

The French Revolution was the beginning of a gradual shift from the notion of private charity to that of public charity. Under the influence of the philosophers of the Enlightenment, there appeared the idea that assistance was the duty of the State. In 1790, the principles of the ''Comité de Mendicité'' (a committee governing begging) were that assistance was a social duty, that saving was a necessity, and that private charity was to be encouraged. But the first public aid establishments, created in 1796 by the Directory, took the form of charity offices which were communal services under the authority of Prefects. The result was that during the entire 19th century, the duty to assist was exercised at the local level and was therefore optional (Stiker 1999). It was in 1905 that public assistance became a legal obligation in France for the elderly, crippled and incurable. This free assistance was aimed at individuals who were without resources and who could not work (''the deserving poor''). People who could work were expected to resort to personal funds in the form of savings, to protect themselves against insecurity and destitution.

Disability, Damage caused by a Collective Activity, Gives One the Right to Repair

At the end of the 19th century and at the start of the 20th century, two major events led to a change in the help given to the impaired and to the breakdown of the distinction between ''good'' and ''bad'' poor people: on the one hand, industrialization and the resulting impoverishment; on the other hand, the First World War.
The new working conditions which emerged from the successive waves of industrialization made it impossible for the majority of workers to guard against insecurity through individual savings (it was possible to work and still be poor). Furthermore, work accidents, which were common in this context, posed a problem (Ewald 1986). France's Civil Code did not provide for the indemnification of victims of work-related accidents, as the procedure required that the employer be proven to be at fault. Often it was impossible to find any fault attributable to a given individual, or, when it was possible, it was difficult to prove. The worker, victim and often invalid, remained without resources and without the possibility of finding a new job. Proceedings started by workmen, with outcomes that satisfied neither workman nor employer, led to an awareness and to a debate in the national assembly, which in turn led to the law of 9 April 1898 on work-related accidents. The law established the notion of responsibility "for risks" (without there necessarily being fault) and introduced the notion of social repair for the damage caused. The damage, i.e. the disability caused by the accident, refers to the loss of the ability to work. It was no longer seen as the consequence of individual behaviour, an individual's fault, but as the product of a collective activity, and as such it should be dealt with by the community. This law thus broke away from the liberal tradition based on liberty and individual responsibility. By introducing the notion of collective risk, it marked a change in the way of thinking about and organizing social relations. The 1898 law pushed the judgment of individuals and individual responsibility into the background, and introduced the idea of a collective sharing of the costs relating to accidents at work. Henceforth, when the inability to work resulted from a work-related accident, it left the field of aid and charitable assistance. Work-related accidents, as a social risk, opened the road to collective repair, with the employer being obliged to take out insurance to guarantee its solvency. This notion of risk socialization was gradually extended to cover other risks (Rosanvallon 1995). In this way, the First World War extended the logic of risk socialization to include repair for maimed soldiers, by introducing a pension system. As in the case of victims of work-related accidents, the injuries incurred by a soldier on the front resulted from a collective activity. A special regime of social rights was created for maimed soldiers. In 1919, the French Ministry for ex-servicemen introduced an official scale for assessing disabilities, which was used to determine the levels of the pensions paid. Access to the right to a military disability pension guaranteed by the State involved the disabled individual's incapacity being assessed in accordance with this official scale.

From Financial Indemnification for the Damage Done, to Compensation for Impairment through Rehabilitation

Whilst war prolonged the notion of repair, it also changed it. The lack of workers – another consequence of war – led to the emergence of rehabilitation practices. Their objective was, by compensating for disabilities, to enable invalid soldiers to return to work. To make repair, it was not enough to pay an indemnification; it was important also to compensate through physiotherapy and prostheses, and to achieve reintegration through work.
These practices were gradually extended to invalid civilians, i.e. to people who have to live with the consequences of an impairment which cannot be cured. With the growing weight, in the first half of the 20th century, of chronic illnesses and health problems with long-lasting consequences (tuberculosis, and above all poliomyelitis; see Montès 2000, Oshinsky 2005), the number of invalid civilians grew steadily. For these people, professional insertion was an alternative to assistance. There was thus a shift from simple financial indemnification (in relation to an injury) to compensation for disabilities through rehabilitation and physiotherapy, and on to professional reinsertion (Stiker 1999) 2. The social insurance system set up in the first half of the 20th century (Join-Lambert, Bolot-Gittler, Lenoir & Méda 1997) contributed to the institutionalization of rehabilitation practices. In 1945, the creation of the Social Security system was the concretization of the socialization process described above. In its introduction, the founding text (edict of 4 October 1945) clearly states the objective of ensuring that all citizens have the resources with which to live in cases where they are unable to earn such resources through work. Several risks are covered by the social security system – invalidity, illness, maternity, old age, work accidents and professional illness. But the mechanism put in place grew out of the insurance mechanisms developed in the first half of the 20th century, and only covered workers and their families, who were entitled to those rights through the contributions deducted from their salaries. A certain number of people were excluded from this system, and further notions of assistance would gradually be introduced to complete this system of insurance.

Disability as a Social Maladjustment requiring Specialist Intervention

The above history mainly relates to changes in the social treatment of adults with impairments; from the mid-20th century it met a parallel history, one which related to children and which, at that point in time, led to the emergence of the notion of "social maladjustment". This notion, which was later to be replaced by that of disability, explicitly conveys the idea of a difference between the individual and the social norm, a difference which must be reduced through specialist interventions – hence the creation of a specific field. It has its roots in the concern, which appeared during the 19th century, for educating those who had until then been considered impossible to educate, or incurable (Chauvière 2000, Gateaux-Mennecier 2000, Muel 1975, Pinell & Zafiropoulos 1978, Zafiropoulos 1981). The first schools for impaired children were founded in Paris in the 18th century, for deaf children (the Institut des Sourds-Muets was founded by the Abbé de l'Epée in 1760) and for indigent blind children (in 1784, Valentin Haüy opened the first school, which in 1786 was to become the Institution des Enfants-Aveugles; see Ravaud 2006, Weygand 1990). This concern for education was then extended to children considered to be idiots. In 1882, the Jules Ferry law made primary education obligatory for all children between the ages of 6 and 12. But very quickly, the school found it difficult to integrate those referred to as "abnormal children" (Vial 1990, 1991).
These were children who were blind, deaf, idiotic, retarded or with motor deficiencies, but also children who were difficult, unstable, perverted, delinquent, etc. Hence the creation, in 1909, of specialized classes integrated into ordinary schools, and of Ecoles Autonomes de Perfectionnement (independent specialized schools). This was the starting point for a specialist sector, one built at the crossroads of school, asylum and legal institution; its objective, through the creation of a medico-educational system, was to adapt to society all the children who "cluttered up" schools or who, on the contrary, had until then been relegated to asylums, correctional institutions, etc. Paedopsychiatrists were asked to give diagnoses, to orientate the children (now qualified as "maladjusted"), and to develop educational and rehabilitative methods which would eventually allow such children to be put to work. (These experts examined the pupils at the school and separated the maladjusted children from the normal children, using IQ tests.) This sector came to involve a full range of professionals, institutions, establishments and specific, specialist knowledge (Ravaud & Lang 1998).

Institutionalization Practices

The period between the two wars and the period immediately following the end of the Second World War saw the creation of the first associations for disabled persons (the Association des Paralysés de France was founded in 1933) and for children who were "maladjusted" (Barral 2007, in this issue). Their main objective was to take disabled persons out of hospices and psychiatric hospitals and, through appropriate rehabilitation, to allow them to acquire the abilities they would need to reintegrate into society. Thanks to donations and bequests (real estate in particular), and with local political support, these associations were able to set up specialized educational institutions, special residential homes, workshops, holiday centres, etc. The creation of the French health and social security system allowed their continued development. The state chose to delegate the management of such institutions to associations, in return for the introduction of a daily rate financed by the French health and social security system (accommodation and re-education costs were borne by the local community, with the institutions paying for the buildings). At this point, we would like to stress the French specificity of these institutions: for the most part, they were of private initiative and were privately run (in an associative form), but functioned with public funding. From the 1950s, and above all in the 1960s and 1970s, the number of institutions increased considerably, but without any national planning, their creation being the result of negotiations between local associations and local political authorities which were more sensitive to the notion of leaving visible traces of their actions (sometimes qualified as a "stone and mortar policy") than to the question of insertion into mainstream life. This specificity was also to mark the French associative movement: the notion of a "management" association was later to be considered a brake on the rights of the people concerned, with the biggest associations finding themselves in a situation of conflict between their status as employers and their mission to represent disabled people (Barral, Patterson, Stiker & Chauvière 2000).
Towards a Disability Policy

We wish to retain two elements from this history. The first is the emergence of the idea of repair, related to that of invalidity. Invalidity became something that was repaired by indemnification or a pension, and then compensated by prostheses and rehabilitation. An entire specialized sector – medical or medico-social – was gradually developed, with the objective not of curing, but of adapting adults or children to society, of enabling them to achieve integration through work, even if this meant temporarily removing them from society in order to rehabilitate them. The detour via an institution would then allow them to return to society. The second element is the change from assistance to social rights. Until the 19th century, cripples could benefit from assistance even though they were not entitled to claim it: assistance was a moral issue. From the 20th century, following the changes described above, it became an obligation, a matter of legislation. A further step was taken with invalidity, interpreted as a social risk and, as such, covered by social insurance. Through a gradual extension of this logic, different systems for dealing with disabled persons were created in succession, without replacing one another: the system for victims of work-related accidents, the system for maimed soldiers, the system for invalid persons under the French health and social security system, etc. The level of benefits available to individuals, which varies considerably depending on the system, depends on the origin of the disability. But certain persons with impairments (especially impairments present since birth) could not benefit from any of these systems and had to rely on assistance. These two elements constitute the bases for the notion of "handicap" that appeared in the second half of the century, and for the disability policy which was then put in place. The notion of "handicap" was originally used in sport, in horseracing; it designated the additional weight that the faster horses were required to carry in order to give all runners an equal chance of winning. In France, the word then moved over to human beings, to signify a "disadvantage"; finally, in the second half of the century, it coexisted with and ended up being substituted for all the terms used to designate people with disabilities, such as invalids, the maimed and the "maladjusted" (Stiker 1996). The adoption of the term "handicap" reflected a change in the representation and the modes of treatment of people with impairments, a change that was characterized, during the 20th century, by a connection being made between the notions of infirmity, risk and repair. The notion of "handicap" designates a difference from a social norm, itself defined in terms of performance (mainly work-related), the difference being caused by the existence of an impairment. "Handicapped" people are people who differ from the average, from the social norm, who cannot do what average individuals can do (this notion of normality is founded in the statistical theory of Quételet and Galton; see Grue & Heiberg 2006), and who must therefore be adapted or readapted (Ebersold 1997). Use of the term marks the conjunction between the field of adults (maimed soldiers, victims of accidents, invalids) and that of maladjusted childhood, although excluding one child category, that of delinquents.
It appeared for the first time in an official text in 1957 (with regard to "handicapped workers"); it took over as the term used to describe a category covering all persons with an impairment when, in 1975, two laws were passed – the "orientation law in favour of handicapped persons" and the "law relating to social and medico-social institutions". These two laws extended the process of the socialization of responsibilities and the notion of social rights to all persons with an impairment, whatever the origin of that impairment might be. Article 1 of the orientation law thus states the national obligation to integrate "handicapped persons" into society. But these laws achieve this extension by organizing a category-based policy, i.e. by creating a broader category, that of "handicapped persons". They institute the political, legal and administrative mechanism which makes it possible both to statistically define this category and to organize the system that aims to identify its members, to grant them specific aid and to integrate them into society. Indeed, the first law (the orientation law) creates the status of "handicapped person", a status which creates entitlement to certain rights (allocations, aid, etc.). This status covers everyone who, due to an impairment, whatever its origin, cannot integrate into society in a "normal" manner, and who is therefore allowed to benefit from certain rights which facilitate integration. The law provides for the following: it asserts the role of prevention and detection and disabled children's right to education (it makes special education free and provides for compensation which allows families to cover the extra educational costs brought about by the child's disability); as far as employment is concerned, it allocates the quality of "handicapped worker" with a view to facilitating professional insertion into a normal or sheltered environment, and creates the right to guaranteed resources for the worker (this guarantee compensates for the loss in earnings due to the lesser output of people with lower work capacities); it also sets out methods of improving people's social lives. Finally, it creates a specific system of social aid; this system does not remove the existing systems of indemnification (victims of work-related accidents, disability pension under the health insurance system, maimed soldiers) but adds to them, to help those who have no other entitlements (the "allocation for handicapped adults", which derives not from insurance but from assistance). In addition, this system allows disabled persons who do not work to be covered by the French health and social security system. But whilst the orientation law creates the status of "handicapped person", it provides no conceptual definition of disability. What a "handicapped person" is, in practice, is defined and identified by regional commissions (one for adults, one for children) set up by the law. These administrative commissions, made up of an equal number of representatives from the various administrations and the interested parties concerned, have the responsibility of assessing the impairments of individuals. The first assessment is a medical one: a medical certificate and a medical assessment of the impairment constitute the entry ticket into the procedure. The commissions then compare the medical assessment with the official scale 3 in order to define a level of disability; this level then determines what a person is entitled to and the type of cover.
The allocation of a level of disability depends on the medical assessment; for example, a paraplegic person is allocated a disability level of 100%. The second law, passed at the same time, covers the organization of social and medico-social institutions. It defines the missions of these institutions and how they work. It led to the creation and organization of an autonomous medico-social sector, separate from the health sector (Bauduret & Jaeger 2002; see also the special issue of the Vie Sociale review, 2005). The emergence of this law reflected a dual process. On one hand, as we have seen, associations promoted the creation of an ever-increasing number of institutions specializing in the care of disabled persons. On the other hand, the objective of reintegrating society proved difficult to reach, and it has to be said that a temporary stay in an institution often became a permanent one. The expression "segregatory detour" has thus been used to qualify this movement of institutionalization (Ravaud & Stiker 2001). These numerous institutions became places of living relatively apart from society, caring for people who, when it came down to it, were considered unable to adapt to society. From this came the need to introduce legislation to standardize the organization and management of these particular institutions, which are not healthcare institutions but places of living, of work, of learning. These two laws and the resulting mechanisms created a category-based treatment for "handicapped persons". The category is defined in a pragmatic manner: on one hand, through references which state the characteristics of the typical representative of that category; on the other hand, through practices for the assessment and allocation of aid which compare each person, and his or her individual characteristics, to those of the typical representative. Furthermore, this treatment leads to an objectivation of disability, which becomes an inherent characteristic attached to the people in question, defining their identity, their status and their position. Each person is "labelled" as a member or non-member of the category, and this membership gives each person rights. This category-based treatment led to a paradox: the integration of people (a national objective) was achieved by segregating them. This segregation was either implicit – people were integrated into mainstream life through the specific status of handicapped person – or explicit – people were integrated via a specialist sector (Winance 2007). This paradox was increased by the time it took to apply each law: the second law came into force immediately, unlike the first, whose decrees of application came into existence either very slowly or, in some cases, not at all. This category-based treatment (treating individuals as members of a category) also led to spatial separation, in the sense that it led to a distinction between spaces: the space of normality (for able-bodied people, for "the average person") and the space of the specialist sector (qualified as the medico-social sector), the aim being for those who were "outside the normal space" to be able to re-integrate it. But in the 1960s and 1970s, the category-based approach came into conflict with a universalist approach that was emerging from the question of accessibility.
The Question of Accessibility: the Emergence and Development of a Universalist Treatment

The question of accessibility arose in the 1970s in English-speaking countries, and in the USA in particular. At this time, more and more disabled persons wanted to integrate into mainstream life, but they were encountering obstacles – architecture, lack of services. Ed Roberts, a disabled student at Berkeley, founded the first Centre for Independent Living, the objective of which was to provide disabled persons with all the services they needed in order to live independently in society. Then came the creation of the Independent Living Movement, which highlighted accessibility as the main cause of the exclusion and dependency of disabled persons; in order to achieve independence for disabled persons, society had to be made accessible. The question of accessibility was then taken up by international organizations, and, in France, by certain actors confronting similar problems. The Association for Housing the Seriously Disabled (Association pour le Logement des Grands Infirmes, ALGI, founded in 1959), whose initial purpose was to rehouse disabled persons, one by one, as they left rehabilitation or treatment centres, gradually came to campaign for the construction industry to adopt accessibility standards (Sanchez 1997). Yet this principle of accessibility led to a change in the approach to disability, a change that became visible in the terminology used. To highlight the problem of accessibility was to demonstrate that the disability – or rather the difficulties of a person with an impairment – was not due solely to the impairment itself (to individual characteristics), but also to the environment. In other words, there was a shift from individual to environmental factors, and hence a related shift in the action being targeted: the target was no longer the adaptation of individuals, their normalization, but the adaptation of society, in such a way that society integrates differences. The standard of reference had changed: it was no longer the able-bodied person, the "average person", who served as a reference, but "humanity", representing all differences. This change in reference was accompanied by a universalization (as opposed to the notion of category). Disability was no longer a constant that defined certain individuals as members of a given category, but a variable which depended on the environment. Thus the person in the wheelchair, the elderly person, the mother pushing her pram, the person carrying a pile of packages – all are confronted with similar difficulties: stairs, a door that is too hard to push open, a corridor that is too narrow, etc. With this approach, "we are all potentially disabled" and it is society which has to be radically changed. In the United States, this approach was pushed furthest by the Independent Living Movement (Zola 1989), a movement which fights for generalized accessibility in all areas of life. People then began to refer to universal design. In France, the principle of accessibility was set out in the law of 1975 which, as we have already said, provided for an essentially category-based treatment. Furthermore, the law made the implementation of this principle the responsibility of decrees and regulations, some of which were never passed. It set down no deadline and no penalty in case of non-application.
In later years this was to lead to a shift towards a "category-based application" of accessibility, which involved making premises and transport accessible solely in the case of the effective presence of a disabled user and for his/her specific disability. This category-based interpretation of accessibility goes against its universal application, which would mean making the environment accessible for all disabilities, whether or not any disabled person is present. During the 1980s, under pressure from associations and from international organizations (UN, EEC) which adopted the principles of non-discrimination and of access by all people to all things, the principle of accessibility, under its universal interpretation, gradually prevailed (the law of 1991 on the accessibility of buildings open to the public). During the 1990s, in France, this change was accompanied by the emergence of a new notion, that of "situation of disability", to which actors, especially associations, often referred when making their demands. This notion, by defining disability in an interactive manner, can have two meanings: the broader sense falls within the tradition of the universalist treatment; the other is contextual and leads to a more personalized treatment.

"Persons in a Situation of Disability": a Shift Towards Personalized Treatment

The notion of "situation of disability" no longer defines disability as a difference from a social norm, nor as the consequence of the inaccessibility of the environment, but as an interaction: disability is the result of an interaction between individual characteristics and an environment (Ravaud & Fougeyrollas 2005). This notion thus refers to an intermediate model between models qualified as individual and those qualified as social (Oliver 1996). In the 1980s it was used in the scientific domain to stress the role of the environment (Minaire 1983). Then it was gradually taken on board by actors (associations in particular) in the field of disability. They used the notion in a dual sense, which sometimes led to a certain ambiguity regarding the approach that they were defending. At first they used it in support of the universalist approach: by defining disability in an interactive manner, they stressed the need to make society accessible by removing all barriers preventing the social participation of people with impairments. Then, a while later, there was a shift of accent; they stressed not only the idea of interaction, but also the notion of "situation", defined as "context" or "circumstances". "Situation" no longer referred to the general idea of the environment, but to a person's own specific environment. This ambiguity exists in the official texts, as can be seen from a report from the Economic and Social Committee, "Situations of disability and living circumstances" (Assante 2000): "A brief reminder of the terminology will allow us to better understand the framework for this thinking. What do we mean by situation of disability? A situation of disability is always and solely the product of two factors: on the one hand a person referred to as 'disabled' due to his or her impairment, be it physical, sensorial or mental, and on the other hand, environmental, social, cultural or even regulatory barriers which create an obstacle that the person cannot cross due to his or her particularity or particularities.
The existence of such situations prevents disabled persons from carrying on the everyday activities normally available to every citizen. The emergence of such situations creates a de facto discrimination ... It is clear that the removal of such obstacles will not remove a person's impairment, but it will allow him or her, if certain conditions are met, to move freely throughout the city, to go to school, to his or her place of work, to his or her activities, etc." (Assante 2000:II-5). During the various debates surrounding the creation of the new law (both in parliament and in the negotiations between associations and administrative and political actors when drafting the texts), the notion of "situation of disability" had several functions. On one hand, it was the instrument for personalized assistance and personalized ways of treating disability. "Situation of disability" refers to all the characteristics which make a situation special and singular, proper to the person in question, different from other people's, and which need to be analysed on a case-by-case basis in order to find a suitable solution. On the other hand, the notion of "situation of disability" is the tool allowing one to detach the disability from the person and to avoid any identification between the person and his/her disability – in other words, to avoid all stigmatization. The disability is no longer a state; it is interpreted as resulting from a set of environmental or individual characteristics which defines a situation, and not a person. During the revision process for the law of 1975, disabled persons' associations stressed this point: in the procedures for allocating rights, it is a case of assessing a particular "situation of disability", never a "handicapped person". So whilst this notion maintains the idea of the role of the environment in the production of disability, it also reintroduces the role of individual characteristics, though in a completely different manner from that of the traditional category-based approach. The latter, based on a medical model, led to the objectification of disability through individual characteristics (impairment and incapacity), to attaching the disability to the person, and thus to positioning the person within a status. Belonging to the "handicapped person" category was the condition sine qua non for rights to be granted: a given impairment corresponded to a given level of incapacity and gave entitlement to the same rights for everyone. The interactive approach, covered by the notion of "situation of disability", recognizes the role of certain individual dimensions – including impairment and incapacities – in the emergence of a situation of disability, and the need to take them into account; but these are not turned into objective and essential characteristics of the individual; they are seen in a relative manner, within a singular context. The purpose of the assessment is to identify all of the factors which play a role and which interact, in order to determine the action, targeting either the person or the environment or both, and to reduce the situation of disability without the individual being either defined or categorized; it is thus the situation which is defined, and not the person. In this case, the implemented action respects the principle of non-discrimination 4. The customized approach to disability that France is trying to set up with its new mechanism already exists in certain European countries.
The mechanisms grouped together under the generic terms of "direct payments" and "individualized funding" originate in the same logic (Askheim 2005, Carmichael & Brown 2002, Waterplas & Samoy 2001). The UK, Sweden, Norway, Belgium and Holland have introduced customized allocations (Waterplas & Samoy 2005). Their basic principle is to grant disabled persons money with which to pay for the aids needed for an independent life and to compensate for their needs, thus strengthening their control over their lives. Later on, we will see that the creation of a right to compensation is in some ways similar to these measures.

The Law of 11 February 2005 and the Debates that Surround it: a Crossroads at which the Different Approaches Meet

In the debates 5 that surrounded the development of the new law and its decrees of application, and which still surround the implementation of the new system, category-based, universalist and personalized treatments coexist. Each is defended by different actors, but they are sometimes all defended by the same actor, who moves from one approach to another. Indeed, analysis shows that for some actors, the development of a disability policy and the reduction of situations of disability involve combining the universalist and personalized approaches; opting for one to the detriment of the other would solve only a part of the difficulties encountered by people in situations of disability 6. Yet by putting the two approaches into a hierarchy, the actors give a different meaning to the policy being implemented. The National Consultative Committee for Disabled Persons, acting as a relay for disabled persons' associations, made a priority of the universalist approach, with personalized solutions being used simply as a complement when generalized accessibility does not suffice to ensure a person's full participation. The government favoured the personalized approach, which was easier to define in terms of cost. We will give two examples of these conflicts, the first being the debate on the definition of disability, the second being the debate surrounding the newly created right to compensation. From a methodological point of view, our analysis in this section is based upon two types of data. Firstly, we observed a series of negotiation meetings in two committees which have always had, and still have, real influence. The first – the Committee for Entente between Associations Representing Disabled Persons and Parents of Disabled Children – is an informal group; it brings together the main associations in order to develop a common strategy to put before the government and government representatives, for whom it serves as a preferential mediator. The second – the National Consultative Committee for Disabled Persons (CNCPH) – is a national committee with a direct link to the Minister responsible for disability. It has a dual mission: on the one hand to ensure that disabled persons participate in developing and implementing the policies that concern them, and on the other hand to assess the situation of these people and to come up with recommendations for improving it.
It is therefore an official committee which, in its plenary form, brings together all of the actors concerned (a French MP, a senator, people representing territorial collectivities, associations for disabled persons and their families, associations and organizations working in the field of disability, organizations for social protection, organizations carrying out research in the field of disability, unions and professional organizations for employers; representatives of ministries were also present, but were not allowed to vote). The CNCPH has a consultative voice for any draft law or decree relating to disability. As far as the 2005 law and its decrees of application are concerned, the opinion of the CNCPH must be obtained before any legislation can be definitively passed. Secondly, we gathered a corpus of documents produced by the entities involved in the process (minutes from CNCPH meetings, intermediate versions of texts proposed by ministries, legislative texts, reports on disability policies made at the government's request, exhaustive reports on parliamentary debates taking place in the French Senate and National Assembly – the two representative houses making up the French parliament). The two examples chosen for this part of our article – the question of the definition of disability and the question of the right to compensation – are based on a summary of the debates which took place in the different committees.

Disability: Between Category and Situation?

The 1975 law did not define disability in a conceptual manner, but solely in a pragmatic way, because it introduced local administrative commissions responsible for assessing people's levels of incapacity and, where appropriate, for granting the status of "handicapped person" and any related rights. This generated (and this is one of the major problems that the actors – the government, disabled persons' associations and professionals – are trying to resolve) a vagueness and considerable heterogeneity in the attribution of the status of "handicapped person", depending on the commission. On the one hand, assessments of incapacity levels varied locally from commission to commission (commissions granted different levels of disability for the same medical and functional assessment); on the other hand, the emergence of the phenomenon of social exclusion led, in practice, to an expansion of the notion of handicap that was no longer based on the existence of impairments. In certain counties, commissions sometimes granted the status of "handicapped person" to people who were in poor health, unemployed and thus without resources, but whose incapacities were unclear. Faced with the blurring of this boundary – a phenomenon referred to in France as "social handicap" – the government tried to unify the treatment of disabled persons with that of persons in situations of exclusion, to the detriment of the former. In February 2004, the government raised the possibility of doing away with the allocation for handicapped adults (AAH) and of replacing it with the minimum income granted to the socially excluded, the amount of which is lower than the AAH. During the revision process, the question rapidly arose of adding a definition of disability to the law. But the stakes were very different for the various actors. The government wanted to clearly define the law's area of application, in order to prevent its scope from drifting.
Hence its preference for a category-based definition and its rejection of the expression "situation of disability", in which it saw the risk of an extension of the target population and of a grouping together of populations which had previously been kept separate (for example, "elderly persons" and "handicapped persons"). For disabled persons' associations, the stakes of this definition were threefold. First of all, they felt it was vital to get away from a category-based approach, seen as stigmatizing and excluding; this supposed a definition of disability, and an organization of practices, which would not turn disability into an objective characteristic of the person. They nevertheless wanted to maintain a specificity compared to other populations, in order to preserve certain advantages already gained (for example, the allocation for handicapped adults, the level of which was higher than that of other social aids); this specificity required disability to be defined in relation to the existence of impairments. Furthermore, within the field of disability thus delimited, disabled persons' associations wanted the different systems of indemnification, which vary with the origin of the impairment, to be evened out 7. Finally, in order to put the accent on a necessary universalist policy of generalized accessibility, the definition had to explicitly recognize the role played by the social and physical environments in producing the disability. Within the framework of the negotiations on the texts, during the plenary sessions of the CNCPH, these three options were prioritized differently by the various associations of disabled persons. Some of them (the main association taking this stance being the Association des Paralysés de France, one of the biggest associations for the disabled in the country, defending the cause of those with motor impairments) saw the universalist approach as the priority and defended the term "situation of disability", whilst others (for example, the FNATH, Fédération Nationale des Accidentés du Travail, which has since changed its name to Fédération Nationale des Accidentés de la Vie) preferred the other two options and a "category-based" terminology, whilst at the same time defending the notion of an interactive definition. Despite its insistence, the CNCPH, supporting the position held by the majority of the associations, was unable to win the day with its demand for the inclusion of a definition of disability as interaction. Yet the definition which was chosen is not a purely category-based definition, but the result of a compromise; it does not define a category by the objective characteristics of its individual members: "Art. L. 114. A disability, under this law, is constituted by any activity limitation or any restriction to participation in life in society to which a person is subjected in his or her environment due to a substantial, durable or definitive alteration to one or more physical, sensorial, mental, cognitive or psychological functions, or to a polydisability or to a disabling problem of health" (Law of 11 February 2005). Disability designates an activity limitation or a restriction to participation in life in society, and not the deficiencies of the individual. By firmly linking disability to a substantial and durable deficiency, however, the law clearly defines its field of application and, above all, distinguishes it from that of exclusion or social disability.
In so doing, it rejects the notion of interaction, the environment being integrated as a mere context. This reduction of the role of the environment to that of a simple context compromises the ambition of a universalist policy of generalized accessibility and brings personalized treatment to the fore – an objective which, as far as disabled persons' associations are concerned, should only be complementary. The assertion of the link between disability and impairment is made to the detriment of the recognition of the link between disability and environmental barriers, which is a disappointment to those who were hoping for a break away from the conceptions of disability that had prevailed until then. On the other hand, it satisfies the desire of the associations of disabled persons to preserve what they had already gained and to maintain their specificity in relation to the field of exclusion, by diverging from a policy of insertion through work. By placing the accent on the notions of participation and citizenship, the associations give a broader definition of insertion, which can take place through work and school, but also through leisure activities, the exercise of one's political rights, etc. Hence their insistence, throughout the entire revision process, on the notion of "situation of disability", which from the outset places a person in society. Henceforth the objective is not to integrate or insert people in a situation of disability into society, for they are included from the outset, their difficulties being due to social interaction; as the title of the law indicates, the objective is "equal opportunities and rights, the citizenship and social participation of disabled persons"; society must ensure that people in situations of disability have the wherewithal to take an effective part in society, and that they have access to the same rights as everyone else 8.

The Right to Compensation: Should Measures be Personalized?

One of the main principles of the new text is the distinction (previously blurred) between existence income (either from work or from national solidarity through social minima) and compensation for disability. The new law sets out a "right to compensation" in accordance with which people have the right to compensation for the consequences of their disabilities, a right which takes the concrete form of the allocation of compensation 9. In the debates (parliamentary and CNCPH) surrounding the drafting and determination of this right, there is a tension between category-based, universalist and personalized treatments. In the text of the preliminary draft of the law presented by the government (December 2003), three conditions were set down for the right to compensation: age (excluding children under 20 and elderly people over 60), a minimum level of incapacity of 80%, and resource conditions. The combination of these three conditions for the allocation of compensation kept the latter within the framework of a category-based approach, the right to compensation being dependent on people belonging to a category whose boundaries were age, level of incapacity and resources. The CNCPH and disabled persons' associations immediately reacted and demanded the removal of the three conditions, arguing that the right to compensation should be a universal right granted to everyone "in a situation of disability", whatever their age, resources or level of incapacity.
In the final text, passed in February 2005, the three conditions that had originally been imposed were removed (albeit with transitory periods, especially with regard to the removal of age barriers), as demanded by the CNCPH 10. Furthermore, the allocation of compensation was not conceived as a set allocation granted in accordance with individual characteristics (i.e. the objective characteristics of an individual as they result from a comparison of the individual with a specific grid), but as something to be adapted to a given person and his/her personal situation. The objective was no longer to compensate for disabilities, but to meet needs – needs which are specific to each individual because they depend on his/her way of living and life project. These needs are assessed by a multidisciplinary team which, on the basis of an overall examination of "the situation", draws up a personalized compensation plan. But whilst the law set out the principles to be followed, it did not define the concrete forms of their application, which were to be set out in decrees. During the drafting of these decrees and the creation of the new system, lively new debates took place, particularly regarding the tools used to make this assessment. These debates once again brought disabled persons' associations into conflict with the government, within the framework of the work carried out by the CNCPH. A new tension has arisen between the need to develop standardized criteria to ensure equal treatment throughout the country, the desire for personalized treatment, and the refusal to categorize. Disabled persons' associations are once again stressing the idea of "situation of disability", which allows them to distinguish between the category-based treatment that has existed so far and their demand for personalized treatment. This notion indicates that what is assessed is not the person, or his or her individual characteristics, but the person's situation of disability – the specific interaction that the disability creates, preventing the person from living in society and achieving his or her life project.

Conclusion

Analysis of the revision process for the 1975 law and the debates that surrounded it shows the existence of a current tension between three approaches to disability that emerged at different moments in history. The law and the policy which were implemented give concrete form to these tensions. The category-based approach is defended by the government and by certain disabled persons' associations: for the former, it allows a defined population to be targeted, thus making it possible to control public expenditure; for the latter, in a period of economic uncertainty, it allows people to keep rights which have already been acquired. The partial maintenance of this approach, in particular through the imposition of eligibility criteria which create entitlement to compensation, can be interpreted as due to the weight of history: as we pointed out in the first part of the article, in France this approach was the basis for the system of social protection. Above and beyond this maintenance, the analysis revealed a conflict between two more recent approaches to disability: the universalist approach (in existence since the 1970s) and the interactive and personalized approach (which took form in the 1990s and 2000s). The integration of these two approaches into the law and the policy marks an evolution in the representations and treatment of people with disabilities.
These evolutions follow those already seen in English-speaking countries and elsewhere (Albrecht 2000, Ravaud 2001, Scotch 1988). The notion of "handicap", defined as a difference from the social norm due to the existence of an impairment, linked the disability to the individual and objectified the disability by leading to the definition of a category of persons. The other two approaches, universalist and interactive, detach the disability from the individual, either in order to link it to the environment or to place it within an interaction. They thereby open up the possibility of a non-category policy, one which does not presuppose that people be labelled in order to define an action or an intervention – in a logic of non-discrimination. They thus open up the possibility of a change in the reference and in the norm used to assess and define "disability" (Winance 2007), inasmuch as the norm is no longer predefined and determined, but is simply the stake of negotiations. As we briefly mentioned, the question of the centrality of work as an unavoidable road to insertion is under discussion (Ville & Winance 2006). What the notion of "citizenship" covers is not defined, and must be defined, virtually on a case-by-case basis, by the individuals concerned, the people around them and the professionals who make the assessment, in what the law calls a "life project"; this will vary considerably from person to person. Finally, analysis of the debate and the policy shows a specificity of the policy in France: it results from a desire to hold the last two approaches together. The strength of this desire depends on the actor. At the same time – and this will be the object of more detailed research – the way in which each actor links and prioritizes these two approaches leads to different representations of the individual, of the society in which he/she lives, and of the relationship between the individual and society.

Notes

1 It should be noted that the development of the "general hospital" did not homogenize the field of assistance. In the 19th century in France there was a great diversity of establishments, some old, some modern: hospitals accepting only people who were ill, hospices for old people, for the crippled, the incurable and orphans, and general hospitals with mixed populations. There were also charity offices (Bauduret & Jaeger 2002).

2 Rehabilitation practices have often been interpreted in the sense of a reduction of difference and an alignment with the norm of able-bodiedness. Without denying this aspect, we wish to underline two arguments. Firstly, these practices cannot be understood independently of the contexts in which they occurred: for the persons concerned, in the middle of the century, these practices represented a real opportunity, that of being able to leave the hospice or asylum. We have also shown through ethnography that current rehabilitation practices allow actors to work on, and transform, the norm of able-bodiedness (Ville & Winance 2006, Winance 2006).

3 "... level or a level bracket for the incapacity caused by each impairment in terms of an assessment of how said impairment affects day-to-day and social life" (Sanchez 2005:100).

4 Of course, one might argue that this process also leads to the creation of a "persons in a disability situation" category.
But unlike the "handicapped person" category, which, due to the assessment procedure, is very homogeneous, the "persons in a disability situation" category is likely to show high heterogeneity and high variability over time. Furthermore, the fact of belonging to this category (the recognition that one belongs to this group of individuals) does not mean one will be granted rights. The category can only be obtained a posteriori, through a census and grouping together of all the persons having obtained certain rights. Category-based treatment, by contrast, is based on a priori categorization.

5 Such debates have been numerous and varied, involving different actors. We use the term here in a generic manner, to designate all such debates: debates which took place in the Conseil National Consultatif des Personnes Handicapées (see below), parliamentary debates, one-off debates such as when the government organized themed meetings, etc. Later in the article we give details of the venue for each debate mentioned. Finally, it should be noted that this debate received very little media coverage, particularly in the daily press, which only described, essentially in an informative manner, the key moments of the revision process (its passage through the French National Assembly, for example). We did not, however, carry out a systematic examination of the press, which would certainly have allowed a more detailed analysis.

6 During the numerous debates surrounding the evolution of the social model in Great Britain, some researchers (notably women) raised this question: the radicalization of the social model leads to an unawareness of the problems that certain persons face, by denying and suppressing the importance of the individual experience of disability (Crow 1996, French 1993).

7 We must remember that the law of 1975 maintained the different systems of indemnification whilst at the same time creating a new one, relating to assistance. This question of equalization is currently being debated within the framework of a project to standardize the levels of the disability pension (a regime governed by the French health insurance system) and the AAH ("assistance").

8 We believe that the definition of disability, as set out in the 2005 law, perfectly fits the analysis of laws proposed by P. Lascoumes. Basing himself on an analysis of laws in the field of the environment, he demonstrates in particular that "Every legal system is simply an adjustment, to varying degrees of stability, of diverging and sometimes contradictory social interests, under the arbitration of public authorities" (Lascoumes 1995:399).

9 The allocation of compensation finances five types of expenditure: 1) human aid; 2) technical aid; 3) adaptation of the place of living and/or vehicle, along with any additional transport-related costs; 4) specific costs (e.g. the purchase of nutritional products for a special diet) or exceptional, one-off disability-related costs not covered by other systems; 5) the allocation and upkeep of animal assistance. It should be noted that, so far, only the decree relating to compensation for life at home has come into effect. A decree relating to compensation when living in an institution should be coming into effect soon.

10 Eligibility criteria are nevertheless still in force, making it possible to decide who is or is not entitled to the allocation of compensation. But these criteria no longer lead to an objectivation of individual characteristics.
In order to be entitled to the allocation of compensation, one must show absolute difficulty in carrying out one activity, or serious difficulty in carrying out two activities. The activities taken into consideration are those relating to mobility, looking after oneself, communication, the general ability to situate oneself within one's environment and to protect one's interests (i.e. to situate oneself in time and space, to ensure one's own safety), and one's relationships with others. A commission (the commission of rights and autonomy) then grants or refuses an allocation of compensation on the basis of these criteria.
2019-05-04T13:06:59.633Z
2007-11-28T00:00:00.000
{ "year": 2007, "sha1": "2c8067ae25673456ffe77188d8cb10869558c42e", "oa_license": "CCBY", "oa_url": "https://storage.googleapis.com/jnl-su-j-sjdr-files/journals/1/articles/262/submission/proof/262-1-866-1-10-20171115.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8972bcc0243e682d8492b21870b46d12fa28798a", "s2fieldsofstudy": [ "Sociology", "Political Science", "History" ], "extfieldsofstudy": [ "Sociology" ] }
6232311
pes2o/s2orc
v3-fos-license
Altered Channel Conductance States and Gating of GABAA Receptors by a Pore Mutation Linked to Dravet Syndrome

Abstract

We identified a de novo missense mutation, P302L, in the γ-aminobutyric acid type A (GABAA) receptor γ2 subunit gene GABRG2 in a patient with Dravet syndrome using targeted next-generation sequencing. The mutation was in the cytoplasmic portion of the transmembrane segment M2 of the γ2 subunit that faces the pore lumen. GABAA receptor α1 and β3 subunits were coexpressed with wild-type (wt) γ2L or mutant γ2L(P302L) subunits in HEK 293T cells and cultured mouse cortical neurons. We measured currents using whole-cell and single-channel patch clamp techniques, surface and total expression levels using surface biotinylation and Western blotting, and potential structural perturbations in mutant GABAA receptors using structural modeling. The γ2(P302L) subunit mutation produced an ∼90% reduction of whole-cell current by increasing macroscopic desensitization and reducing GABA potency, which resulted in a profound reduction of GABAA receptor-mediated miniature IPSCs (mIPSCs). The conductance of the receptor channel was reduced to 24% of control conductance by shifting the relative contribution of the conductance states from high- to low-conductance levels with only slight changes in receptor surface expression. Structural modeling of the GABAA receptor in the closed, open, and desensitized states showed that the mutation was positioned to slow activation, enhance desensitization, and shift channels to a low-conductance state by reshaping the hour-glass-like pore cavity during transitions between closed, open, and desensitized states. Our study revealed a novel γ2 subunit missense mutation (P302L) that has a novel pathogenic mechanism to cause defects in the conductance and gating of GABAA receptors, which results in hyperexcitability and contributes to the pathogenesis of the genetic epilepsy Dravet syndrome.

Introduction

Dravet syndrome (also known as severe myoclonic epilepsy in infancy, OMIM: 607208) is an epileptic encephalopathy of childhood that is characterized by multiple types of seizures that are often prolonged and particularly fever-sensitive. Onset is in the first year of life. De novo heterozygous missense mutations in SCN1A, which encodes the pore-forming α1 subunit of the sodium channel, are found in about 95% of patients with Dravet syndrome (Vadlamudi et al., 2010). Heterozygous nonsense mutations in the γ-aminobutyric acid type A (GABAA) receptor γ2 subunit gene, GABRG2, and missense mutations in the GABAA receptor α1 subunit gene, GABRA1, are also associated with Dravet syndrome (Harkin et al., 2002; Carvill et al., 2014). The GABRG2 mutation, γ2(Q40X), caused truncation of the predicted mature γ2 subunit at the first amino acid, resulting in no translation of mature subunits and in activation of nonsense-mediated mRNA decay (NMD), which reduced production of the truncated signal peptide (Huang et al., 2012; Ishii et al., 2014). In contrast, a nonsense mutation in the last exon of GABRG2, γ2(Q390X), which was located in the intracellular loop of the γ2 subunit between transmembrane segments M3 and M4, produced mRNA that was stable and not degraded by nonsense-mediated mRNA decay, and truncated subunits that were degraded slowly and incompletely by endoplasmic reticulum-associated degradation (Kang et al., 2009).
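As an aside on the conductance comparison in the abstract: single-channel conductance levels are conventionally derived from unitary current amplitudes with the chord-conductance relation g = i / (V − Erev). A minimal sketch of that arithmetic, using an invented −2.0 pA opening at −70 mV with Erev near 0 mV (these numbers are illustrative, not values from this study):

def chord_conductance_pS(i_pA, v_mV, e_rev_mV=0.0):
    """Chord conductance in picosiemens from a unitary current amplitude."""
    return 1000.0 * i_pA / (v_mV - e_rev_mV)  # pA/mV = nS; x1000 -> pS

print(chord_conductance_pS(-2.0, -70.0))  # ~28.6 pS for this invented example

A mutation that shifts openings from high- to low-conductance levels lowers the mean of this estimate across events, which is how an average conductance can drop to a fraction of control without any change in driving force.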
The heterozygous Gabrg2+/Q390X knock-in mouse had a severe epilepsy phenotype that included spontaneous generalized tonic-clonic seizures and sudden death after seizures, as well as accumulation and aggregation of mutant γ2(Q390X) subunits that may contribute to the chronic, progressively declining clinical course of the epileptic encephalopathy (Kang et al., 2015). GABAA receptor γ2 subunits, which belong to the Cys-loop ion channel superfamily, are formed by a ~200-residue N-terminal extracellular domain and four transmembrane segments, M1 to M4, and are homologous to the glutamate-gated chloride channel (GluCL) (Hibbs and Gouaux, 2011) and the glycine receptor (GlyR) α1 subunit (Du et al., 2015). Pentameric assembly of GABAA receptor α1, β3, and γ2 subunits forms the most abundant GABAA receptor subtype in the brain (Farrant and Nusser, 2005), which mediates the majority of fast inhibitory neurotransmission and controls network excitability in the brain. To date, only 8 missense mutations in the γ2 subunit are associated with mild forms of epilepsy syndromes, and they are located mainly in the receptor N-terminal extracellular domain and in the outermost region of the M2 transmembrane segment (Baulac et al., 2001; Wallace et al., 2001; Audenaert et al., 2006; Shi et al., 2010; Lachance-Touchette et al., 2011; Carvill et al., 2013; Reinthaler et al., 2015).

Using targeted next-generation sequencing, we identified a novel de novo GABRG2 missense mutation, P302L, in a patient with Dravet syndrome. Multiple sequence alignments among GABRG genes showed that this mutated proline was located at the cytoplasmic entrance of the transmembrane segment M2, which contributes to the formation of the pore region of the channel, and was completely conserved in the three γ subunits in humans and other species (Fig. 1A). In vitro analysis showed that the γ2(P302L) subunit mutation caused slight changes in surface expression but mainly disrupted GABAA receptor gating by reducing GABA-evoked single-channel current amplitudes, slowing activation, enhancing desensitization, and decreasing single-channel open probability, all of which caused a reduction of GABAA receptor-mediated mIPSCs. We propose a structural mechanism that leads to dysfunction of this channel. Through modeling of the channel structure in the closed, open, and desensitized states that govern the gating of the receptor, we found that the mutation reshaped the pore of the GABAA receptor by altering the features of both the outer and inner vestibules of the hour-glass-like pore cavity, thus altering the conduction pathway. The findings presented here shed light on the molecular mechanisms of the pathogenesis of Dravet syndrome.

Materials and Methods

Patient, targeted next-generation, and Sanger sequencing
Genomic DNA was extracted from peripheral leukocytes from the patient (case 1007) and parents for segregation analysis. Written informed consent was obtained from all individual subjects of this study. The study was approved by the Peking University First Hospital Medical Ethics Committee. Custom-designed panels capturing the coding exons of GABRG2 were synthesized using the Agilent SureSelect Target Enrichment technique. Targeted next-generation sequencing was subsequently performed on an Illumina GAIIx platform (Illumina) using paired-end sequencing of 110 bp to screen for mutations as described previously (Kong et al., 2015). We used Sanger sequencing to confirm the de novo origin of the mutation.
cDNA constructs and expression of recombinant GABAA receptors
cDNAs encoding human α1, β3, and γ2L GABAA receptor subunit subtypes (NM000806, NM021912, and NM198904, respectively) were subcloned into the plasmid expression vector pcDNA3.1 (Thermo Fisher Scientific) using standard techniques. The γ2L(P302L) subunit mutation was generated by site-directed mutagenesis using the QuikChange Site-Directed Mutagenesis kit (Agilent Technologies) and verified by sequencing. Human embryonic kidney cells (HEK293T) were grown in 100-mm tissue culture dishes (Corning) in DMEM supplemented with 10% fetal bovine serum at 37°C in 5% CO2/95% air and passaged every 3-4 d. For surface biotinylation experiments, 4 × 10⁵ cells were plated onto 60-mm diameter culture dishes. Twenty-four hours after plating, cells were transfected using polyethyleneimine (PEI, MW 40,000; Polysciences). For mock or single-subunit expression, empty pcDNA3.1 vector was added to bring the final cDNA transfection amount to 1.8 μg. For electrophysiology experiments, cells were plated onto 12-mm cover glass chips at 4 × 10⁴ cells in 35-mm diameter culture dishes and transfected after 24 h with 0.3 μg of cDNA for each of the α1, β3, and γ2L subunits and 0.05 μg of EGFP (to identify transfected cells) using X-tremeGENE9 DNA transfection reagent (Roche Diagnostics; 1.5 μl/μg cDNA). Recordings were obtained 48 h after transfection.

Figure 1. The de novo GABRG2(P302L) missense mutation was identified in a patient with Dravet syndrome. A, Sequence alignments of the transmembrane segment M2 of GABAA receptor γ1-3 subunits with the D. rerio GlyR α1 subunit, highlighting the evolutionary conservation of the proline (shown in red) at the -2' position. The residues in gray were conserved across all …

Electrophysiology
Whole-cell recordings from lifted HEK293T cells and cell-attached single-channel recordings were obtained as previously described (Hernandez et al., 2011; Janve et al., 2016). For whole-cell recordings, the internal solution consisted of 153 mM KCl, 10 mM HEPES, 5 mM EGTA, 2 mM Mg-ATP, and 1 mM MgCl2·6H2O; pH 7.3, ~300 mOsm. The external solution was composed of 142 mM NaCl, 8 mM KCl, 10 mM D(+)-glucose, 10 mM HEPES, 6 mM MgCl2·6H2O, and 1 mM CaCl2; pH 7.4, ~326 mOsm. This combination of external (1Na-external) and internal solutions produced a chloride equilibrium reversal potential (Vrev) of ~0 mV, and cells were voltage clamped at -20 mV. For the ionic selectivity current-voltage (I/V) experiments, the 142 mM NaCl of the external recording medium was replaced with 71 mM NaCl (a 50% reduction of Na concentration), and 142 mM sucrose was added to maintain isoosmolar conditions. This external solution (0.5Na-external) produced a chloride Vrev of ~13 mV. Liquid junction potentials were calculated using Clampex's Junction Potential Calculator and corrected by the pipette offset circuitry of the amplifier. The I/V experiments were performed by holding the cell membrane potential at -80, -60, -40, -20, 0, +20, +40, and +60 mV, and 4-s GABAA receptor currents evoked by 1 mM GABA were recorded at each membrane potential. External solutions and drugs were gravity fed to a four-barrel square glass pipette connected to an SF-77B Perfusion Fast-Step system (Warner Instruments).
The solution exchange time across the open electrode tip was ~200-400 μs, and the exchange around lifted cells (~8-12 pF) occurred within 800 μs, which was sufficiently fast for these experiments (Bianchi and Macdonald, 2002) and guaranteed rapid solution exchanges and accurate measurement of the kinetic properties of the receptor currents. Although series resistance errors were not compensated in this study, we ruled out the possibility of underestimating the "true" peak amplitude and desensitization kinetics by using low-resistance electrodes (~1 MΩ) and cells that showed current amplitudes of ~5 nA (~8-12 pF) at a -20 mV holding potential, as previously reported (Bianchi and Macdonald, 2002).

Single-channel currents were recorded in an external solution containing 140 mM NaCl, 5 mM KCl, 1 mM MgCl2, 2 mM CaCl2, 10 mM glucose, and 10 mM HEPES; pH 7.4. During recording, 1 mM GABA was added to the internal solution, which consisted of 120 mM NaCl, 5 mM KCl, 10 mM MgCl2, 0.1 mM CaCl2, 10 mM glucose, and 10 mM HEPES; pH 7.4. The intramicropipette potential was +80 mV. αβ GABAA receptor currents were blocked by adding 100 μM zinc. Single-channel conductance in the presence of 1 mM GABA and 100 μM zinc was determined by holding the transmembrane potential at +80 mV. All experiments were performed at room temperature (22-23°C).

Macroscopic activation and deactivation current time constants (τ) were measured by application of 1 mM GABA for 10 ms, while desensitization and peak current amplitude were measured by application of 1 mM GABA for 4 s. Activation, desensitization, and deactivation time courses were fitted using the Levenberg-Marquardt least squares method with up to four-component exponential functions of the form Σaₙexp(-t/τₙ) + C, where n is the number of exponential components, t is time, aₙ is the relative amplitude, τₙ is the time constant, and C is the residual current at the end of the GABA application. Additional components were accepted only if they significantly improved the fit, as determined by an F test on the sum of squared residuals. The time course of deactivation was summarized as a weighted time constant, defined by the expression Σaₙτₙ/Σaₙ. The extent of desensitization was measured as (fitted peak current - fitted steady-state current)/(fitted peak current). GABA peak current-voltage plots were fitted to a first-order polynomial function, and Vrev values were read directly from the fitted current-voltage plots for each cell. Peak current responses obtained with 0.5Na solutions and recorded at holding potentials from -80 to -40 mV were excluded from the fitting analysis, as they showed outward rectification. GABAA receptor current concentration-response curves were fitted using GraphPad Prism version 6.07 for Windows (GraphPad Software). We used a nonlinear regression Hill equation of the form E = Ebasal + (Emax - Ebasal)/(1 + 10^((LogEC50 - X)·nH)), where E is the fractional response of the GABAA receptor-gated currents, Emax is the maximal response, X is log[GABA], EC50 is the [GABA] at which the response is 50% of the maximal response, and nH is the Hill slope. The nH values for wt and γ2(P302L) GABAA receptors were 1.17 ± 0.25 and 2.60 ± 0.88, respectively. Inhibition of currents evoked by 1 mM GABA by 10 μM zinc was measured by preapplication of zinc for 10 s followed by coapplication with GABA for 4 s. GABA and zinc were obtained from Sigma.
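For concreteness, the multi-exponential fitting recipe above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' analysis code: the synthetic trace, component values, and initial guesses are invented, and only the fitted weighted time constant Σaₙτₙ/Σaₙ is reported.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_exp(t, *params):
    """Sum of exponentials plus residual current C.

    params = (a1, tau1, a2, tau2, ..., C)
    """
    C = params[-1]
    y = np.full_like(t, C, dtype=float)
    for a, tau in zip(params[:-1:2], params[1:-1:2]):
        y += a * np.exp(-t / tau)
    return y

def weighted_tau(amps, taus):
    amps, taus = np.asarray(amps), np.asarray(taus)
    return np.sum(amps * taus) / np.sum(amps)

# Synthetic desensitizing current with two components (hypothetical values).
t = np.linspace(0.0, 4.0, 4000)                    # seconds
i = multi_exp(t, -3.0, 0.05, -1.5, 1.2, -0.5)      # nA, decaying toward C
i += np.random.normal(0.0, 0.02, t.size)           # recording noise

p0 = (-2.0, 0.1, -1.0, 1.0, 0.0)                   # initial guesses
popt, _ = curve_fit(multi_exp, t, i, p0=p0)
a, tau = popt[:-1:2], popt[1:-1:2]
print("weighted tau =", weighted_tau(a, tau), "s")
```

In practice one would refit with an additional component and accept it only if the F test on the summed squared residuals improves, as described above.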
Single-channel open and closed events were analyzed using the 50% threshold detection method and visually inspected before accepting the events. Single-channel openings occurred as bursts of one or more openings or clusters of bursts. Bursts were defined as one or more consecutive openings that were separated by closed times shorter than a specified critical duration (tcrit) prior to and following the openings (Twyman et al., 1990). A tcrit duration of 5 ms was used in the current study. Clusters were defined as a series of bursts preceded and followed by closed intervals longer than a specific critical duration (tcluster). A tcluster of 10 ms was used in this study. Open and closed time histograms as well as amplitude histograms were generated using TACFit 4.2 (Bruxton). Single-channel amplitudes (i) were calculated by fitting all-points histograms with single- or multi-Gaussian curves. The difference between the fitted "closed" and "open" peaks was taken as i. Duration histograms were fitted with exponential components of the form Σ(Aᵢ/τᵢ)exp(-t/τᵢ), where Aᵢ and τᵢ represent the relative area and time constant of the ith component, respectively, and t is the time. The mean open time was then calculated as ΣAᵢτᵢ. The number of components required to fit the duration histograms was increased until an additional component did not significantly improve the fit (Fisher and Macdonald, 1997). Single-channel current-voltage plots were fitted to a linear regression function, and conductance values were extracted from the slopes of the best-fit lines using GraphPad Prism version 6.07 for Windows (GraphPad Software).

Primary cortical neuron culture
Mouse cortical neurons were obtained from embryonic day 17.5 mouse pups (four to eight) of either sex. C57BL/6 mice aged 2-4 months were used in scheduled breeding 17 days prior to the day of neuron isolation. Two female mice were used after confirmation of pregnancy by detection of a vaginal plug. Briefly, dissociated cells were plated at a density of 6.5 × 10⁴ cells/cm² onto 12-mm round coverslips in 24-well plates coated with poly-L-ornithine (0.5 mg/ml; Sigma). Cortical neurons were incubated at 37°C in a 5% CO2 incubator and maintained in serum-free Neurobasal medium (Gibco) supplemented with B27 supplement (Gibco), glutamine (Gibco), and penicillin/streptomycin (Gibco, 20 U/ml). Cultured neurons were transfected at DIV 12 with 2 μg of γ2L or γ2L(P302L) and 0.5 μg of EGFP using X-tremeGENE9 DNA transfection reagent. Recordings were obtained at DIV 19-21. All animal procedures were performed in accordance with the Vanderbilt University Medical Center animal care committee's regulations.

Analysis of GABAA receptor-mediated mIPSCs was performed using the Clampfit 10.4 data analysis module. Detection of mIPSCs was determined from all automatically detected events in a given 200-s recording period. For kinetic analysis, the mIPSCs were automatically detected by the program initially and then manually analyzed based on the criterion that only single-event mIPSCs with a stable baseline, rising phase (10-90% rise time), and exponential decay were chosen during visual inspection of the recording trace. Double- and multiple-peak mIPSCs were excluded. Events of low amplitude (<15 pA) were discarded from this analysis. For each neuron, 1000 individual mIPSC events were recorded. Cumulative histograms of the peak currents were compared using a Kolmogorov-Smirnov (K-S) test with a significance value of p < 0.05.
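The burst criterion described earlier in this section (consecutive openings merged when the intervening closures are shorter than tcrit) can be expressed compactly. The sketch below is an assumed illustration of that definition, not the TACFit implementation; the event list is invented.

```python
def group_bursts(events, t_crit=5.0):
    """Group idealized open/closed events into bursts.

    events: alternating ('O' or 'C', duration_ms) pairs from the
    idealized 50%-threshold record.
    Returns a list of bursts, each a list of open durations (ms).
    """
    bursts, current = [], []
    for state, dur in events:
        if state == 'O':
            current.append(dur)
        elif current and dur >= t_crit:
            # a closure at least t_crit long terminates the burst
            bursts.append(current)
            current = []
    if current:
        bursts.append(current)
    return bursts

events = [('O', 1.2), ('C', 0.8), ('O', 3.4), ('C', 12.0), ('O', 0.6)]
print(group_bursts(events))   # -> [[1.2, 3.4], [0.6]]
```

The same pass with tcluster = 10 ms applied to whole bursts would group bursts into clusters.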
Structural modeling, simulation, and channel pore characterization
Three-dimensional structural models of the GABAA receptor in the open, closed, and desensitized states were built using the electron cryo-microscopy structures of the D. rerio GlyR α1 subunit (Du et al., 2015) in the open (3JAE), closed (3JAD), and desensitized (3JAF) conformations as templates. GABAA receptor α1, β3, and γ2 subunit raw sequences in FASTA format were uploaded to the Swiss-PdbViewer 4.10 server (Schwede et al., 2003) for template searching against the ExPDB database (ExPASy, http://www.expasy.org/). The initial sequence alignments between GABAA receptor α1, β3, and γ2 subunits and D. rerio GlyR α1 subunits in the open (3JAE), closed (3JAD), and desensitized (3JAF) states were generated with full-length multiple alignments using ClustalW. Sequence alignments were inspected manually to assure accuracy among structural domains solved from the template. Because the long M3/M4 cytoplasmic loop of the GABAA receptor subunits was absent in the solved GlyR structures, the corresponding fourth transmembrane segments (M4) were misaligned onto the template. Consequently, the M3/M4 cytoplasmic loop was excluded from the modelling, and separate alignments were generated for the transmembrane segments M4. Full-length multiple alignments were then submitted for automated comparative protein modelling implemented in SWISS-MODEL (http://swissmodel.expasy.org/). Before energy minimization, the resulting 3D models of human GABAA receptor α1, β3, and γ2 subunits in the three conformational states were inspected manually, their structural alignments confirmed, and the models evaluated for proper h-bonds, presence of clashes, and missing atoms using Molegro Molecular Viewer (CLC bio). Pentameric 3D GABAA receptor models were generated by combining α1, β3, and γ2 structural models in the stoichiometry 2β:2α:1γ with the subunit arrangement β3-α1-β3-α1-γ2 in counterclockwise order by superposition onto the D. rerio GlyR in the open (3JAE), closed (3JAD), and desensitized (3JAF) conformational states.

Neighborhood structural conformational changes within a radius of 6 Å of the mutated residue P302L in the γ2 subunit in the 3D structural models of the GABAA receptor in the open, closed, and desensitized conformational states were modelled using Rosetta 3.1 (Smith and Kortemme, 2008) (https://kortemmelab.ucsf.edu). Up to 20 of the best-scoring structures were generated each time by choosing parameters recommended by the application. Root mean squared (RMS) deviation was calculated between the initial (wt) structures and superimposed modelled (mutated) structures. For each 3D GABAA receptor conformational state, the average RMS deviation over 10 low-energy structures was computed and conformational changes were displayed among neighborhood structural domains. The detection, shape, and visualization of the pore in the 3D structural models of the wt and mutated (P302L) GABAA receptor in the open, closed, and desensitized conformational states were determined using PoreWalker (Pellegrini-Calace et al., 2009), a computational automated method available as a web-based resource (http://www.ebi.ac.uk/thornton-srv/software/PoreWalker/). From the PoreWalker outputs, a list of the identified pore-lining residues with their β-carbon coordinates along the pore axis and the given pore diameter profile of a channel were used to plot pore diameters as a function of distance along the pore axis.
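Turning a PoreWalker-style table of diameters at 3-Å steps into the profile plot described above is straightforward; the sketch below is a hedged illustration with invented values standing in for real PoreWalker output (a real run would parse the residue/diameter listing instead).

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical (z, diameter) pairs at 3-Å steps along the pore axis.
z = np.arange(0, 33, 3.0)                                         # Å
d = np.array([12, 10, 8, 6, 5.3, 6, 7, 6, 5, 6, 8], dtype=float)  # Å

plt.plot(d, z, "o-")
plt.xlabel("Pore diameter (Å)")
plt.ylabel("Distance along pore axis (Å)")
plt.title("Hour-glass-like pore profile (3 Å steps)")
plt.tight_layout()
plt.savefig("pore_profile.png", dpi=150)
```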
In addition, we implemented ChExVis (Masood et al., 2015) (http://vgl.serc.iisc.ernet.in/chexvis/) to determine the pore radius profile and the list of identified pore-lining residues of the transmembrane pore of the 3D structural models of the wt and P302L receptors in the open conformational state. The models were rendered using UCSF Chimera version 1.10 (Pettersen et al., 2004).

Statistical analysis
Numerical data were expressed as mean ± SEM. Statistical analysis was performed using GraphPad Prism version 6.07 for Windows (GraphPad Software). Statistical significance was taken as p < 0.05, using unpaired two-tailed Student's t test, one-way ANOVA with Dunnett's and Tukey's multiple comparisons tests, or two-way ANOVA with Sidak's multiple comparisons test, as appropriate.

Results

A de novo mutation in GABRG2, which encodes the GABAA receptor γ2 subunit, was associated with Dravet syndrome
Using a designed targeted sequencing panel, we identified a heterozygous de novo missense mutation in GABRG2, c.905C>T (encoding p.Pro302Leu), which affected the highly conserved transmembrane segment M2 that lines the pore domain of the GABAA receptor γ2 subunit (Fig. 1A,B). The affected male (patient 1007) carrying the de novo missense γ2 subunit mutation c.905C>T; p.Pro302Leu (Fig. 1B) was diagnosed with Dravet syndrome. The phenotype included early psychomotor and language developmental delay and multiple seizure types (partial seizures, generalized tonic-clonic seizures, atypical absence seizures, and myoclonus) that were often prolonged. Neither parent had cognitive impairment or epilepsy. During his first year of life, he had seizures with febrile episodes, and his EEG was normal. However, he had frequent afebrile seizures after his first year. When he was 4 years old, the patient still had frequent atypical absence seizures, with EEG showing bursts of bilateral 2- to 3-Hz spike waves and multispike and slow waves (Fig. 1C). The patient was treated with more than five antiepileptic drugs, which did not control the seizures. Therefore, his epilepsy syndrome was consistent with a diagnosis of Dravet syndrome.

The GABAA receptor γ2(P302L) subunit mutation substantially reduced GABA-evoked currents, enhanced their desensitization, and reduced GABA potency
To gain insight into the molecular mechanisms underlying the epilepsy syndrome, we determined how GABAA receptor function was affected by the presence of the γ2(P302L) subunit mutation. Whole-cell currents were evoked from lifted HEK293T cells cotransfected with α1 and β3 subunits and wt γ2L or mutant γ2L(P302L) subunits by applying a saturating GABA concentration (1 mM) for 4 s and 10 ms using a rapid exchange system (Fig. 2). Peak GABA-evoked current amplitudes recorded from cells coexpressing α1β3γ2L(P302L) subunits were reduced to ~10% of those from wt α1β3γ2L receptors (p < 0.0001; Fig. 2A, Table 1). To further determine whether the γ2L(P302L) subunit mutation altered the gating of γ2L(P302L) subunit-containing GABAA receptors, we examined the desensitization, activation, and deactivation rates of macroscopic whole-cell currents, properties that represent transitions among the closed, open, and desensitized states of the receptor. Currents from cells coexpressing mutant γ2L(P302L) subunits with α1 and β3 subunits were strongly desensitized (~80%; Fig. 2B,C), slowly activated (~2 times slower than wt receptor currents; Fig. 2D), and rapidly deactivated (~2 times faster than wt receptor currents; Fig. 2E, Table 1).

Figure 2. The de novo γ2(P302L) subunit mutation reduces GABA-activated currents and enhances desensitization. A, B, Representative GABA-evoked current traces obtained following rapid application of 1 mM GABA for 4 s to lifted HEK293T cells expressing wt γ2L and mutant γ2L(P302L) subunit-containing α1β3γ2L GABAA receptors. Traces on the right (B) were normalized to illustrate the differences in desensitization between wt and mutant receptor currents. Bar graphs summarized the effects of wt and mutant GABAA receptors on peak current amplitudes and the extent and weighted τ of desensitization. C, Bar graphs summarized the effects of wt and mutant GABAA receptors on desensitization time constants (τ1, τ2, and τ3) and relative areas (a1, a2, and a3). D, E, Representative current traces showed activation (D) and deactivation (E) obtained following rapid application of 1 mM GABA for 10 ms to cells coexpressing α1 and β3 subunits with wt γ2L or mutant γ2L(P302L) subunits. The traces were normalized for clarity. Bar plots summarized the differences in activation and deactivation between wt and mutant receptors. F, GABAA receptor concentration-response curves for wt αβγ (solid blue line), αβ (dashed green lines), and mutant γ2L(P302L) subunits (dashed red lines) were obtained. The top left inset was a comparison between currents from receptors containing wt αβ and mutant αβγ2L(P302L) subunits. Values were expressed as mean ± SEM (n = 5-6 cells for each experimental condition). The data represented the summary of 17 cells with comparable capacitances (8-12 pF) recorded from three independent transfections. Unpaired two-tailed Student's t test relative to wt γ2L: ****p < 0.0001, **p < 0.01, respectively. See Table 1 for details.
Moreover, while wt currents desensitized with three exponential components (τ1, τ2, τ3) as reported before (Bianchi et al., 2007), γ2L(P302L) currents desensitized with only two exponential components (τ1 and τ2; Fig. 2C). The first desensitization time constant (τ1) was not different from wt, but its relative contribution increased from 6 to 72% (p < 0.0001; Table 1), which may account for the acceleration of the weighted desensitization τ. Interestingly, the second time constant (τ2) of the mutant receptor was not different from the longest wt time constant (τ3) (p = 0.2103; Table 1), which suggested that the mutant lacked the second or intermediate time constant (τ2) (p = 0.0020; Table 1).

To determine whether the changes observed in receptor gating altered GABAA receptor efficacy and/or potency, we measured the effects of mutant γ2L(P302L) subunits on α1β3γ2L GABAA receptor concentration-response curves (Fig. 2F). Macroscopic peak currents were evoked by applying various concentrations of GABA for 4 s to wt and mutant receptors. For wt receptors, the EC50 for current stimulation was 7.50 ± 0.77 μM, and the maximal current was 7038 ± 302 pA (n = 5-6). The γ2L(P302L) subunit mutation caused an ~6-fold right shift of the EC50 (50.26 ± 0.82 μM), with a substantial reduction of peak current to ~13% of the maximal response to GABA (931.2 ± 57.6 pA, n = 5-6). When comparing the maximal response to GABA between mutant ternary α1β3γ2L receptors (see above) and wt binary α1β3 receptors (1204 ± 235 pA, n = 5-6), their efficacy was similar, but binary α1β3 receptor potency (5.20 ± 0.47 μM) was 10-fold higher than that of mutant ternary α1β3γ2L receptors, which was comparable with previous results (Wooltorton et al., 1997).
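The concentration-response fits above were done in GraphPad Prism; an equivalent SciPy sketch of the same four-parameter Hill equation from Materials and Methods is shown below. The data points are invented, chosen only to land near the reported wt EC50 of ~7.5 μM; this is not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(logc, e_basal, e_max, log_ec50, nh):
    # E = Ebasal + (Emax - Ebasal) / (1 + 10^((LogEC50 - X) * nH))
    return e_basal + (e_max - e_basal) / (1 + 10 ** ((log_ec50 - logc) * nh))

conc_uM = np.array([0.3, 1, 3, 10, 30, 100, 1000])      # illustrative
resp_pA = np.array([150, 600, 2100, 4400, 6300, 6900, 7050])

popt, _ = curve_fit(hill, np.log10(conc_uM), resp_pA,
                    p0=(0.0, 7000.0, np.log10(8.0), 1.2))
print(f"EC50 ≈ {10 ** popt[2]:.1f} uM, Hill slope ≈ {popt[3]:.2f}")
```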
These results suggest that, although the γ2L(P302L) subunit mutation conferred characteristics on ternary receptors that resembled those of binary receptors, their kinetic defects were distinguishable. Although we cannot totally exclude the possibility of the presence of a small proportion of α1β3 receptors, these results suggested that the mutated γ2L(P302L) subunit was effectively assembled into the receptor and, once in the surface membrane, drastically reshaped the kinetic behavior of the receptor.

Three-dimensional structural modeling of pentameric α1β3γ2 GABAA receptors showed that the γ2(P302L) subunit mutation occurred where the desensitization-closed gate of Cys-loop receptors resides
Three-dimensional structural modeling of the pentameric α1β3γ2 GABAA receptor showed that the γ2(P302L) subunit mutation occurred at the -2' position of the transmembrane segment M2 (Figs. 1A and 4B), which was at the intracellular entrance of the pore (Fig. 4A). Recent x-ray structures of the GluCL (Hibbs and Gouaux, 2011) and the electron cryo-microscopy structures of the GlyR (Du et al., 2015) demonstrated that the desensitization gate of Cys-loop receptors resides at the intracellular entrance of the pore (Gielen et al., 2015), where the -2' P302 mutation was located, which represented the second site of constriction within the pore. These findings suggest a molecular mechanism by which the γ2(P302L) subunit mutation mainly reduced GABA-evoked currents and induced fast desensitization of GABAA receptors.

Considering that residues lining the transmembrane segment M2 of the receptor contribute to gating transitions between closed and open states, we propose that the mutant γ2(P302L) subunit might have different impacts on receptor structure in the closed, open, and desensitized states. Taking advantage of the electron cryo-microscopy structures of the D. rerio GlyR (Du et al., 2015) in the open (3JAE), closed (3JAD), and desensitized (3JAF) conformational states, we modelled wt α1β3γ2 and mutant α1β3γ2(P302L) receptors in the open (Fig. 5A,D), closed (Fig. 5B,E), and desensitized (Fig. 5C,F) states (see Materials and Methods for details). As described for the GlyR (Du et al., 2015), the α1β3γ2 GABAA receptor pore exposed two sites of constriction, one at position 9' (Fig. 5A-C, top panels), which corresponded to the activation/open gate, and the other at the cytoplasmic interface at position -2' (Fig. 5A-C, lower panels), which represented the desensitization gate (Gielen et al., 2015). The α1β3γ2 GABAA receptor modeling showed that these two gates were open or closed depending on the conformational state of the receptor. Structural modeling of α1β3γ2(P302L) receptors in the open (Fig. 5D), closed (Fig. 5E), and desensitized (Fig. 5F) states showed that the γ2(P302L) subunit mutation caused rearrangements of the side chain residues confined to the transmembrane segment M2 of the γ2 subunit that propagated to the α1 and β3 neighboring subunits.

Figure 3. The γ2(P302L) subunit mutation causes slight changes in surface levels of GABAA receptor subunits. A, Wild-type (wt) γ2L or mutant (mt) γ2L(P302L) subunits were coexpressed with α1β3 subunits in HEK293T cells. Surface protein samples were biotinylated, isolated, separated by SDS-PAGE, and probed by anti-GABAA receptor subunit and anti-ATPase antibodies. B, Band intensities of the γ2, β3, and α1 subunits were normalized to the ATPase signal, and the summarized data were shown in the bar graphs.
C, Wild-type (wt) or mutant (mt) γ2L(P302L) subunits were cotransfected with α1β3 subunits into HEK293T cells, and total cell lysates were collected, analyzed by SDS-PAGE, and blotted by anti-α1, anti-β3, or anti-γ2 subunit and anti-ATPase antibodies. D, Band intensity of γ2L subunits was normalized to the ATPase signal. Values were expressed as mean ± SEM. An unpaired t test was used to determine significance.

We computed that the rearrangements of the subunit secondary structure between the wt and mutant structural models had RMS deviation values ≥0.5 Å, which are shown in rainbow colors in Figure 5D-F. In addition, the γ2(P302L) subunit modeling showed a repositioning of the side chains of the pore-lining residues of neighboring β3 and α1 subunits between the -2' and 9' positions (Fig. 5G-I), where the closed-desensitized and open gates are located. The rearrangements observed in the side chains of the pore-lining residues were dependent on the conformational state of the receptor. In addition, the residues were perturbed differently depending on their localization within the pentameric receptor when taking into account its counterclockwise arrangement as β3(chain A)-α1(chain B)-β3(chain C)-α1(chain D)-γ2(chain E) subunits (chains refer to the subunits in the pentameric structure). Thus, in the open state (Fig. 5G), the perturbations were located mainly below the 9' position, perturbing pore-lining residues in the M2 helices of the α1D-γ2E-β3A neighboring subunits (α1V279-V287, γ2A300-T310, β3A271-L278) and in the M3 helix of γ2(L350-V360). In contrast, in the closed and desensitized states, all five β3A-α1B-β3C-α1D-γ2E subunits were altered (β3A and β3C, I267-T281; α1B and α1D, S278-F285; γ2E, S293-T310), compromising a greater number of residues located mainly in M2 helices (Fig. 5H,I, only showing α1D-γ2E-β3A neighboring subunits for clarity). The latter may be due to the predicted projection of L302 towards the channel pore, which was absent in the open state (Fig. 5G-I, lower panels). It is noteworthy that γ2T310 was perturbed independent of the conformational state of the receptor, whereas β3T281 was perturbed solely in the closed and desensitized states. These threonines correspond to the 6' position in the M2 helix that outlines the channel pore. Highly conserved through all GABAA subunits, T310 and T281 are located just below the gate of the channel (L9' position) and shape the inner mouth of the channel pore.

The pore mutation P302L destabilized the channel gate in the open state
To further investigate whether the γ2(P302L) subunit mutation altered the ion channel pore, we determined the structural characteristics of the channel cavity and the pore-lining residues along the axis of the channel pore through the implementation of two computational, fully automatic methods (Pellegrini-Calace et al., 2009; Masood et al., 2015). Six views of the transmembrane domain (M1 to M4) of the GABAA receptor structure (Fig. 6A-C, top panels), which represent approximately 32 Å along the pore axis (i.e., xz-plane section), show the variation of pore diameters at 3-Å steps among wt and mutant structures in the open, closed, and desensitized states (Fig. 6D). In these pore visualization views, the top of the pore axis corresponds to the outermost part of the pore (~20' position), which is the extracellular vestibule of an asymmetric pore divided by the narrow open gate (9' position).
On the other hand, the bottom of the pore axis corresponds to the innermost part of the pore (-2' position), which is the intracellular vestibule of the pore. As previously reported for GlyR (Du et al., 2015) and GluCL channels (Hibbs and Gouaux, 2011; Althoff et al., 2014), these observations support the concept that the shape of the pore of the GABAA receptor also shows an asymmetrical hour-glass-like cavity. Considering the centers of the pore at 3-Å steps along the pore axis (Fig. 6A-C, red spheres), with their sizes proportional to the pore diameter measured at each point, the pore diameter profile shows great variability of both outer and inner vestibules among the structures (Fig. 6D). Taking into account the presence of these two vestibules, we compared the diameters of the cavity between the outermost position (21'-18') and the open gate (9' position), which represent the outer vestibule, and between the latter and the innermost position (-2'), which represents the inner vestibule, between wt and mutant structures for each conformational state. Figure 6D shows the differences of the pore diameter profiles along the pore axis for both outer and inner vestibules. Comparisons made against the wt structure (solid lines) revealed that, while in the open state the mutated structure narrowed the outer vestibule but increased the inner vestibule, in the closed state the mutated structure narrowed the inner vestibule whereas it increased the outer vestibule (dashed lines). Conversely, in the desensitized state, the mutated structure solely increased the inner vestibule. Moreover, cross-sectional views of the pore at the open gate (9' position) (Fig. 6A-D, bottom panels) showed that in both open and closed states the open gate was reduced, whereas in the desensitized state it was increased. These data suggested that in the open state, while the channel gate was constricted, the desensitized gate was enlarged.

By analyzing the radius along the channel pore (Fig. 7A,C), the pore radius at the 9' position was reduced to 5.18 Å in the P302L model in comparison with the wt structure model (5.28 Å). These subtle differences were accompanied by the rearrangement of two residues, not present in the wt structure, as outlining the channel pore. Both residues of the β3 subunit, H292 (17' position) and A273 (-2' position), were located at the outer and inner entrances of the pore, respectively. Although GABAA receptors were formed by an arrangement of α, β, and γ subunits, small differences in the entrances of the pore caused by a single substitution in the γ2 subunit might account for differences in how ions pass through the channel. These observations indeed support our hypothesis that the occurrence of mutations at the intracellular entrance of the pore is relevant to the structural mechanisms that govern the gating of GABAA receptors, as discussed in the next sections.

Figure 4. A, A 3D structural model of the α1β3γ2 GABAA receptor was displayed with the β subunits in red, α subunits in blue, and the γ subunit in gray. The γ2(P302L) subunit mutation was mapped onto the structure and represented in orange. The dashed box represented the transmembrane domain of the receptor, and transmembrane segments M1 to M4 were labeled in the γ2 subunit. B, The transmembrane domain of the 3D structural model of the α1β3γ2 GABAA receptor, with residues at position -2' (dashed circle) in each transmembrane segment M2 displayed as spheres and colored by subunit (γ2 in orange, α1 and β3 in black). Subunits α1 (blue), β3 (red), and γ2 (gray) were labeled, and transmembrane segments M1 to M4 were labeled in the γ2 subunit. The 3D model was viewed from the extracellular side as shown in the lower left corner (dashed box); for clarity, the N-terminal extracellular domain was not shown.

Figure 5. … and desensitized (C) conformational states were viewed from the extracellular side and displayed the β subunits in red, α subunits in blue, and the γ subunit in gray. Side chains of the pore-lining residues at the 9' (in black) and -2' (P302 in orange, other residues in black) positions were shown within the ion channel pore of the receptor. D-F, Superimposed 10 best-scoring 3D transmembrane domains of GABAA receptors in the open (D), closed (E), and desensitized (F) states, modelled between the initial wt and P302L mutated structures, were viewed from the cytoplasmic side. The modelled structures were in stick representation. The wt GABAA receptor structure was in gray. The structural rearrangements in side chain residues that differ between the wt and the mutated structures (RMS ≥ 0.5 Å) were represented in a different color. G-I, Superimposed 10 best-scoring transmembrane domains of P302L structures in the open (G), closed (H), and desensitized (I) states were seen parallel to the membrane (top panels) and from the cytoplasmic side (bottom panels). Two subunits were removed for clarity. Perturbed neighborhood side chains within 10 Å of L302 at the -2' position were shown within the ion channel pore, and the structural perturbations that differ between the wt and mutated structures (RMS ≥ 0.5 Å) were represented in different colors. The wt γ2 subunit structure was in gray. The channel gate at the 9' position was represented as a dashed line in the top panels. The β and α subunits were in red and blue, respectively. In the bottom panels, dashed black circles represented the channel pore, and the location of the L302 residue was shown in red dashed circles. Lists of the residues perturbed by the L302 mutation were detailed in the text.

Figure 6. … and desensitized (C) states were shown parallel to the membrane (top panels) and from the extracellular side (bottom panels) of wt and P302L structures. In the top panels, the section of each structure was obtained by cutting the protein structure along the pore axis. Red spheres represented pore centers at given pore heights, and their diameters correspond to 1/10 of the pore diameter calculated at that point. Bottom panels represent transverse sections of the pore axis at the 9' position (dashed lines in top panels), where the channel gate was located. Pore-lining side chains and residues at 3-Å steps within the cavity were colored in orange and blue, respectively. The remaining part of the transmembrane domain was shown in green. D, Pore diameter profiles at 3-Å steps corresponding to the pore-lining residues in the open (A), closed (B), and desensitized (C) states of wt and P302L structures. Horizontal black dotted lines show positions of pore-lining residues in M2 (see Figure 1A).
Figure 7. The pore mutation P302L had no effect on GABAA receptor anion selectivity. A, C, Two-dimensional representations of channel radius (in Å) along the pore axis for both P302L (A) and wt (C) structures in the open state, implemented with ChExVis. The analysis showed the pore-lining residues (colored boxes) numbered according to protein sequence and position in M2 of pentameric 3D GABAA receptor structures with the subunit arrangement β3(chain A)-α1(chain B)-β3(chain C)-α1(chain D)-γ2(chain E). Above, alignments of the transmembrane segment M2 of GABAA receptor γ2, α1, and β3 subunits showed the location and the conservation of the residues within M2 [identical (*), conservative (:), semiconservative (.)]. The residues were in concordance with the boxes below the 2D views. In red were common pore-lining residues that formed part of the inner face of the cavity predicted for both wt and P302L structures, while in blue were those only predicted for the P302L structure. In green was the site of the substitution. Below the 2D views, boxes were labeled and colored based on amino acid type as follows: E, glutamic acid (orange); T, threonine (light green); S, serine (yellow); A, alanine (red); I, isoleucine (rose); L, leucine (purple); V, valine (dark green); P, proline (brown); K, lysine (red orange); H, histidine (blue). The suffix after the residue number indicates the subunit to which the residue belongs. The dotted lines connecting the boxes represented the position of the residues along the pore axis, as indicated at the top of the panels. B, D, Current-voltage (I/V) plots and corresponding GABA-gated peak current responses of mutant γ2L(P302L) (B) and wt (D) α1β3γ2L GABAA receptors obtained over the membrane potential range from -80 mV to +60 mV following rapid application of 1 mM GABA for 4 s. The traces represented representative individual cells recorded in control (1Na) or dilute (0.5Na) solutions to determine the relative anion permeability of mutant P302L and wt receptors.

The pore mutation P302L at the -2' position did not affect GABAA receptor anion selectivity
It is assumed that the movement of ions through the channel pore obeys physical-chemical transport across energy barriers and wells (Bormann et al., 1987). Similar to other channels, GABAA receptors have wide extracellular and intracellular entrances that lead to the channel gate. Thus, amino acid substitutions at these entrances, such as that caused by P302L, may cause a loss of selectivity of the ions entering the channel, as discussed above. To gain insight into whether the predicted perturbations at the entrances of the channel pore could result in a loss of selectivity to anions, GABA-gated peak current responses of α1β3γ2L(P302L) and wt α1β3γ2L GABAA receptors at membrane potentials from -80 to +60 mV were measured in physiological (1Na) and diluted (0.5Na) extracellular solutions (Fig. 7B,D). Under our experimental conditions, at the extracellular NaCl concentration of 142 mM (1Na), the theoretical Vrev for chloride given by the Goldman-Hodgkin-Katz equation is approximately -1.63 mV. The dilution of NaCl to 72 mM (0.5Na) predicts a relative right shift of the zero chloride current to a Vrev of approximately +13.04 mV. Figure 7 shows representative I/V plots recorded from four different cells expressing receptors containing γ2L(P302L) subunits (Fig. 7B) and wt receptors (Fig. 7D) in physiological (P302L, 0.62 ± 1.07 mV, n = 3; wt, 0.02 ± 1.26 mV, n = 5; p = 0.7560, unpaired two-tailed Student's t test) and diluted NaCl concentrations.
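The predicted shift can be reproduced from the solution compositions listed in Materials and Methods with the Nernst relation for chloride, a single-ion simplification of the Goldman-Hodgkin-Katz treatment cited above. The sketch below assumes room temperature (22°C) and the 71 mM NaCl dilution given in the methods; small differences from the quoted -1.63/+13.04 mV values are expected from those assumptions.

```python
import math

def nernst_cl(cl_out_mM, cl_in_mM, T_celsius=22.0):
    """Chloride reversal potential in mV (z = -1, hence the sign flip)."""
    R, F = 8.314, 96485.0            # J/(mol*K), C/mol
    T = T_celsius + 273.15
    return -(R * T / F) * math.log(cl_out_mM / cl_in_mM) * 1000.0

# Total chloride summed from the recording solutions (MgCl2/CaCl2 give 2 Cl each).
cl_in = 153 + 2 * 1                      # KCl + MgCl2 (internal)
cl_out_1na = 142 + 8 + 2 * 6 + 2 * 1     # NaCl + KCl + MgCl2 + CaCl2
cl_out_half = 71 + 8 + 2 * 6 + 2 * 1     # 0.5Na external

print(f"1Na:   Vrev ≈ {nernst_cl(cl_out_1na, cl_in):+.2f} mV")   # ≈ -1.4 mV
print(f"0.5Na: Vrev ≈ {nernst_cl(cl_out_half, cl_in):+.2f} mV")  # ≈ +13 mV
```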
Noticeably, I/V plots for P302L mutant and wt receptor currents showed a significant rightward shift in Vrev (p < 0.0001, one-way ANOVA with Dunnett's and Tukey's multiple comparisons tests) with decreasing extracellular NaCl (P302L, 14.73 ± 0.52 mV, n = 7; wt, 13.99 ± 1.12 mV, n = 5; p = 0.5222), which demonstrated that both currents were carried by chloride ions and remained anion selective.

Single-channel gating properties of GABAA receptors were impaired by the γ2(P302L) subunit mutation
Our studies showed that, depending on whether the channels were in the closed or open state, the occurrence of the γ2(P302L) subunit mutation in the channel pore increased or decreased, in an asymmetrical manner, both outer and inner vestibules. Conversely, in the desensitized state, the mutation produced only an increase of the diameter of the inner vestibule of the pore, which widened the cytoplasmic side of the pore. In the open state, a decrease in the outer vestibule of the pore can affect the anticlockwise rotation of the transmembrane M2 segments required for channel activation (Hibbs and Gouaux, 2011; Althoff et al., 2014; Du et al., 2015). In this regard, our data showed that the β3(H292) residue was among the pore residues outlining the inner face of the cavity in the P302L model (Fig. 7A). It appears that H292, mapped at the γ2+/β3- interface in the wt model, is accessible to the pore in the P302L model, which might be produced by a clockwise tilt towards the channel pore, perturbing the anticlockwise rotation during channel activation. In addition, the narrowing of the inner vestibule of the pore in the closed state suggested that the channel was transiently trapped in a nonconducting state (Gielen et al., 2015). On the other hand, a failure of pore closure at the inner entrance in the desensitized state could lead to a partially nonconducting desensitized state, thus favoring the occurrence of subconductance openings (Dani, 1986; Bormann et al., 1987; Root and MacKinnon, 1994).

To determine whether the structural changes were correlated with the macroscopic kinetic defects and resulted in changes in the gating of the receptor, we measured the properties of GABA-evoked (1 mM) single-channel currents recorded from HEK293T cells coexpressing α1 and β3 subunits with wt γ2L or mutant γ2L(P302L) subunits. Single channels from coexpressed α1β3γ2L subunits opened into brief bursts and frequent prolonged (>500 ms) burst clusters (Fig. 8A) and opened to a high main-conductance level of ~21-28 pS and to a low-conductance level of ~14-18 pS in all seven patches (Table 2), as described previously (Twyman et al., 1990; Fisher and Macdonald, 1997). In contrast, single-channel burst clusters from coexpressed α1β3γ2L(P302L) subunits occurred at three distinct conductance levels in 14 patches. Combinations of clusters of main-conductance (~22-25 pS) and low-conductance (~13-15 pS) openings were observed in seven of the patches (Fig. 8A, Table 2). Low-conductance openings have been reported in patches with αβ receptors (Lo et al., 2010). In addition, a novel sublow-conductance cluster type of openings of ~7 pS (6.61 ± 0.15) occurred in combination with clusters of openings of ~13-15 pS in seven of the α1β3γ2L(P302L) patches recorded (Fig. 8A). It was important to note that this type of sublow-conductance opening did not occur in wt receptors (Bormann et al., 1987; Twyman et al., 1990; Fisher and Macdonald, 1997).
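The amplitude of each conductance class is estimated, per Materials and Methods, by fitting the all-points histogram with Gaussians and taking the open-closed peak separation as the single-channel amplitude i. The sketch below is a hedged, self-contained illustration of that procedure on a synthetic trace; it is not the TACFit analysis and the amplitudes are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    g = lambda a, mu, s: a * np.exp(-(x - mu) ** 2 / (2 * s ** 2))
    return g(a1, mu1, s1) + g(a2, mu2, s2)

rng = np.random.default_rng(0)
closed = rng.normal(0.0, 0.15, 50000)     # baseline points (pA)
open_ = rng.normal(-1.8, 0.20, 20000)     # openings are downward
trace = np.concatenate([closed, open_])

counts, edges = np.histogram(trace, bins=200)
centers = 0.5 * (edges[:-1] + edges[1:])
p0 = (counts.max(), 0.0, 0.2, counts.max() / 3, -1.5, 0.2)
popt, _ = curve_fit(two_gauss, centers, counts, p0=p0)

i_amp = abs(popt[4] - popt[1])            # open peak minus closed peak
print(f"single-channel amplitude i ≈ {i_amp:.2f} pA")
```

Dividing i by the driving force (here, the +80 mV pipette potential) then gives the conductance used to classify openings as main, low, or sublow.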
With the intent to simplify the kinetic analysis, those patches with sublow-conductance openings were excluded from further single-channel measurements. Therefore, we measured the single-channel properties of seven of the patches from cells coexpressing wt α1β3γ2L or mutant α1β3γ2L(P302L) receptors that exhibited only main- and low-conductance openings. When the openings were compared between wt and α1β3γ2L(P302L) receptors, no significant differences in conductance levels were observed (Fig. 8B, top panel; Table 2). Conversely, the relative distribution of main- and low-conductance openings that contributed to the behavior of α1β3γ2L(P302L) receptors was significantly different from the wt receptor. The γ2L(P302L) subunit mutation caused a significant shift in the contribution of these levels, increasing the contribution of the low-conductance openings (Fig. 8B, lower panel) while reducing the contribution of the main-conductance openings (Table 2). Both mutant and wt receptors opened to at least three different types of openings (O1, O2, and O3) with three open time constants (τo1, τo2, and τo3) (Table 2), and open time distributions were fitted best by three weighted (ao1, ao2, and ao3) exponential functions (Fig. 9A,B; Table 2). In addition, mutant receptors showed reduced channel open probability (p = 0.002) and single-channel opening frequency (p = 0.039) and increased single-channel mean intraburst closed time (p = 0.029) compared with wt receptors (Fig. 9C-F, Table 2).

Figure 8. The de novo γ2(P302L) subunit mutation shifted the openings of GABAA receptors to low-conductance states. A, Representative single-channel current traces from cell-attached patches of wt α1β3γ2L and mutant α1β3γ2L(P302L) GABAA receptors recorded from HEK293T cells. Patches were voltage clamped at +80 mV and continuously exposed to 1 mM GABA. Openings were downward, and each representative trace was a continuous 1600-ms recording. Divisions on the x-axis corresponded to 200 ms, and on the y-axis to 0.5 pA. Note that the clusters of openings fell into three distinct types of conductance: main (3), low (2), and sublow (1) openings. C refers to the closed state. B, Means of the main and low conductance (pS) and the relative contribution (%) of each conductance level were calculated from patches of wt α1β3γ2L and mutant α1β3γ2L(P302L) GABAA receptors displaying both levels of openings. Values were expressed as mean ± SEM. Two-way ANOVA with Sidak's multiple comparisons test was used to determine significance. ****p < 0.0001; ns, not significant (p = 0.7287). See Table 2 for details.

Table 2. Kinetic parameters of single-channel currents in cell-attached patches held at +80 mV with 1 mM GABA in the glass electrodes were obtained. The τs and a's refer to the time constants and fractions of the three exponential components (O1, O2, and O3), which best represent the distributions of the single-channel openings. Values refer to combined data of low- and main-conductance openings; reported are mean ± SEM (n = 7). a, Unpaired two-tailed Student's t test relative to α1β3γ2L. b, Two-way ANOVA with Sidak's multiple comparisons test relative to α1β3γ2L.

The low-conductance γ2(P302L) GABAA receptors are zinc insensitive
As described above, the presence of the low-conductance cluster type of openings suggested the formation of α1β3 receptors in the membrane of wt and α1β3γ2L(P302L) patches and supported the notion that the observed difference in the gating of α1β3γ2L(P302L) receptors was the result of the kinetic behavior of a mixture of two types of channels. Thus, we computed the kinetic properties of each component separately to gain insight into the contribution of both main- and low-conductance openings to the gating of wt and mutant αβγ receptors and then compared them with those of cells that express only αβ receptors. As shown in Table 3, it was obvious that wt and mutant αβγ receptors were not composed of αβ receptors, due to the presence of prolonged type 3 openings (O3) among both main- and low-conductance openings, which is characteristic of receptors assembled from αβγ subunits. Conversely, the lack of O3 openings together with the presence of brief O1 openings are distinct properties of αβ receptors, which makes the single-channel behavior of these receptors faster than that of αβγ receptors. These comparisons clearly showed that the receptor containing the γ2L(P302L) subunit mutation had kinetic properties that differed from those of receptors assembled from αβ subunits alone. When the two types of conductance openings were compared between wt and mutant receptors, it was notable that low-conductance openings contributed largely to the kinetic behavior displayed by mutant receptors. Notwithstanding the presence of the three types of openings in the low-conductance state, these openings were briefer than in the wt receptor, which decreased the mean open time in the mutant receptor by 70%. Unexpectedly, main-conductance openings were more prolonged in the γ2L(P302L) mutant receptors than in wt receptors, but this was not deemed significant because of their small contribution (~13%) to total channel openings. We can conclude that α1β3γ2L(P302L) receptors open mainly through clusters of bursts of low-conductance openings.

Table 3. Values were expressed as mean ± SEM. Statistical differences were determined using unpaired t test relative to wt. **p < 0.01, *p < 0.05; ns, p > 0.05. See Table 2 for details.

Further, to confirm that the low-conductance openings resulted from ternary α1β3γ2L(P302L) receptors, we analyzed the properties of GABA-evoked single-channel openings measured in the presence of 100 μM zinc (Fig. 10). Zinc is a well-known GABAA receptor antagonist whose potency is affected by the subunit composition of the receptor (Smart et al., 1991; Hosie et al., 2003). Thus, receptors lacking γ subunits are strongly inhibited by zinc. If the low-conductance openings occurred due to the presence of binary αβ receptors, they would be expected to be blocked by zinc. Noteworthy, we found that, in the presence of zinc and at a membrane potential of +80 mV, the activation of α1β3γ2(P302L) receptors occurred mainly as clusters of low-conductance openings (12.30 ± 0.92 pS, n = 5, p = 0.240, unpaired t test) (Fig. 10A,B), which was comparable with the results obtained previously (Table 2). In addition, no differences were found for wt α1β3γ2 receptors (24.84 ± 0.46 pS, n = 3, p = 0.863, unpaired t test). Up to this point, we have shown that most mutant channels open with a lower conductance than wt channels, and that this was due neither to alterations in channel permeability for chloride ions nor to the presence of binary receptors on the cell surface. These findings also suggested that the conductance of the mutant channel is expected to be linear, assuming that the chloride Vrev was ~0 mV. Figure 10B shows GABA-evoked single-channel openings at various membrane potentials in the presence of 100 μM zinc.
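From such I/V measurements, the slope conductance is obtained by a linear fit of single-channel amplitude against membrane potential, as summarized in the next paragraph. The sketch below illustrates that fit; the amplitude values are invented approximations of the mutant low-conductance openings, not the raw data.

```python
import numpy as np
from scipy.stats import linregress

V_mV = np.array([80.0, 100.0, 120.0])    # membrane potentials
i_pA = np.array([0.98, 1.27, 1.56])      # single-channel amplitudes (illustrative)

fit = linregress(V_mV, i_pA)
# slope is in pA/mV = nS; multiply by 1000 for pS
print(f"slope conductance ≈ {fit.slope * 1000:.1f} pS")
```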
At membrane potentials of +100 mV and +120 mV, the γ2L(P302L) mutant receptors displayed openings with conductances of 13.38 ± 0.35 pS (n = 5, p < 0.0001, unpaired t test) and 23.72 ± 1.58 pS (n = 5, p = 0.004, unpaired t test), which were distinguishable from the wt receptor (29.38 ± 0.97 pS, n = 3, and 35.83 ± 2.63 pS, n = 3, respectively). As a result, low-conductance openings for mutant receptors had a slope conductance of 14.24 ± 1.72 pS, which was significantly different from wt receptors (24.17 ± 1.18 pS, p = 0.000165). In line with the structural modeling, these results demonstrated that mutant α1β3γ2L(P302L) receptors might perturb the conduction pathway and destabilize the open conformation while trapping the receptor in nonconducting closed and desensitized conformations. This is in accord with the reductions in maximum currents, the increased relative contribution of low- (and sublow-) conductance openings over the main-conductance openings, the enhanced desensitization, slowed activation, and accelerated deactivation in functional experiments.

Figure 10. The low-conductance γ2(P302L) GABAA receptors were zinc insensitive. A, Single-channel current traces from cell-attached patches of mutant γ2L(P302L) and wt receptors recorded at +80 mV and continuously exposed to 1 mM GABA and 100 μM zinc. Cluster openings were downward. Upper panels were representative traces continuously recorded for 4000 ms. Red lines indicated that the channel was closed (C), while the gray lines indicated when the channel was open (O) in steps of 1 pA. Dashed boxes enclosed an expanded section of 1600 ms, which was shown below as indicated. B, Representative low-conductance current traces from cell-attached patches of mutant γ2L(P302L) receptors recorded at various membrane potentials in the presence of 1 mM GABA and 100 μM zinc. For the linear regression analysis of conductance, the membrane potentials were indicated as negative values. The traces were recorded from the same patch at the membrane potentials indicated. The red lines represented the closed state of the channels. In the right panel, single-channel current-voltage (I/V) plots of mutant γ2L(P302L) and wt receptors were shown. The amplitude of the most frequently occurring opening was used for the construction of the I/V plots; thus, low-conductance openings were used for the mutant receptor, whereas main-conductance openings were used for wt receptors. Data points were values expressed as mean ± SEM (n = 3-5 cells for each experimental condition). Solid lines represented fits of the linear regression equation, and the dotted lines represented 95% confidence intervals of the best-fit line. Zn2+, zinc.

GABAA receptor-mediated mIPSCs were reduced by overexpressing γ2(P302L) subunits in cultured neurons
Our data demonstrated that the γ2(P302L) subunit mutation significantly decreased the function of GABAA receptors expressed in HEK293T cells, suggesting that there must also be an impairment of their function at GABAergic synapses. As previously described (Swanwick et al., 2006), widespread expression of clusters of GABAA receptors containing αβγ subunits at GABAergic synapses is reached in mature cultured neurons around DIV 19-21. To determine whether the mutant γ2(P302L) subunit altered the function of GABAergic synapses, we overexpressed wt γ2L or mutant γ2L(P302L) subunits in cultured cortical neurons at DIV 12 and measured the current amplitudes and kinetic properties of GABAA receptor-mediated mIPSCs at DIV 19-21 (Fig. 11). GABAA receptor-mediated mIPSCs recorded after overexpression of wt γ2L subunits displayed current amplitude (-33.94 ± 3.08 pA, n = 4), rise time (1.70 ± 0.08 ms), and decay (25.63 ± 2.23 ms) values suggesting that the transfected wt γ2L subunits formed functional receptors with endogenous αβ subunits, resembling mIPSCs generated at the GABAergic synapse (Fig. 11). In contrast, overexpression of mutant γ2L(P302L) subunits decreased current amplitude (-21.16 ± 0.72 pA, n = 4, p = 0.0156) and slowed rise time (2.13 ± 0.14 ms, p = 0.049) and decay (69.11 ± 8.96 ms, p = 0.009). These results suggest that mutant γ2L(P302L) subunits were incorporated into GABAergic synapses and caused a dominant negative effect, reducing GABAergic function. Although the type of receptor expressed in the membrane was unknown, since multiple GABAA receptor subunits were expected to be coexpressed (Swanwick et al., 2006), these findings were comparable with the gating deficiencies found in α1β3γ2L(P302L) receptors coexpressed in HEK293T cells.

Figure 11. GABAA receptor-mediated mIPSCs were altered by overexpression of γ2(P302L) subunits in cultured neurons. A, Representative GABAA receptor-mediated mIPSC traces recorded at -60 mV in cultured cortical neurons overexpressing mutant γ2L(P302L) subunits or wt γ2L subunits. B, A segment of each raw trace in A (red dashed lines) was expanded for better visualization of the differences in mIPSC properties. C, Cumulative histograms of GABAA receptor-mediated mIPSC current amplitudes of neurons overexpressing mutant γ2L(P302L) subunits or wt γ2L subunits show the Kolmogorov-Smirnov (K-S) comparison test with a statistical value of 61% (p < 0.00001). D, Overlapped ensemble average traces of mIPSCs recorded from receptors containing wt γ2L (blue) and mutant γ2L(P302L) (red) subunits showed reduced amplitude and kinetic changes. E-G, Bar graphs summarize the effects of overexpression of wt and mutant γ2L(P302L) subunits on current amplitude (E), rise time (F), and decay (G) of GABAA receptor-mediated mIPSCs. Values were expressed as mean ± SEM. Statistical differences were determined using unpaired t test relative to wt. **p < 0.01 and *p < 0.05, respectively.

Discussion
The findings presented here shed light onto a novel mechanism in the pathogenesis of Dravet syndrome. This study identified a de novo missense mutation in the GABAA receptor γ2 subunit, P302L, in a patient with Dravet syndrome. The first question to consider was whether a single missense mutation was associated with this catastrophic epileptic encephalopathy. It is well known that missense mutations in coding sequences of the GABAA receptor γ2 subunit gene GABRG2 are associated with relatively mild epilepsy phenotypes, including childhood absence epilepsy and febrile seizures (Wallace et al., 2001; Audenaert et al., 2006; Shi et al., 2010), and with the genetic epilepsy with febrile seizures plus (GEFS+) spectrum (Baulac et al., 2001). Intriguingly, missense mutations in the GABAA receptor α1 subunit gene have been reported in cases of Dravet syndrome (Carvill et al., 2014). Taking into account the location of these mutations in the α1 subunit, it is significant that all three mutations (R112Q, G251S, and K306T) share the same topological domain in the receptor, which is directly related to the ligand-binding coupling mechanism (Bianchi et al., 2001; Bera et al., 2002; Keramidas et al., 2006; Wang et al., 2010; Calimet et al., 2013; Althoff et al., 2014). Thus, the α1(R112Q) and α1(G251S) subunit mutations face the β+/α- subunit-subunit GABA-binding interface within the N terminus and M1 transmembrane helix, respectively, whereas the α1(K306T) subunit mutation is located in the M2-M3 extracellular loop, which is part of the coupling interface of the GABAA receptor.

In line with these findings, the next question to consider was whether the location of the γ2 subunit mutation in the GABAA receptor may account for the severity of the epilepsy phenotype. While most of the missense γ2 subunit mutations are preferentially distributed across the N-terminal domain and in the outermost region of the transmembrane segment M2, not directly facing the β+/α- GABA-binding coupling interface (Baulac et al., 2001; Wallace et al., 2001; Audenaert et al., 2006; Shi et al., 2010; Lachance-Touchette et al., 2011; Carvill et al., 2013; Reinthaler et al., 2015), the γ2(P302L) subunit mutation mapped to the pore-forming region of the GABAA receptor. This was probably the most significant feature distinguishing the defects caused by the occurrence of this mutation, as discussed below.

A mechanism of gating impairment of GABAA receptors by the γ2(P302L) subunit mutation
For Cys-loop receptors (Hibbs and Gouaux, 2011; Althoff et al., 2014; Du et al., 2015), pore-lining residues of the transmembrane M2 domain of GABAA receptors form the axis of the channel pore. The pore exposes two gates that are open or closed depending on the conformational state of the receptor. One gate corresponds to the activation/opening gate, which is at position 9', and the other, at the cytoplasmic interface, represents the desensitization gate at position -2' (Fig. 5). Thus, the conducting pathway shows an asymmetrical hour-glass-like cavity divided by the narrow open gate at the 9' position. The top of the pore axis corresponds to the outermost part of the pore, which is the extracellular vestibule, and the bottom of the pore axis (-2' position) corresponds to the innermost part of the pore, which is the intracellular vestibule, where the γ2(P302L) subunit mutation occurs. Through structural simulation of the different states that the GABAA receptor adopts during the activation and deactivation of the channel, we found that the shapes of the vestibules that determine the features of the pore were deeply affected by the mutation, thereby altering the gating of the receptor. Further, we propose a structural mechanism to explain the deficiencies observed in the gating of receptors containing the γ2(P302L) subunit mutation. The decrease in the outer vestibule of the pore in the open state can affect the anticlockwise rotation of the transmembrane segments M2 required for channel activation (Hibbs and Gouaux, 2011; Althoff et al., 2014; Du et al., 2015) and may account for the slowing of the activation of the channel and the occurrence of low-conductance openings. The narrowing of the inner vestibule of the pore in the closed state instead suggests that the channel is transiently trapped in a nonconducting state (Gielen et al., 2015), explaining the profound desensitization of the currents.
Finally, the pore closure failure at the inner entrance in the desensitized state could lead to a partially nonconducting desensitized state, favoring the occurrence of sublow-conductance openings (Dani, 1986; Bormann et al., 1987; Root and MacKinnon, 1994), corroborating the experimental observations of clusters of sublow openings. Further, the slowing of recovery from desensitization and the reduction in activation caused by the γ2(P302L) subunit mutation may be responsible for facilitating allosteric modulation by zinc (Barberis et al., 2000) and the increase in zinc sensitivity.

Missense epilepsy mutations in the M2 domain of Cys-loop receptors cause hyperexcitability by altering current desensitization and channel conductance

Among Cys-loop family receptors, structural studies revealed that the nicotinic acetylcholine receptor (nAChR) (Unwin, 2005), the GABA A receptor (Miller and Aricescu, 2014), the GlyR (Du et al., 2015), and the GluClR (Hibbs and Gouaux, 2011) share a common ion-conducting pore scaffold composed of four transmembrane segments (M1 to M4), with the transmembrane segment M2 lining the pore. Thus, it has been suggested that these receptors share a similar structural basis for the ligand binding-channel gating coupling mechanism (Unwin, 2005; Hibbs and Gouaux, 2011; Althoff et al., 2014; Miller and Aricescu, 2014; Du et al., 2015). Missense mutations of the nAChR α4 subunit associated with autosomal dominant nocturnal frontal lobe epilepsy affected highly conserved residues in M2, specifically those residues predicted to be part of the M2 axis that rotates when the agonist binds and opens the pore. Similar to the defects caused by the GABA A receptor γ2(P302L) subunit mutation, the nAChR α4(S248F) (Weiland et al., 1996; Kuryatov et al., 1997) and α4(S252L) (Matsushima et al., 2002) subunit mutations caused faster desensitization and decreased single-channel conductance. In addition, in homomeric GlyR α1 subunits, autosomal dominant mutations associated with hyperekplexia were also found clustered in and around the pore-lining transmembrane segment M2, impairing GlyR α1 subunit gating. At the extracellular end of M2, the GlyR α1(R271Q) and GlyR α1(R271L) subunit mutations reduced single-channel conductance (Langosch et al., 1994), while the GlyR α1(V280M) subunit mutation in the M2-M3 loop enhanced glycine sensitivity and spontaneous channel activity (Bode et al., 2013). The GABA A receptor γ2(P302L) subunit missense mutation occurred at an evolutionarily conserved proline residue in the pore region of GABA A receptors, GlyRs (Du et al., 2015), and GluCl receptors (Hibbs and Gouaux, 2011). Remarkably, the autosomal dominant hyperekplexia GlyR α1(P250T) subunit mutation occurred at a position homologous to that of the γ2(P302L) subunit mutation. Resembling the γ2(P302L) subunit mutation, the GlyR α1(P250T) subunit mutation reduced glycine-activated current amplitudes, induced fast desensitization, and reduced single-channel conductance (Saul et al., 1999), which accounted for the hyperekplexia phenotype. None of these mutations impaired cell surface expression. Thus, the main mechanism by which these mutations cause hyperexcitability appears to be impairment of receptor gating, thereby contributing to the epilepsy phenotype.
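As a note on the quantitative workflow behind this section: the slope conductances and group comparisons reported above come from linear regression of single-channel I/V data and unpaired t tests. The following Python sketch illustrates that arithmetic only; it is not the authors' analysis code, and every number in it is an invented placeholder.

```python
# Hypothetical sketch: estimating single-channel slope conductance from
# current-voltage (I/V) data, as in the linear-regression fits described above.
# All values below are made-up illustrations, not data from the study.
import numpy as np
from scipy import stats

# Membrane potentials (mV) and most-frequent single-channel amplitudes (pA)
voltages_mv = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
currents_pa = np.array([0.55, 0.85, 1.15, 1.40, 1.70])  # hypothetical mutant openings

fit = stats.linregress(voltages_mv, currents_pa)

# Slope has units pA/mV = nS; multiply by 1000 to express it in pS
conductance_ps = fit.slope * 1000.0
print(f"slope conductance ~ {conductance_ps:.1f} pS (r^2 = {fit.rvalue**2:.3f})")

# Comparing per-patch conductance estimates between groups, unpaired t test
wt_ps = np.array([23.1, 24.9, 24.5])                # hypothetical wild-type patches
mut_ps = np.array([13.9, 15.2, 12.8, 14.6, 14.7])   # hypothetical mutant patches
t_stat, p_val = stats.ttest_ind(wt_ps, mut_ps)
print(f"unpaired t test: t = {t_stat:.2f}, p = {p_val:.4g}")
```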
Creative Steps for Learning Islamic Religion (Classroom Management Study at SMK Muhammadiyah 2 Wates)

INTRODUCTION

The creative learning step is a staged learning process that emphasizes how a facilitator can support the learning process so as to create supportive and comfortable learning conditions; in this way, students can be stimulated to carry out creative and fun activities (Kalida, 2015). To achieve this creativity, a teacher needs to improve the quality of the learning process in a directed and organized way, in order to realize the goal of creative learning. The creativity of Islamic religious learning includes five aspects, namely Al-Qur'an Hadith, Fiqh, Aqidah Akhlak, SKI, and Arabic, from the MI and MTs to the MA levels. The application of Islamic religious learning certainly needs to be managed, since management provides a clear direction for building successful learning (Warsono, 2016).

Successful learning undeniably depends on many aspects, be it the goals, materials, media, methods/strategies, tools, or evaluations applied (Kulsum, 2020). Success is obtained through many preparatory steps and processes, starting with planning, organizing, directing, and controlling, often known as the 4P instrument, which serves as the key to controlling the learning process. Control of the learning process is aimed at building a pleasant learning atmosphere, in order to optimize and increase students' interest and motivation in learning (Asmara & Nindianti, 2019). An effort to increase students' interest and motivation to learn certainly requires creativity, so that a sense of curiosity or interest is created in students to participate in the teaching and learning process. This creativity can be awakened when a teacher can first recognize the attitudes, environment, and habits found in the learning place, which is often called the approach step (Astuti, 2019). In this approach, a teacher is expected to determine steps that are appropriate and creative, in keeping with the students' condition at the place of implementation and with the goals that have been determined.
In creative learning, it is not only teachers who are required to play an active role; it is also important for students to show their involvement in the process of teaching and learning activities, which is the key to effective learning. In effective learning, problems often arise in the learning process, whether from students who are difficult to keep focused, a class environment that is not organized, and so on. These problems can interfere with the achievement of learning, especially in the religious subjects known as PAI, which often make students bored with their lessons. This boredom sometimes occurs because the interests and talents of students are not known by educators, so learning is considered boring and creates many obstacles in the classroom. These existing constraints (Goddess, 2012) can be viewed from various perspectives so that solutions can be found that support and achieve the learning objectives themselves (Zulkifli, 2011: 1). Creative learning has various characteristics according to Solihatin (2012: 161), such as:
1.1. Involving students in learning both emotionally and intellectually.
1.2. Encouraging students to express their opinions in accordance with the material.
1.3. Giving students the opportunity to be accountable for all the assignments they receive.

In discussing the results of the research, an interpretation of this research in the field, namely at SMK Muhammadiyah 2 Wates, Kulon Progo Regency, is sought; the results of the research are obtained in accordance with the existing reality, so that they can be discussed as follows.

Learning with Imaginative Concepts

Creative learning is a teaching and learning process that has imaginative characteristics and is in accordance with reality, so everyone can have creativity and imagination in accordance with each educator who teaches (Pirto, 2011).

1.1.2. Learning that Stimulates Original Ideas and Work

Creative learning must include several original ideas, which educators then actualize through independent learning, such as educators assigning students to memorize hadiths or letters in textbooks by coming forward one by one in front of the class. This idea is in accordance with the interview with subject "I" as follows: "When creating creative learning concepts and original works, in the learning process, students must be able to memorize hadiths or letters, and the methods used to memorize are free, according to the abilities of students". This produces memorization that is in accordance with recitation, whether short or long, where students are always involved in teaching and learning activities.
Presentation of Varied Learning

During learning, activities can be interesting, exciting, or monotonous. The activities implemented by educators must involve a variety of learning and teaching styles that are not boring, together with a pattern of interaction that is varied and understandable for students: for example, grouping students in discussion activities, dividing tasks among students, then presenting the results of the discussion in class, holding questions and answers, and concluding the results of the discussion. Educators are expected to present teaching materials in a fun, serious, innovative, and interesting manner. Various kinds of learning can thus be implemented by educators at SMK Muhammadiyah 2 Wates, such as presenting material through an imaginative, varied learning process. Varied learning styles greatly affect student learning outcomes; for example, educators always pay attention to the nature and character of students, and they teach not only sitting in front but also moving around the class. Looking at the situations and conditions, creative learning means always thinking divergently (Supriatna, 2019). Creative learning will produce various answers according to the information absorbed by each individual (Munandar, 2004).

1.1.4. Immediate Assessment

In learning, there are various kinds of assessment, namely direct assessment and assessment of correct and incorrect tajwid. That way, students feel comfortable and have a sense of mutual trust with one another, and then present in front of the class together with their group. By using this assessment, students build self-confidence and practice writing Arabic so that it can be read easily, so that students are always involved and produce high creativity (Barbot, 2011).

Methods that Stimulate Creativity

Brainstorming is a method of exchanging opinions in which groups are led to generate innovative ideas, exchange ideas, and express their thoughts on certain issues; this method can thus provide information drawing on the very complex knowledge and experiences of students. This method aims to enable teachers to train and develop their ideas to stimulate student creativity. By using this method, educators have tried to implement creative learning (Santrock, 2007). The point of the learning process is to make students more creative in the learning being carried out.
Combining Learning Methods

There are various procedures for achieving systematic learning, namely using highly varied methods so that learning becomes more interesting and students can take in the information conveyed, so that learning is directed and fun and avoids boredom. During the learning process, as much as possible, the teacher creates a process that combines various methods to achieve the learning goals, so that the accuracy of the combined method can be seen (Hebert, 2010). In addition, creative learning also includes educators being able to convey motivation to students and to bring out student creativity during the learning process by combining varied methods and strategies, for example group work with the lecture method (Lin, 2011). The creative methods used by educators must meet requirements such as: the method motivates enthusiasm for learning, supports the development of students' knowledge, encourages students to show their work, attracts students' enthusiasm to continue learning, and can be applied by students in their own learning. The learning methods used at SMK Muhammadiyah 2 Wates include dividing into groups, question and answer, demonstrations, lectures, and so on, depending on how the teacher varies the methods. Thus, educators at SMK Muhammadiyah 2 Wates have really implemented creative learning, producing learning that is fun, not boring, and easily accepted by students.

Teacher Creativity in Developing Student Learning Media and Resources

Learning objectives involve various components, such as media and learning resources, which help achieve creative learning when combined with educators' varied development of media and methods. The selection of learning media can provide benefits and functions that help students learn from resources, because utilizing a variety of media facilitates the delivery of knowledge and other information to students, making it easy to understand and helping to achieve optimal learning goals (Mahnun, 2012). Several aspects of a PAI teacher's creativity in developing learning media are explained below.

Create Your Own Media

When the teaching and learning process takes place, media are needed as tools for achieving the learning goals. In developing learning media there are sometimes limitations, namely inadequate facilities and infrastructure, so educators can develop their potential by making media themselves, such as learning designed like a game, which requires materials such as paper, markers, rulers, and so on.

Modifying Learning Media

Modifying learning media means developing the media that will be used for the learning process so that they become new and highly effective in the learning process.

Combining Teacher Media and Student Work

The results of students' work can be used as examples by educators in learning media; for example, when making calligraphy, the best calligraphy is selected and can be used as a reference for other students' calligraphy. That way, students will feel happy and proud of their own work when educators make it an example for other students (Mahnun, 2012).
Development of Learning Resources

SMK Muhammadiyah 2 Wates has learning resources in the form of a library and workshops that are used for the learning process. Educators can assign students to study from these existing learning resources when the educators cannot teach for some reason, or students can search the internet to find complete material.

B. Class Management

Classroom management is a way of supervising student behavior in class. According to Mulyadi, class management is a skill that every teacher must possess in understanding, analyzing, deciding, and acting to improve the class atmosphere and dynamics, in order to achieve educational goals and support the process of educational interaction as a medium for empowering the potential of existing classes as much as possible.

1) Planning
a) Definition of planning
According to Terry, as cited in Nadzir, planning is a step to determine the work that must be carried out by a group to achieve the goals to be reached (Nadzir, 2013). Learning planning, according to Suryapermana, is decision-making carried out in order to achieve predetermined goals, which includes determining policies, programs, learning procedures/methods, and the learning activities to be carried out (Widyanto & Wahyuni, 2020).
b) Planning functions in learning
i) Learning planning as a science. Learning planning is a way to create, develop, implement, evaluate, and maintain learning situations or facilities for learning material in all its complexity.
ii) Learning planning as reality. This is the stage of developing learning by providing learning linkages measured in terms of time, a process of learning linkages from time to time, across all the processes that have been carried out, in which the teacher has checked all activities systematically.
iii) Learning planning as a system. Learning planning functions as a driving force for the learning process; it is also carried out as an effort to develop a learning system through a systematic process, which can then be implemented with reference to the planning system.
iv) Planning as technology. This planning encourages the use of techniques that can be applied to develop behavior and theories to find solutions to problems in learning (Seknun, 2014).

2) Organizing
Handoko said that organizing is the breakdown of all the work needed to achieve a goal. Melayu S.P. Hasibuan stated that organizing is the process of managing and classifying the activities needed to achieve goals. In line with Hasibuan's opinion, the notion of organizing is also explained by Asnawir, as cited in Subekti: organizing is the activity of compiling and forming working relationships among people so that a unit can be realized to achieve the intended goals (Subekti, 2022). Meanwhile, according to Darma, as quoted in Yuanita, class organization has the goal of assisting in grouping tasks; forming groups can make it easier to achieve organizational goals and helps people collaborate in groups or classes (Yuanita, 2022).

Class organizing steps
According to Karwati and Priansa, as quoted in Yuanita, the steps used in class organizing are:
i. Observing the various goals and targets that have been previously set.
ii. Reviewing the work that has been compiled and re-specifying it so that the various tasks are translated into several activities.
iii. Determining members who have the capability and capacity to realize the tasks and activities.
iv. Delivering valid and transparent information to educators regarding the obligations that must be carried out, the time and place of implementation, and working relationships (Yuanita, 2022).

According to A. Soedomo Hadi, as quoted in Oci, class organization includes in-class and out-of-class organization, teaching and learning (KBM) organization, student personnel organization, and class physical facility organization. Intra-school organization can take the form of teaching and learning management carried out in class. Intra-class activities are activities carried out during school hours, while extra-class activities are carried out outside teaching and learning hours. Activities outside the classroom aim to reduce students' boredom with studying in class; of course, the learning environment is adjusted to the learning material. In organizing classes, teachers play an important role, because educators need to control classes appropriately and establish good communication with students (Oci, 2018).

3) Briefing
According to Liang Gie, as quoted in Lin, direction is the activity of directing or commanding activities so that they are carried out in accordance with what was planned and the results that were set. According to Suharsimi and Arikunto, this briefing is intended to understand whether the strategies, methods, and techniques are suitable for the delivery of learning (Meriza, 2018).

4) Control
Harold Koontz, in Hasna, argues that control is an activity that basically monitors and assesses subordinates so that all the plans that have been prepared achieve suitability with the goals that have been set. In line with Harold's opinion, Earl P. Strong argued that control is a way to regulate several aspects within an agency so that implementation can be in line with the provisions of a plan (Hasna, 2019). From this, it can be concluded that control is an activity of directing an institution to carry out activities in accordance with predetermined goals.

5) Class Control
According to Salabi, class control is part of curriculum management, which in turn is part of education management (Salabi, 2016). Teachers have an important role in class control, especially in PAI learning, which has standards of learning effectiveness, including:
(a) Involving students actively. According to William Burton, as cited in Wahyudi, teaching is a way of guiding student learning activities so that students have a willingness to learn. Therefore, students need to be subjects in the learning process: students are required to be highly imaginative because they are the subjects who do the learning (Wahyudi, 2022).
(b) Being able to attract the interest and attention of students. Interest is something that is embedded in every individual. A high level of interest in learning can create conditions/environments for effective learning. The contribution of students in learning is often associated with students' behavior in the knowledge, affective, and psychomotor domains.
(c) Being able to generate student motivation. Motivation is something that can activate forms of behavior in order to meet needs and achieve goals, given individual readiness. Such behavior can be directed toward certain goals.
Creative Learning Steps at SMK Muhammadiyah 2 Wates

A teacher is required to apply various creative learning methods in order to attract students' interest in learning, so that the learning objectives can be achieved optimally. Several learning methods and steps are usually applied by ISMUBA teachers at SMK Muhammadiyah 2 Wates, including:

I. Lecture method
The creative PAI learning applied at SMK Muhammadiyah 2 Wates uses the lecture method. The lecture method is a method often used by PAI teachers in general in the learning process, in which the material is conveyed orally. "The use of this method is considered effective in delivering learning that is adapted to the conditions of the students in the class," said Indarto, a teacher at the school. This method has advantages and disadvantages in its implementation, including:
A) Advantages of the lecture method
1) This method is easy to apply.
2) It requires no tools.
3) It is effective for classes with large numbers of students.
4) Class control can be carried out in full.
5) Learning with this method can be delivered as a whole.
B) Disadvantages of the lecture method
Our finding is that the application of the lecture method at SMK Muhammadiyah 2 Wates has drawbacks: students feel bored because the lessons seem monotonous and one-way.

II. Discussion method
This method has the main goal of solving a problem by involving students, training their reasoning power and critical thinking. At SMK Muhammadiyah 2 Wates, this method has advantages and disadvantages, namely:
a) Advantages of the discussion method
(1) Students can develop critical thinking skills.
(2) It stimulates students to be more active and creative in learning.
(3) Students can exchange ideas with other friends in a group.
(4) It can foster a spirit of leadership.
b) Disadvantages of the discussion method
1) Learning will be dominated by active students.
2) This method requires a long allocation of time.

Class Management at SMK Muhammadiyah 2 Wates

1. Learning Planning at SMK Muhammadiyah 2 Wates
In carrying out lesson planning at SMK Muhammadiyah 2 Wates, the teacher aligns the ATP with the teaching modules so that the learning objectives can be matched to the modules. According to Utami, teaching modules are learning tools based on the applied curriculum, with the aim of achieving predetermined competency standards (Utami, 2022). The next step is for the teacher to prepare the material to be taught; since the learning material is already in the teaching module, what remains is to implement the learning in accordance with the students' level of understanding of the material.

Class Organization at SMK Muhammadiyah 2 Wates
In organizing the class, the teacher prepares a learning contract at the beginning of the semester. The learning contract contains the teacher's agreement with students for the duration of the learning, and it consists of things that students may and may not do while participating. The learning contract is a class organizing tool that keeps learning running conducively. In addition to learning contracts, teachers also often remind students about the learning process so that students are interested in and serious about participating in teaching and learning activities, because the characteristics of each student are different. Furthermore, the teacher can also hold learning outside the classroom so that students do not feel bored during the learning process.
Class Briefing at SMK Muhammadiyah 2 Wates
The teacher reminds or reprimands students who show low interest in learning. The teacher's warning usually involves inviting students who fall asleep during the teaching and learning process to wash their faces; students also often skip classes during class hours.

Class Control at SMK Muhammadiyah 2 Wates
The teacher always reminds students who are not concentrating on learning; the teacher can also give educational punishments to students, such as coming to the front of the class to explain the material that has been delivered by the teacher. The teacher also works with students so that they remind each other about classmates who are still lazy about following the lesson or are busy in class.

CONCLUSION

Based on the results of the research, it can be concluded that class management at SMK Muhammadiyah 2 Wates has been implemented as optimally as possible. This is marked by the existence of a plan and the setting of achievement targets by the school and homeroom teacher. The PAI teachers at SMK Muhammadiyah 2 Wates are very attentive to the condition of each student, especially in the learning process. The material taught does not always have to refer to the teaching module but also uses other references, for example YouTube videos and others.

The teacher also always reminds students who are not concentrating on learning and keeps punishment of students who deviate to a minimum by maintaining appropriate interactions between the homeroom teacher and students. For example, the teacher gives educational punishments to students, such as coming forward to explain, in front of the class, the material that has been delivered by the educator.
Mitophagy Eliminates the Accumulation of SARM1 on the Mitochondria, Alleviating Axon Degeneration in Acrylamide Neuropathy

Background: Sterile-α and toll/interleukin 1 receptor motif containing protein 1 (SARM1) is the central executioner of axon degeneration. Although it has been confirmed to have a mitochondrial targeting sequence and can bind to and stabilize PINK1 on depolarized mitochondria, the biological significance of the mitochondrial localization of SARM1 is still unclear. Chronic acrylamide (ACR) intoxication causes the typical pathology of axonal injury and therefore offers the opportunity to explore the interaction between mitochondria and SARM1 during the latent period of axon destruction.

Methods: The expression and mitochondrial distribution of SARM1 were evaluated in in vivo and in vitro ACR neuropathy models. Transmission electron microscopy, immunoblotting, and immunofluorescence were performed to evaluate mitochondrial dynamics and PINK1-dependent mitophagy. An LC3 turnover experiment and live cell imaging were conducted to further assess mitophagy flux. To verify the effect of mitophagy on SARM1-mediated axon degeneration, low-dose, low-frequency rapamycin was administered to ACR-exposed rats to increase basal autophagy.

Results: In a time- and dose-dependent manner, ACR induced peripheral nerve injury in rats and truncated the axons of differentiated N2a cells. Moreover, the severity of this axon damage was consistent with the up-regulation of SARM1. SARM1 prominently accumulated on mitochondria, and at the same time mitophagy was activated. Importantly, rapamycin (RAPA) administration eliminated mitochondria-accumulated SARM1 and alleviated SARM1-dependent axonal degeneration.

Conclusions: Complementing the coordinated activity of NMNAT2 and SARM1, the mitochondrial localization of SARM1 may be part of the self-limiting molecular mechanisms of Wallerian axon destruction. In the early latent period of axon damage, the mitochondrial localization of SARM1 helps it to be sequestered by the mitochondrial network and degraded through PINK1-dependent mitophagy to maintain local axon homeostasis. When the mitochondrial quality control mechanisms break down, SARM1 causes irreversible damage leading to axon degeneration. Moderate autophagy activation can be invoked as a potential strategy to alleviate axon degeneration in ACR neuropathy and even in other axon degeneration diseases.

Background

Axon degeneration is a common hallmark of neuropathies, traumatic injury, and multiple neurodegenerative disorders. The molecular mechanism of axon degeneration has been primarily elucidated through the study of the Wallerian axon destruction pathway. Wallerian degeneration, first described by the British neurophysiologist Augustus Waller in 1849 (1), refers to rapid axonal fragmentation, after a long period of relative latency, driven by a genetically encoded self-destruction program that is activated distal to the axon cut site (2)(3)(4). This is the most extreme and typical manifestation of axon degeneration. Over the past decades, great progress has been made in the understanding of this active process. The coordinated activity of pro-survival and pro-degeneration factors, exemplified by nicotinamide mononucleotide adenylyltransferase 2 (NMNAT2) and sterile-α and toll/interleukin 1 receptor motif containing protein 1 (SARM1), keeps degeneration signaling in an "off" state in healthy axons.
After axotomy, NMNAT2 is rapidly consumed in the axon segment distal to the injury site, due to the interruption of axonal transport and to degradation (5,6), hindering nicotinamide adenine dinucleotide (NAD+) synthesis. The disruption of NAD+ synthesis increases the amount of nicotinamide mononucleotide (NMN), the precursor of NAD+, in the axon. The raised ratio of NMN to NAD+ (7) activates SARM1, which further consumes NAD+, switching the axon into the irreversible stage accompanied by adenosine triphosphate (ATP) depletion, neurofilament hydrolysis, and axon fragmentation (8). SARM1 is the defining molecule of axon destruction. Activation of SARM1 triggers metabolic catastrophe and axon destruction, whereas genetic deletion protects axons from various injuries (9). As the central executioner of axon degeneration, SARM1 is evolutionarily highly conserved, having homologues in mouse, Drosophila, zebrafish, Caenorhabditis elegans, amphioxus, and horseshoe crab (10)(11)(12)(13). These homologues share a common domain architecture constituted of autoinhibitory N-terminal armadillo motifs (ARM), tandem sterile α motif (SAM) domains that mediate constitutive homomultimerization, and a C-terminal toll/interleukin 1 receptor (TIR) domain. The SAM-TIR fragment has NAD+-cleaving activity, and its activation induces axonal NAD+ depletion followed by ATP loss. So far, although we have a preliminary understanding of these domains, the function of SARM1 and its regulatory mechanisms still need further study.

It is currently believed that elucidating the exact subcellular localization of SARM1 will offer insights into its functional role. Although it remains to be defined, mitochondrial localization has been proposed. The N-terminal 27 amino acids of SARM1 are hydrophobic and polybasic and have the capacity to fold into an α-helix that is required for association with the mitochondrial outer membrane. This serves as a mitochondria-targeting sequence, associating SARM1 with the mitochondria (14). SARM1 and mitochondria are intimately connected. In addition to the structural biological evidence that SARM1 has a mitochondrial import sequence, the two also have a metabolic connection. ATP produced in mitochondria provides energy for NAD+ synthesis (15), and NAD+ plays an important role in both oxidative phosphorylation and glycolysis (16,17). NAD+ metabolic disorder is a necessary and sufficient condition for the activation of SARM1. Activated SARM1, with its NAD+ cleavage site exposed, consumes NAD+, accelerates energy exhaustion, and initiates axon fragmentation. Indeed, ATP depletion is a defining indicator of the transition from the latent period to the irreversible period of Wallerian degeneration (8,9). However, previous studies have shown that Wallerian degeneration is only modestly influenced by mitochondria (18). The biological significance of SARM1 mitochondrial localization has yet to be further explored.

To identify conditions that would benefit from blocking the SARM1-dependent Wallerian axon destruction pathway, peripheral neuropathies are re-examined here. We wanted to explore whether the SARM1-dependent axon degeneration mechanism is involved in the peripheral nerve damage of ACR poisoning and, further, if SARM1 is activated in this process, whether there are potential regulatory mechanisms. As the vinyl monomer for the production of polyacrylamide, ACR is widely utilized in a variety of industrial settings and laboratories (19).
In addition to occupational exposure, ACR in food, drinking water, coffee, and cigarette smoke poses a potential hazard to the general population (20)(21)(22)(23). In June 2002, a risk assessment of ACR in food was conducted in a joint Food and Agriculture Organization of the United Nations (FAO) and World Health Organization (WHO) consultation. In reports of the Joint FAO/WHO Expert Committee on Food Additives (JECFA), potential adverse neurological effects were noted among individuals with high dietary exposure to ACR. Chronic ACR intoxication induces peripheral neuropathy in people (24)(25)(26)(27) and animals (28,29), characterized by progressive axon degeneration of the distal ends of the longest and largest nerve fibres. As exposure continues, progressive retrograde destruction of these distal axon regions ensues, with preservation of more proximal segments, resulting in symptoms such as ataxia, skeletal muscle weakness, and numbness of the hands and feet. This specific spatiotemporal pattern of axon damage is similar to the profile of Wallerian degeneration after axotomy (30)(31)(32)(33)(34) and is named Wallerian-like degeneration. Studying the changes of SARM1 in such a slowly progressing and moderate axon destruction process will contribute to enhancing the understanding of Wallerian degeneration and to exploring its potential regulatory mechanisms.

Animals and treatments

Adult male Wistar rats (160-180 g, SPF) were supplied by Jinan Pengyue Laboratory Animal Breeding Co., Ltd., Jinan, China. All animals were kept in a barrier system. Food and drinking water were available. The animal room was maintained at approximately 22 °C and 50% humidity with a 12 h light/dark cycle. After seven days of acclimatization, rats were randomly divided into groups for the experiments. With reference to established ACR intoxication regimens (35), the in vivo ACR neuropathy model comprised four groups, control, low, medium, and high dose, receiving 0, 10, 20, and 40 mg/kg b.w. i.p. every other day, respectively. The RAPA intervention experiment used the same ACR dose as the high-dose group. The dose of RAPA was 1 mg/kg b.w. i.p. once per week. ACR was dissolved in saline, and the control group was treated with saline. RAPA was dissolved in DMSO, with PEG300 and Tween-80 then added sequentially to help dissolve it, and diluted with physiological saline (volume ratio: DMSO 2.5%, PEG300 10%, Tween-80 1.25%, saline 86.25%) to obtain a clear liquid.

Neurobehavioral and neurophysiological tests

Rotarod latency test
ZS-ROM rotarod fatigue equipment (Beijing Zhongshidichuang Technology and Development Co., Ltd., Beijing, China) was utilized. All rats received training before intoxication, that is, staying on the equipment for 60 s at a velocity of 8 rpm. During the formal test, the initial velocity was set at 0 rpm and accelerated smoothly to 40 rpm within 200 s. The time an animal stayed on the rod was recorded as its latency to fall (36,37).

Landing foot splay measurement
Landing foot splay was the distance between the inner surfaces of the fourth digits of each foot after the animal was dropped from a height of 30 cm (38).

Gait score evaluation
Rats were positioned in an open field and were observed for 3 min.
Following the observation, a gait score was assigned from 1 to 4, where 1 = a normal, unaffected gait; 2 = a slightly abnormal gait (tiptoe walking, hindlimb adduction); 3 = a moderately abnormal gait (obvious movement abnormalities characterized by dropped hocks and tail dragging); and 4 = a severely abnormal gait (dragging hindlimbs and complete absence of rearing) (39,40).

Motor nerve conduction velocity measurement
The motor nerve conduction velocity of the rat tail was measured with a BL-420E biological function experimental system (Chengdu Taimeng Technology Co., Ltd., Chengdu, China). The electrodes used in our experiments were stainless steel needles, 0.34 mm in diameter and about 15 mm long. The rat was fixed in the supine position and its tail was exposed to a pair of stimulating electrodes, which were connected to two pairs of sensor electrodes and an earth electrode. A single electric stimulus of 5 V was applied through the stimulating electrodes, and two action potential oscillogram curves were recorded. The time between the two peak points and the distance between the two negative sense electrodes were recorded to calculate the motor nerve conduction velocity (41).

LC3 turnover experiment
Cells were seeded in six-well plates, allowed to adhere, differentiated, and pre-treated with 10 µg/mL Pepstatin A and E64d for 1 h before treatment with ACR (42,43).

Pathological examination

Histopathological examination
Rats were anesthetized with a 1:1 mixture of 5% chloral hydrate and 12.5% urethane. After perfusion with saline and 4% paraformaldehyde solution, the tissues were quickly dissected and separated. Spinal lumbosacral enlargements (L2-S3) were fixed in 4% paraformaldehyde for 48 h, dehydrated in alcohol, and then embedded in paraffin. Every 20th cross section (5 µm) was processed with haematoxylin and eosin (H&E) and Nissl staining (0.5% thionine solution), scanned as digital slides with an Olympus VS120, and analyzed blindly. Cells with a distinct nucleus and a diameter of at least 25 µm located in the anterior horn ventral to the line tangential to the ventral tip of the central canal were considered to be α motor neurons. α motor neurons with abnormal morphological changes, such as hyperchromatic cytoplasm, were counted.

Immunofluorescence staining
N2a cells were seeded on sterile cover glasses placed in 24-well plates. Mito Tracker Green FM and Lyso Tracker Red DND-99 were diluted with DMEM to final concentrations of 100 nM and 50 nM, respectively. N2a cells, treated with 2 mM ACR for 24 h or not, were incubated with this working solution for 30 min and then imaged with a fluorescence microscope.

Mitochondrial fractionation
Mitochondria were isolated using a Mitochondria Isolation Kit by sequential centrifugation. Briefly, samples were homogenized and lysed in Mitochondria Isolation Solution supplemented with protease inhibitor cocktail and phosphatase inhibitor cocktail. Lysates were centrifuged at 1,000 g for 5 min to remove the plasma membrane fraction, and subsequently the supernatants were centrifuged at 3,500 g for 10 min to obtain purified mitochondria. Isolated mitochondrial pellets were lysed and subjected to immunoblot analysis. Given the low yield of the mitochondrial fraction and the consistent changes of Wallerian degeneration-related molecules in the spinal cord and sciatic nerve, we did not extract the mitochondrial components of the sciatic nerve homogenate.
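For clarity, the conduction-velocity calculation described in the motor nerve conduction velocity subsection reduces to the electrode separation divided by the latency difference between the two recorded action potentials. The following minimal Python sketch illustrates that arithmetic; the numeric inputs are hypothetical and are not from the study's recordings.

```python
# Minimal sketch of the motor nerve conduction velocity (MNCV) arithmetic:
# velocity = electrode separation / latency difference between the two peaks.
# Example values are illustrative only, not study data.

def motor_ncv_m_per_s(electrode_distance_mm: float, latency_diff_ms: float) -> float:
    """Return conduction velocity in m/s from distance (mm) and time (ms)."""
    # mm/ms is numerically equal to m/s, so no extra unit conversion is needed
    return electrode_distance_mm / latency_diff_ms

print(motor_ncv_m_per_s(electrode_distance_mm=50.0, latency_diff_ms=1.25))  # 40.0 m/s
```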
Sample preparation, electrophoresis, and immunoblotting
The sciatic nerve was ground into powder in liquid nitrogen and the immunoblotting sample was prepared according to the subsequent steps. The spinal cord was homogenized directly in ice-cold RIPA buffer supplemented with protease inhibitor cocktail and phosphatase inhibitor cocktail. Homogenates were then centrifuged at 12,000 g for 10 min, and the supernatants were used for immunoblotting analysis. After the protein concentration was determined with a BCA™ Protein Assay Kit, the sample was mixed with 4x loading buffer and heated at 100 °C for 5 min. To assess relative changes in protein content, the corresponding protein samples were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis. Following electrophoresis, proteins were transferred electrophoretically to polyvinylidene fluoride membranes. The membranes were then blocked with 3% fat-free milk for 45 min and incubated with primary antibody (Additional file 1. Reagent) diluted in 0.1% BSA for 8 h. After the primary antibody, membranes were washed in a mixture of Tween 20 and Tris-buffered saline and incubated with horseradish peroxidase-conjugated secondary antibody at room temperature for 1 h. After being washed again, the membranes were incubated with SuperSignal West Pico Chemiluminescent Substrate reagents for 2 min and then exposed in a Tanon-5200 Multi Chemiluminescence Imaging System (Tanon Science & Technology, Shanghai, China). The full-blot images can be found in the additional file (Additional file 2. Original blots). Digitized data were quantified as integrated optical density using Fiji ImageJ (44). Each protein was assayed at least three times. β-actin and VDAC (voltage-dependent anion-selective channel) were detected as loading controls for total proteins and mitochondrial proteins, respectively.

Statistical analysis
The performers were blinded to the experimental design in data collection and analysis. Data are presented as means ± standard error of the mean (SEM) after analysis using SPSS 18.0 software. Two-way repeated measures ANOVA was used for the neurobehavioral data. Unpaired t test, one-way ANOVA, and two-way ANOVA followed by Bonferroni's post-hoc test were performed as appropriate (Additional file 3. Statistical analyses). Differences were considered significant at p < 0.05.

SARM1-dependent Wallerian degeneration is involved in ACR neuropathy
A rat ACR neuropathy model was established after four weeks of exposure to ACR (0, 10, 20, and 40 mg/kg b.w. i.p. every other day, Fig. S1A). The symptoms of peripheral nerve injury were evaluated by neurobehavioral performance (Fig. 1A). After three weeks of intoxication, rats in the high-dose group had developed a triad of neurological deficits: decreased rotarod staying time, increased landing foot splay distance, and increased gait score. Consistent with the neurobehavioral indicators, axon destruction was seen in rats subjected to ACR for four weeks (Fig. 1B-E). Compared with the age-matched control group, sciatic nerve axons in rats exposed to ACR for four weeks were lost; the retained axons were swollen, with increased diameter and loose myelin sheaths (Fig. 1B, 1C). Furthermore, the motor nerve conduction velocity slowed in a dose-dependent manner (Fig. 1D). The α motor neurons in the lateral anterior horn of the spinal cord, which innervate transarticular extrafusal muscle fibres through axons in the sciatic nerve, also showed obvious morphological changes.
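To make the group-comparison workflow named in the statistical analysis subsection concrete, the sketch below runs a one-way ANOVA followed by Bonferroni-corrected pairwise t tests on invented data for four dose groups. The study itself used SPSS 18.0; this is only an illustrative Python equivalent, and none of the values correspond to real measurements.

```python
# Hedged sketch: one-way ANOVA plus Bonferroni-corrected pairwise comparisons,
# mirroring the analysis pattern described above, with synthetic data.
import numpy as np
from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
groups = {
    "control":  rng.normal(100, 8, 10),  # hypothetical outcome values
    "10 mg/kg": rng.normal(95, 8, 10),
    "20 mg/kg": rng.normal(85, 8, 10),
    "40 mg/kg": rng.normal(70, 8, 10),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Pairwise unpaired t tests, then Bonferroni adjustment of the p values
pairs = list(combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for (a, b), p, sig in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4g}{' *' if sig else ''}")
```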
The number of abnormal neurons with morphological changes, such as hyperchromatic cytoplasm, increased in a dose-dependent manner (Fig. 1B, 1E, Fig. S1B). Interestingly, these abnormal α motor neurons did not over-express RIPK1 and were negative for TUNEL staining (Fig. S1C, S1D), indicating that they did not die even though they were under stress. Surviving neurons with damaged axons revealed the early morphological feature of ACR neuropathy: axon degeneration. Moreover, Wallerian degeneration-related molecules were detected by Western blotting. The constant up-regulation of SARM1 in the spinal cord and sciatic nerve (Fig. 1F, 1G) suggested that the SARM1-dependent Wallerian axon destruction pathway was activated in ACR-induced axon degeneration. To directly observe the axon degeneration caused by ACR, we then measured the axon length of N2a cells treated with ACR at different concentrations and for different times (Fig. S1E, S1F). The axons were observed to shorten, accompanied by swelling, blebbing, and fragmentation (Fig. 1H, 1I). The expression of SARM1 was also up-regulated, similarly to that in vivo, in a dose- and time-dependent manner (Fig. 1J). Taken together, both the spatiotemporal pattern of axon degeneration and the SARM1-dependent neuropathy indicated that ACR intoxication triggered the activation of the Wallerian degeneration machinery.

The up-regulated SARM1 accumulates on mitochondria
Immunofluorescence was then conducted to determine the intracellular localization of SARM1. As demonstrated in Fig. 2A, SARM1 aggregated into puncta along the nerve fibres and even translocated into the soma of α motor neurons from rats exposed to ACR for four weeks. Combined with research on the subcellular localization of SARM1, we speculated that these plaque-like and punctate structures may be linked to the mitochondrial localization of SARM1. We therefore quantified mitochondrial SARM1 by isolating the mitochondrial fraction. In the low-dose group, SARM1 in the mitochondrial fraction changed significantly, even though the cytoplasmic level was unchanged (Fig. 2B). Compared with the changes in the cytoplasm, a more obvious alteration was observed in the mitochondrial fraction of the higher-dose groups, further confirming the accumulation of SARM1 on mitochondria (Fig. 2B). Immunofluorescence co-localization analysis of SARM1 and Parkin further supported this conclusion (Fig. 2C). In the low-dose group, the co-localization of SARM1 and Parkin dramatically increased along axons. With increasing exposure doses, the degree of co-localization in axons increased, and evident co-localization was found in α motor neuron cell bodies in the high-dose group. Finally, in vitro experiments also verified the mitochondrial aggregation of SARM1 (Fig. 2D). SARM1 immunofluorescence presented as dots in ACR-treated N2a cells. The degree of co-localization of SARM1 with mitochondrial molecules, e.g., Parkin and DRP1, also markedly increased, indicating that SARM1 became enriched on mitochondria (Fig. S2A, S2B). Next, our research focused on the role of mitochondria in ACR-induced Wallerian-like degeneration.

Mitochondrial dynamics are disturbed and mitophagy is activated
Transmission electron microscopy analysis demonstrated that there were a large number of organelles in the swollen axons from rats exposed to ACR for four weeks, including fragmented mitochondria and some autophagy-related structures (Fig. 3A). The morphological changes of mitochondria in the spinal cord and sciatic nerve were similar.
The mitochondria in the ACR-treated groups were spherical and elliptical with severely disorganized and swollen cristae, while the mitochondria in the control formed short tubules with a clear sheet-like structure of mitochondrial cristae. Analysis of those images showed that ACR caused an increase in mitochondrial number, but the length of mitochondria was shortened; the frequency distribution of mitochondrial length for ACR-treated rats was concentrated in a shorter range (Fig. 3B). Western blotting results were in agreement with the morphological findings, disclosing the fragmentation trend of the mitochondrial network. The protein levels of DRP1 and p-DRP1 (Ser616), which promote mitochondrial fission, were raised significantly in the mitochondrial fraction. By contrast, the proteins involved in mitochondrial fusion, e.g., Mfn2 and OPA1, were markedly reduced (Fig. 3C). Similarly, in vitro experiments also supported that ACR up-regulated mitophagy-related proteins (Figure 4A). To ascertain the alteration of autophagic flux, we performed an LC3 turnover experiment in N2a cells (Fig. 4B). Pre-treatment with the lysosome inhibitors Pepstatin A and E-64d further elevated the levels of LC3-II, the LC3-II/LC3-I ratio, and P62, confirming that ACR induced autophagy with an increased autophagic flux on-rate. In addition, the overlap of the mitochondrial marker and the intracellular acidic organelle marker greatly increased in ACR-treated N2a cells in live cell imaging of Mito Tracker Green FM and Lyso Tracker Red DND-99 (Fig. 4C). Moreover, the co-localization of PINK1 and Tom20 (Fig. 4D) and of LC3 and Tim23 (Fig. 4E) increased in ACR-treated cells. These results fully indicated that ACR activated mitophagy.

Rapamycin intervention clears mitochondria-accumulated SARM1 and partly alleviates ACR neuropathy
Mitophagy selectively degrades defective mitochondria to maintain the mitochondrial network in a fine state. To verify the negative feedback inhibition of mitophagy on mitochondria-aggregated SARM1, we conducted a RAPA intervention experiment (Fig. S5A). Rats in the intervention group received low-dose, low-frequency RAPA to improve basal autophagy and to limit possible adverse effects. Compared with the ACR-intoxication group, abnormal neurobehavioral performances in the RAPA intervention group were delayed, and their severity was lower (Fig. 5A). Furthermore, the pathological injuries of axons and α motor neurons were improved (Fig. 5B-D), indicating that ACR-induced Wallerian-like degeneration was significantly alleviated following RAPA intervention. Nerve conduction velocity was also obviously improved (Fig. 5E). The aggregation of SARM1 on the mitochondria was considerably decreased (Fig. 5F), and mitochondrial dynamics- and mitophagy-related proteins returned to nearly normal levels (Fig. 5F, Fig. S5B). More importantly, the shape, number, length, and length distribution of mitochondria in the RAPA intervention group recovered with the elimination of mitochondrial SARM1 (Fig. 5G, 5H, Fig. S5C). These results suggested that the autophagy activator RAPA partially rescued the phenotype of ACR neuropathy.

Discussion

Neurotoxicity is the quintessential effect of ACR, and Wallerian-like degeneration is the typical pathological change of chronic ACR intoxication. In this study, we analyzed ACR-induced axon degeneration in vivo and in vitro, confirming that the SARM1-dependent Wallerian axon destruction pathway is associated with peripheral nerve damage in ACR poisoning.
These results not only confirm the neuropathology of ACR intoxication but may also lead to the development of promising therapeutic strategies. Furthermore, we found that ACR induced obvious accumulation of SARM1 on mitochondria. Mitochondrial quality control mechanisms, e.g., mitochondrial dynamics and the PINK1-dependent mitophagy pathway, changed. Finally, pharmacological activation of autophagy by RAPA effectively removed the SARM1 that had accumulated on the mitochondria and partly alleviated axon degeneration. These findings indicate that mitophagy limits Wallerian degeneration through clearance of the pro-degenerative factor SARM1 in ACR peripheral neuropathy.

The mitochondrial localization of SARM1 may be involved in the mitochondrial distribution in neurites, anoxic degeneration (45), and the regulation of neuronal survival (46-48). But other studies show that deletion of the mitochondrial localization sequence does not alter its ability to promote axon destruction (49). We consider that these inconsistent results may be ascribed to differences in the type of injury, the dose, and the course of the disease. The Wallerian degeneration models of axotomy (18) or chemotherapy-induced peripheral neuropathy, e.g., vincristine, bortezomib, and paclitaxel (50,51), induce rapid axon degeneration with a relatively short latent period, making it difficult to observe the interaction between mitochondrial quality control and SARM1 in the early incubation stage. The results of this study preliminarily confirmed that ACR can induce SARM1-dependent axon degeneration. Moreover, the level of NMNAT2 did not decrease (Fig. 1F, G) as it does in the active degeneration stage. The relevant multi-omics data suggest that the NAD+ level in the ACR-intoxication model is still maintained at a relatively high level (fold change vs. control = 0.79, p = 0.005, FDR = 1.61%) (52), instead of being depleted by increased SARM1. Although it cannot be ruled out that some axons had entered the active degeneration stage and been lost, these findings indicate that some axons are still in the early latent stage of axon destruction in rats subjected to ACR for four weeks. This provides the possibility to fully explore the potential biological significance of SARM1 mitochondrial localization. Combined with our results, the mitochondrial localization of SARM1 is likely related to its clearance through the mitophagy pathway, which may explain these seemingly contradictory observations.

Mitochondria are dynamic organelles. They are actively recruited to specific cellular locations; they fuse and divide continually, which serves to intermix the lipids and contents of a population of mitochondria; and they have dynamic structures under fine quality control conditions (53)(54)(55)(56). At present, increasing evidence supports that macroautophagy/autophagy is involved in the process of axon degeneration (57)(58)(59)(60)(61). And mitophagy, a specific type of autophagy that selectively degrades defective mitochondria, has received extra attention for energy maintenance in damaged axons. The PINK1-dependent pathway is one of the best-studied mitophagy mechanisms (62-65). PINK1 is a serine/threonine kinase with an N-terminal mitochondrial targeting sequence. Selective accumulation of PINK1 on dysfunctional mitochondria can recruit Parkin, OPTN, NDP52, etc., and these binding partners, in turn, induce the degradation of the damaged mitochondria. Previous studies have found that SARM1 in the mitochondrial outer membrane contributes to the stabilization of PINK1 and induces mitophagy (66).
Therefore, mitochondrial localization of SARM1 may help to sequester cytoplasmic SARM1 through the mitochondrial dynamic process, leading to its final degradation by mitophagy in ACR peripheral neuropathy. This is also consistent with the results of the RAPA intervention.

Conclusions

Taken together, this study finds that the up-regulated SARM1 induced by ACR intoxication accumulates on mitochondria via its N-terminal mitochondrial targeting sequence, which stabilizes PINK1 and triggers the mitophagy degradation machinery. Mitophagic clearance of SARM1 is complementary to the coordinated activity of NMNAT2 and SARM1. Enhancement of mitophagy with pharmacological methods should promote the clearance of SARM1 and prevent it from disrupting NAD+ metabolism, mitochondrial quality control, and other homeostasis mechanisms. Our research preliminarily demonstrated the potential role of mitophagy in ACR-induced toxic peripheral neuropathy. Further elucidating the mechanistic link between mitophagy and SARM1-dependent axon degeneration will help to develop new strategies for the prevention and treatment of a variety of axon destruction diseases.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

[Partial figure legend recovered from extraction: (D) PINK1 (green) and Tom20 (red); (E) LC3 (green) and Tim23 (red) in N2a cells treated with ACR for 24 h. Scale bar, 50 µm. The white lines mark the regions of interest for gray-intensity analysis of the red and green channels; arrows indicate overlapping points and arrowheads indicate non-co-localized points.]
Association of Dietary Vitamin C and E Intake and Antioxidant Enzymes in Type 2 Diabetes Mellitus Patients

Background: Diabetes mellitus comprises various metabolic disturbances, such as hyperglycemia, increased glycosylated hemoglobin (HbA1c), and disordered antioxidant enzyme activity; hence, supplementing with antioxidant nutrients, mainly vitamins C and E, seems likely to reduce oxidative injury in patients with type 2 diabetes mellitus (T2DM). Aim: To evaluate the outcome of vitamin C and E supplementation in type 2 DM patients. Setting and Design: The study was completed in 170 T2DM patients consuming vitamin C, vitamin E, a combination of C & E, or placebo. Materials and Methods: The cases were divided into two major groups, named the supplementation and placebo groups. The supplementation group consisted of three sub-groups, each of which received three capsules per day for a period of three months. Parameters such as HbA1c, glucose, superoxide dismutase (SOD), and glutathione peroxidase (GSH) were evaluated at baseline and after three months of supplementation. Statistical Analyses: The statistical analyses were performed using mean ± SD, ANOVA, and paired-sample t-tests. Results: The mean age of the 170 patients (84 male and 86 female) was 53.82±5.26 years, within the range of 30-60 years. Blood pressure showed significant differences in all supplement groups at baseline compared to after receiving supplements (p<0.05). Use of vitamin C, E, and E & C produced significant differences in plasma FBS and HbA1c concentrations (p<0.05 & <0.001), but there were no significant differences in the placebo group. SOD and GSH enzyme levels showed a significant increase after consumption of vitamins in the supplementation groups (p<0.001). Conclusion: This research confirmed that subjects with T2DM, after three months of vitamin supplementation, demonstrated significantly lower blood pressure, decreased blood glucose levels, and increased SOD and GSH enzyme activity, which can probably reduce insulin resistance by lowering oxidative stress parameters.

Introduction

Diabetes mellitus comprises various metabolic disturbances, such as hyperglycemia and disturbed glucose, lipid, and protein metabolism, resulting from a disorder in the secretion and/or action of insulin (Nathan, 2009); it has become a serious and common disease leading to many complications and premature death. Oxidative stress plays a central role in the onset of diabetes mellitus as well as in the development of vascular and neurological complications of the disease (AI-Nimer et al., 2012). In diabetic patients, reduced antioxidant enzyme activity may reflect the known sensitivity of these enzymes (GSH, SOD) to radical-induced inactivation. Consumption of antioxidants is associated with a decreased risk of T2DM (Montonen et al., 2004). Consumption of antioxidant nutrients, chiefly vitamins C and E, seems to decrease the oxidative injury associated with hyperglycemia, support pancreatic β cell function, and reduce the prevalence of diabetic complications (Wild et al., 2008). The most important benefit claimed for vitamins C and E is their role as antioxidants: scavengers of particles known as reactive oxygen species (ROS, also sometimes called oxidants) (Gade et al., 2001). ROS are involved both in insulin signal transduction and in insulin resistance when produced in excess. Overfeeding, saturated fatty acids, and obesity play a key role in the excessive production of ROS.
However, a diet rich in fruits and vegetables, and therefore in antioxidants, has confirmed beneficial effects against oxidative damage and insulin resistance (Bishal et al., 2012). Consistent with this, vitamin supplementation has been associated with improved hyperlipidemia and decreased blood pressure (Caballero, 2004). Several studies have shown that low basal vitamin C levels in diabetic patients can result in increased oxidative stress parameters in T2DM patients (ADS, 2009). The structure of vitamin C is similar to that of glucose and can replace it in many chemical reactions; it is therefore useful in preventing non-enzymatic glycosylation of proteins. The rate of T2DM has increased in the Islamic Republic of Iran; it is therefore recommended that Iranian health policy-makers initiate more health promotion programmes and effective interventions. The current study was conducted to evaluate the effect of antioxidant vitamin C and E supplements on biomarkers of diabetes risk among T2DM patients.

Subjects

170 persons with T2DM were selected for this study from the Research Institute of Endocrinology and Metabolism, Iran University of Medical Sciences (IUMS). Patients with T2DM were chosen based on a glucose level of more than 126 mg/dL (7.0 mmol/L) and HbA1c equal to or more than 6.5%. Patients had to be treated with oral medication only, with no insulin therapy and no use of vitamin and mineral supplements. Subjects with inflammatory diseases, chronic occlusive diseases, cardiovascular disease, or chronic renal failure, those outside the age range of 30-65 years, those with BMI <25 kg/m², and Type 1 diabetes patients were excluded from the study. The statistical calculation of sample size was based on the estimated prevalence of diabetes in Iran. The patients of this study were categorized into two major groups, supplementation and placebo; the supplementation group consisted of three sub-groups. Each supplementation group received three capsules per day for three months. The daily doses were vitamin C (266.7 mg), vitamin E (300 IU), and vitamin C+E (300 IU + 266.7 mg); the placebo was made of starch. Randomized assignment into the four groups was by a proportional randomization method using a table of random numbers generated by Microsoft Excel. For example, if the sequence of the list generated by the computer was ABEDCCBBEDEA, then subjects were assigned in order to the vitamin C group, the vitamin E group, the vitamin C + E group, and the placebo group, until all the subjects were assigned to groups. After three months of supplementation with vitamin C, E, or C & E, patients were examined again and the laboratory tests repeated. The supplement and placebo capsules looked the same and were specially prepared for this study by Darou-Pakhsh (Tehran, Iran). The research was approved by the Ethics Committee of the Faculty of Medicine and Health Science, University Putra Malaysia (UPM), and Iran University of Medical Sciences (IUMS). Informed consent was obtained from the patients before participation in this study. The source of support was MT grant 258.

Laboratory Measurements

About 20 ml of blood was obtained under fasting conditions into EDTA tubes from the patients with T2DM. Plasma was separated, aliquoted, and then stored at -80°C until the analyses were done.
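A minimal sketch of the proportional randomization described in the Methods above, assuming a hypothetical letter-to-group mapping (A = vitamin C, B = vitamin E, C = vitamin C+E, D = placebo; the paper's own letter coding is not fully specified) and the group sizes reported in the Results:

```python
import random

# Hypothetical mapping: group letter -> (name, planned size).
GROUPS = {"A": ("vitamin C", 44), "B": ("vitamin E", 43),
          "C": ("vitamin C+E", 43), "D": ("placebo", 40)}

def proportional_randomization(groups, seed=42):
    """Shuffle a letter sequence containing each group letter exactly as
    many times as its planned size, so the assignment order is random
    while the planned group proportions are preserved."""
    sequence = [letter for letter, (_, size) in groups.items()
                for _ in range(size)]
    random.Random(seed).shuffle(sequence)
    return sequence  # sequence[i] is the group of the (i+1)-th subject

assignment = proportional_randomization(GROUPS)
for letter, (name, size) in GROUPS.items():
    print(f"{name}: {assignment.count(letter)} subjects (planned {size})")
```

With 44 + 43 + 43 + 40 = 170 letters, every enrolled subject receives a group and the final counts match the planned allocation exactly.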
For the determination of antioxidant enzymes, 0.5 ml of each patient's whole blood was centrifuged for 10 minutes at 3000 rpm; the plasma was then aspirated and the erythrocytes were washed four times with 3 ml of 0.9% NaCl solution, centrifuging for 10 minutes at 3000 rpm after each wash. The washed red blood cells were then hemolyzed by freezing and thawing. Erythrocyte pellets could either be used fresh for analysis or stored at -80°C as required.

Routine Parameters

HbA1c was analyzed by an HPLC method (Sigma, USA). For the determination of HbA1c, plasma extracts were combined and evaporated to dryness under a stream of nitrogen, and the dry residue was dissolved in 0.25 ml of mobile phase and injected into the HPLC system. The separated hemoglobin components pass through the LED photometer flow cell, where changes in absorbance are measured at 415 nm. The levels of vitamin C in plasma were measured by a spectrophotometric method using a phenylhydrazine indicator (Sigma, USA), as described by Ahmed et al. (2009). Vitamin E levels in plasma were measured using HPLC (HPLC, UK). Superoxide dismutase (SOD) activity (U/ml) was examined using a kit (Ransod; Randox, USA). Glutathione peroxidase (GSH) was measured using a kit (Ransel; Randox, USA). In the presence of glutathione reductase (GRX) and NADPH, oxidized glutathione (GSSG) is directly converted to the reduced form with concomitant oxidation of NADPH to NADP+; the decrease in absorbance was measured at 340 nm.

Statistical Analyses

The data were analyzed using SPSS for Windows, version 16.0 (SPSS Inc., Chicago, IL, USA) and are expressed as mean ± SD. Vitamin C, E, and C & E intakes are reported as percentages, means, and standard deviations. Significant differences between baseline and three months within groups were determined by paired-sample t-test, and differences between the four groups by one-way analysis of variance (ANOVA).

Results

The mean age of the 170 patients with T2DM (84 male, 86 female) was 53.82±5.26 years (range: 30-60 yr). Of these, 44 patients underwent supplementation with vitamin C, 43 patients with vitamin E, and 43 patients with vitamin C and E, while 40 patients received placebo. The results did not show significant differences in body mass index (BMI), waist circumference (WC), hip circumference (HC), or waist-hip ratio (WHR) between groups, before or after supplementation. However, blood pressure parameters showed significant differences in all supplement groups before as compared with after treatment (p<0.05). Subjects receiving vitamin C, E, and E & C showed significant differences in plasma levels of FBS and HbA1c (p<0.05 & <0.001), but the placebo group did not show any changes in these parameters. SOD enzyme levels showed a significant improvement after consumption of vitamin C, E, and C & E in the supplementation groups (p<0.001). The same result was obtained for GSH, which showed significant differences (p<0.001). In the current research, the levels of vitamin C and E were also evaluated and showed a significant increase after consumption (p<0.001).

Discussion

The data of the current study showed that T2DM patients had significantly higher levels of SOD and GSH enzymes after treatment compared to the placebo groups, in conformity with other studies (Adachi et al., 2004; Afkhami-Ardakani et al., 2009).
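As a sketch of the within-group (paired-sample t-test, baseline vs. three months) and between-group (one-way ANOVA) comparisons described in the Statistical Analyses section, using hypothetical FBS values rather than the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_names = ["vitamin C", "vitamin E", "vitamin C+E", "placebo"]
mean_drops = [20.0, 18.0, 25.0, 1.0]  # hypothetical FBS decreases (mg/dL)

# Hypothetical fasting blood sugar per group, baseline and after 3 months.
baseline = {g: rng.normal(160.0, 15.0, size=40) for g in group_names}
followup = {g: baseline[g] - rng.normal(drop, 5.0, size=40)
            for g, drop in zip(group_names, mean_drops)}

# Within-group change: paired-sample t-test (baseline vs. follow-up).
for g in group_names:
    t, p = stats.ttest_rel(baseline[g], followup[g])
    print(f"{g}: paired t = {t:.2f}, p = {p:.3g}")

# Between-group comparison of follow-up values: one-way ANOVA.
f, p = stats.f_oneway(*followup.values())
print(f"ANOVA across the four groups: F = {f:.2f}, p = {p:.3g}")
```

Both scipy calls return (statistic, p-value) pairs, matching the format of the values reported in the Results.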
The results of the present study show that three months of supplementation with vitamin C, E, or C+E caused a significant decrease in fasting blood glucose between baseline and three months. A previous study also showed the beneficial effect of oral vitamin C (1,000 mg/day for 4 months) in decreasing FBS in T2DM patients (Afkhami-Ardakani, 2009; Chen, 2006). In agreement with Afkhami et al. (2009), we found that high doses of oral vitamin C and E supplementation improve glucose levels in T2DM patients (Dean, 2006). HbA1c showed significantly decreased levels in all supplementation groups compared to placebo after three months. On the contrary, Bishop and Schorah (2005) reported giving 500 mg vitamin C or placebo to 50 diabetic patients for 4 months but found no significant difference in HbA1c levels between groups, which may be attributable to the low dose of vitamin C used. In general, this study reveals that vitamins C and E can reduce the glycosylation of hemoglobin in patients with T2DM. Antioxidants have high superoxide scavenging activity, so daily consumption of antioxidants, preferably vitamin C, E, or C+E, can protect the body from oxidative damage (Brigelius, 2007). Regarding the mechanisms involved in the antioxidant activity of vitamins C and E, they act on free radicals, decrease oxidative damage, and also have a potential role in raising antioxidant-related defenses in patients with diabetes (Bgelakovic, 2007). The low SOD activity in the diabetic groups could be the result of direct inhibition by H2O2 or could be due to glycation of the enzyme (Adavhi et al., 2004). The results of this study revealed that SOD and GSH levels increased significantly in the supplementation groups compared with the placebo group after vitamin supplementation. The activity of the SOD enzyme in the body is considered one of the major enzymatic antioxidant defenses against superoxide radicals. Increases in SOD enzyme activity correlate with improved resistance to oxidative stress (Herbeth et al., 2003). Soliman confirmed that standard dietary treatment in T2DM produced a considerable enhancement of erythrocyte antioxidant status and decreased serum and erythrocyte lipid peroxidation (Adachi et al., 2004; Soliman, 2008). On the other hand, GSH activities in the antioxidant supplementation groups were higher than in the diabetic placebo groups; these changes may be a response that neutralizes superoxide anions and offsets oxidative stress. It is known that GSH reduces H2O2 in T2DM (Bhatia et al., 2003). Plasma vitamin E levels reflect the amount of α-tocopherol in the body, and low plasma vitamin E levels have previously been observed in Type 2 diabetic patients (Manzella et al., 2007). Vitamin E alone proved to be beneficial in decreasing the levels of free radicals and oxidative stress, and it improves the action of insulin in patients with insulin resistance (Upritchard et al., 2008). Another study suggested that consumption of vitamin E was associated with decreased HbA1c (Gowri et al., 2009). The combination of vitamins C and E was able to improve endothelial function in T2DM. When vitamin E disarms a free radical, it becomes a weak free radical itself; but unlike harmful free radicals, vitamin E can be recycled, or turned back into an antioxidant, by vitamin C (Title et al., 2004; Rafighi et al., 2011).
Conclusion

The current study showed that patients with T2DM, after three months of use of vitamin C, vitamin E, or the combination of vitamins C and E, had significantly lower blood pressure, improved insulin action, and higher SOD and GSH enzyme activity. Antioxidants have already been shown to have a potential role in the treatment of T2DM. Since high levels of oxidative stress parameters are accompanied by low levels of antioxidant enzymes in diabetic patients, these results may be very important with respect to the high morbidity and mortality rates in these patients. It is possible that treatment with vitamin C, E, or C & E as antioxidants can reduce insulin resistance by lowering oxidative stress parameters. Furthermore, a suitable diet and treatment schedule may help in decreasing plasma glucose and increasing antioxidant capacity in type 2 diabetes patients.
Untrained Listeners Experience Difficulty Detecting Interaural Correlation Changes in Narrowband Noises

Interaural correlation change detection was measured in untrained normal-hearing listeners. Narrowband (10-Hz) noises were varied by center frequency (CF; 500 or 4000 Hz) and diotic level roving (absent or present). For the 500-Hz CF, 96% of listeners could achieve threshold (79.4% correct at the easiest testing level) if roving was absent, but only 36% of listeners could if level roving was present. No one could achieve threshold at the 4000-Hz CF, unlike trained listeners in the literature. The results raise questions about how individual differences affect learning and generalization of monaural and binaural cues related to interaural correlation detection.
1. Introduction

Human psychoacoustical experiments often utilize trained listeners, meaning the listeners have been given explicit instructions on how to perform a particular auditory task and they have been exposed in some way to the task before data collection (i.e., they have had some explicit and beneficial practice). In many cases, listeners may practice the task for hours. Such practice often includes correct-answer feedback and continues until there is an apparent saturation in listener performance. A major reason to include such training is to avoid a substantial within-subject performance change during data collection that may obscure across-condition effects. The purpose of this work was to investigate the performance of untrained listeners in a binaural task that shows highly variable performance, namely, detection of interaural correlation changes (ICC) in narrowband noises. Of particular interest was to obtain a better understanding of the detection cues listeners might be using when performing ICC detection, and how performance differs between untrained and trained listeners.

In some psychoacoustical tasks, it is unnecessary for listeners to receive explicit training because they do not improve over time. For example, Trahiotis et al. (1990) showed that listeners had stable and unchanging thresholds over 25 sessions when detecting an interaurally in-phase or out-of-phase 500-Hz tone embedded in a 2900-Hz bandwidth diotic noise (called NoSo and NoSπ detection, respectively). In other cases, training is necessary, as significant improvements in performance can be observed. For example, Wright and Fitzgerald (2001) showed that untrained listeners had 500-Hz tone interaural time difference (ITD) discrimination thresholds of about 40-60 µs, and these thresholds improved to 20 µs over two weeks of training (specifically, nine days of training with one hour of training per day; however, most of the improvement occurred over the first 30 min of training). Likewise, these listeners had 4000-Hz tone interaural level difference (ILD) discrimination thresholds of about 4 dB, and thresholds improved to 2 dB after training (the time course of improvement was relatively longer than for the ITD discrimination task). While these two example studies refer to different tasks, they have some commonalities: they were presented over headphones to precisely control the properties of the stimuli, they used non-ecologically valid stimuli that are rarely experienced outside of the laboratory, and they tested binaural processing abilities (meaning the detection cues were accessed through an interaural comparison of the signals).

Of interest in this study is sensitivity to ICC, which is related to the binaural masking level difference (e.g., Durlach et al., 1986; Goupell and Litovsky, 2014) and how speech is experienced and understood in background noise and reverberant rooms (e.g., Lavandier and Culling, 2010). Listeners can be highly sensitive to ICC in noises (Gabriel and Colburn, 1981), where ρ = 1 for a perfectly correlated noise and ρ < 1 for a decorrelated noise. As ρ decreases, larger fluctuations in the ITDs and ILDs are introduced (Goupell, 2010), and the salient perceptual change is a widening or blurring of the intracranial image (for larger-bandwidth signals; Whitmer et al., 2012) or a moving intracranial location (for small bandwidths of about 10 Hz or less; Gabriel and Colburn, 1981; Goupell and Litovsky, 2014).
ICC sensitivity can be highly variable across experienced listeners presented with narrowband noises (Koehnke et al., 1986; Goupell, 2012), but it is unclear why this variability exists. One reason could be that some listeners are inherently more sensitive to binaural cues (Koehnke et al., 1986). Another reason could be that listeners use different cues to perform the task. For example, one listener could rely more on fluctuating ITDs and another on fluctuating ILDs (Goupell and Hartmann, 2007; Goupell, 2010; Mao and Carney, 2014). Others may ignore the spatial percepts and attempt to use an increase in loudness for the dichotic target compared to the diotic non-target stimuli (Edmonds and Culling, 2009), which would imply that the performance of these listeners would be particularly susceptible to diotic level roving, where the loudness cue would be made unreliable. Or perhaps some listeners confuse monaural envelope fluctuations (i.e., roughness) with the binaural fluctuations (Goupell and Hartmann, 2006). Therefore, one goal of this work is to examine what cues untrained listeners rely on to detect ICC by varying stimulus center frequency (CF) and the absence or presence of diotic level roving. If we can determine what cues listeners are using, such an approach could help explain the relatively large inter-individual variability observed in ICC detection (e.g., Koehnke et al., 1986; Goupell, 2012), and could improve binaural models' ability to explain ICC and binaural unmasking performance (Goupell and Hartmann, 2007; van der Heijden and Joris, 2009; Goupell, 2010; Mao and Carney, 2014).

It is also unclear what the time course of training-induced improvement is for ICC detection as listeners gain experience with this task. Therefore, another goal of this work was to characterize the initial untrained performance and improvement of listeners in ICC detection when they were provided correct-answer feedback. There is evidence that the improvement and saturation in ITD sensitivity can occur within 30 min of training using 500-Hz tones (Wright and Fitzgerald, 2001).

We hypothesized that untrained listeners would be worse at ICC detection than what has been previously reported in experienced listeners, because the untrained listeners might ignore the binaural fluctuations and attempt to use other, potentially confusing cues. We also hypothesized that there would be rapid improvement in ICC thresholds at 500 Hz, but not 4000 Hz (Wright and Fitzgerald, 2001). This is because ICC sensitivity at 500 Hz is thought to be dominated by fluctuating ITDs (van der Heijden and Joris, 2009), whereas ICC sensitivity at 4000 Hz is thought to be dominated by fluctuating ILDs (Goupell, 2012).

Listeners and equipment

Fifty-nine listeners participated in this study, all of whom were considered untrained listeners because they had no experience in detecting ICC in psychoacoustical headphone experiments. The listeners (age range = 18-42 years; mean age = 20.0 years; 49 females) had normal audiometric thresholds (≤20 dB hearing level at octave frequencies between 250 and 8000 Hz) and no appreciable interaural asymmetries in hearing thresholds (<10 dB at any tested frequency). Most of them were college undergraduates and were compensated with class credit or a small payment.
The stimuli were generated on a personal computer in MATLAB (Mathworks, Natick, MA), delivered by a sound card (Edirol UA-25EX, Roland Corporation, Japan) to a power amplifier (Crown Audio, Elkhart, IN), and then to open-backed circumaural headphones (HD650, Sennheiser Corporation, Germany). The listeners were seated in a double-walled sound-attenuating booth (IAC, Bronx, NY) for the testing.

Stimuli

The stimuli were 10-Hz bandwidth noises with a 500- or 4000-Hz CF. The rationale for using 10-Hz bandwidth noises was that listeners may demonstrate greater idiosyncratic weighting of the detection cues, namely, the weighting of fluctuating ITDs and ILDs (Goupell and Hartmann, 2007; van der Heijden and Joris, 2009; Goupell, 2010; Mao and Carney, 2014). The stimuli had a duration of 300 ms and were shaped by a Tukey window with a 10-ms rise-fall time. They were presented at 65 dB-A, unless there was diotic level roving, where the level was randomly varied over a 10-dB range (±5 dB of rove chosen from a rectangular distribution). The stimulus interaural correlation was precisely controlled using an orthogonalization procedure (Culling et al., 2001). The number of listeners tested in each condition is reported in Table 1.

Procedure

Listeners performed a three-interval, two-alternative forced-choice task in a three-down, one-up adaptive procedure to obtain a threshold that targeted 79.4% correct (Levitt, 1971). Difficulty was varied by changing the interaural correlation of the noise (Δρ) and followed the adaptation rules in Goupell and Litovsky (2014). The only major difference was that if listeners could not reliably detect ICC at the easiest value (target ρ = 0), the adaptive procedure did not terminate early. The procedure continued to present target ρ = 0 trials until there were three correct answers in a row or until the completion of all of the trials. There were five simultaneous adaptive tracks of the same condition and, on a given trial, the track was randomly chosen. Each track consisted of 50 trials. Therefore, each listener experienced the same number of trials, 250 per block.

In a single trial, listeners were presented three stimuli that were separated by a 300-ms interstimulus interval. The first stimulus was always interaurally correlated (ρ = 1). The other intervals contained an interaurally correlated non-target and a decorrelated (ρ < 1) target, where the order was randomized on each trial. Correct-answer feedback was provided after each trial. If there was diotic level roving, the level was randomly varied across the three intervals in a trial.

Listeners performed three separate blocks of the same condition, which took approximately 45 min to complete. Since each listener only performed one CF and roving condition, there was no randomization across blocks. Thresholds were calculated by averaging the reversals that occurred in all five adaptive tracks.

Results

Table 1 shows the proportion of untrained listeners who could achieve threshold performance (i.e., 79.4% correct for target ρ = 0) for at least one of three testing blocks. Of the listeners who performed the 500-Hz roving-absent condition, most (24/25 = 96%) achieved threshold performance. This is in contrast to the listeners who performed the 500-Hz roving-present condition, where only 5/14 = 36% of listeners achieved threshold performance. None of the listeners achieved threshold performance for either of the 4000-Hz conditions.
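A simplified sketch of how a left/right noise pair with a target interaural correlation ρ can be constructed, following the Stimuli description above (10-Hz bandwidth, 300-ms duration, Tukey window with 10-ms ramps, optional ±5-dB diotic rove). The simple mixing used here only reaches ρ in expectation; the study itself used the exact orthogonalization procedure of Culling et al. (2001), and the sampling rate below is an assumption. As an aside, the three-down, one-up rule targets 79.4% correct because that is the point where the probability of three consecutive correct responses equals 0.5 (0.794³ ≈ 0.5; Levitt, 1971).

```python
import numpy as np
from scipy.signal import windows

FS = 44100  # assumed sampling rate (Hz)

def narrowband_noise(cf, bw, dur, fs=FS, rng=None):
    """Gaussian noise with energy confined to cf +/- bw/2,
    built in the frequency domain."""
    rng = rng or np.random.default_rng()
    n = int(dur * fs)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.zeros(freqs.size, dtype=complex)
    band = (freqs >= cf - bw / 2) & (freqs <= cf + bw / 2)
    spec[band] = rng.normal(size=band.sum()) + 1j * rng.normal(size=band.sum())
    return np.fft.irfft(spec, n)

def icc_pair(cf=500.0, bw=10.0, dur=0.3, rho=0.8, rove_db=0.0, rng=None):
    """Left/right pair with expected interaural correlation rho,
    Tukey-windowed (10-ms rise/fall) and scaled by one diotic rove."""
    rng = rng or np.random.default_rng()
    n1 = narrowband_noise(cf, bw, dur, rng=rng)
    n2 = narrowband_noise(cf, bw, dur, rng=rng)
    left, right = n1, rho * n1 + np.sqrt(1.0 - rho**2) * n2
    win = windows.tukey(left.size, alpha=2 * 0.010 / dur)  # 10-ms ramps
    gain = 10.0 ** (rng.uniform(-rove_db, rove_db) / 20.0)  # diotic rove
    return gain * win * left, gain * win * right

L, R = icc_pair(rho=0.8, rove_db=5.0)
print("sample interaural correlation:", round(float(np.corrcoef(L, R)[0, 1]), 3))
```

Because a 10-Hz band sampled for 300 ms contains only a handful of independent frequency components, the sample correlation of any single token scatters around ρ; this finite-sample problem is exactly what the orthogonalization procedure removes.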
Figure 1 shows the individual and average ICC thresholds for the three testing blocks for the 500-Hz CF conditions. Clearly, performance was highly variable across listeners, where some could barely achieve threshold performance and some performed nearly as well as trained listeners from previous studies (shaded area or dashed line). On average, the untrained listeners in our study performed approximately a factor of 10 worse than the previously reported data in trained listeners.

A two-way analysis of variance (ANOVA) was performed on the data with factors CF and roving. Thresholds did not significantly change with CF or roving (p > 0.05). There was a significant CF × roving interaction [F(1,165) = 960, p = 0.001, η²p = 1], which resulted from none of the listeners achieving threshold performance for either 4000-Hz condition.

To assess whether rapid improvement of ICC detection occurred within the three blocks, the difference in threshold between the first and third testing block can be observed for the 29 listeners plotted in Fig. 1. Ten listeners were found to have a threshold improvement of Δα ≤ -0.05. However, eight listeners had almost no threshold change (-0.05 < Δα ≤ 0), and 11 listeners had a threshold increase (Δα > 0). Therefore, there was no significant change in threshold with testing block and there was no block × roving interaction (two-way ANOVA with factors block and roving; p > 0.05 for both).

Table 1. Proportion of untrained listeners who could achieve threshold performance (i.e., ≥79.4% correct) for the starting point of the adaptive track (i.e., reference ρ = 1 and target ρ = 0).

Discussion

This work aimed to be a starting point to characterize stimulus and listener factors that lead to individual variability in binaural tasks. The experiment demonstrated that untrained listeners exhibit a wide range of ICC sensitivity, which depends on the CF and whether diotic level roving was introduced (Table 1 and Fig. 1). Individual variability is commonly seen in some binaural experiments (McFadden et al., 1973; Koehnke et al., 1986; Wright and Fitzgerald, 2001), but not others (Trahiotis et al., 1990).

It may be that some of the untrained listeners attempted to use loudness cues to perform this task, because only 36% of the listeners could achieve threshold performance with level roving, in contrast to the 96% who could achieve threshold performance when level roving was absent. Furthermore, untrained listeners seem to be less able to initially use fluctuating ILDs to detect ICC, because no listener could perform the task at the 4000-Hz CF, where the lack of phase locking to the carriers would make fluctuating ITDs inaccessible. The results of this study are in contrast to the previous literature, which demonstrates that listeners are very sensitive to ICC for 10-Hz bandwidth noises at 500-Hz CF, and sometimes at 4000-Hz CF (Goupell, 2012). This difference appears to be primarily due to the experience of the listeners. The listeners in Goupell (2012) were tested only after several hours of training and practice, until the apparent saturation in performance had occurred. In addition, they were explicitly told to ignore monaural cues, like loudness, and rather attend to binaural cues, like image width. In contrast, the listeners of the present study were given minimal instruction on which cues to attend to during the task. However, note that interesting individual patterns in ICC performance occur across CF (see Fig.
2 in Goupell, 2012), which suggests that listeners might weight the detection cues in different ways (Goupell and Hartmann, 2007; Goupell, 2010; Mao and Carney, 2014).

The data from this study are also interesting when considering the percentage of listeners who could not achieve threshold performance and the notably poor performance of some listeners. Wright and Fitzgerald (2001), who measured the ability to detect static ITDs and ILDs, did not have listeners who could not achieve threshold performance. Therefore, there seems to be something unique about ICC detection and fluctuating ITDs and ILDs that distinguishes it from static ITD and ILD detection. The average threshold of the untrained listeners who could achieve threshold performance in this study was a factor of 10 worse than in studies that used trained listeners (Gabriel and Colburn, 1981; Goupell and Litovsky, 2014). There are at least three possible reasons for this. First, the listeners in this study might all achieve thresholds comparable to the previous literature with sufficient training (longer than 45 min and over multiple days to allow for consolidation of learning). Second, the performance reported in the previous ICC detection literature may be taken from listener samples that are not representative of the greater population. In other words, those listeners may have been selected, either intentionally or unintentionally, from a group of exceptionally sensitive listeners. Other reports highlight individual variability in binaural tasks (McFadden et al., 1973; Koehnke et al., 1986). It is worth noting that the listeners in Wright and Fitzgerald (2001) had thresholds after training that were noticeably higher than those in other studies (approximately 60 µs at the start and 30 µs at the end, as compared to 10 µs). Only the best two listeners in Wright and Fitzgerald (2001) performed at levels commonly reported in the literature (e.g., Brughera et al., 2013). Third, our listeners may have had interaural asymmetries that we were not aware of. An alternative explanation for some of the variability in the data may not be related to how cues are being utilized and weighted, but rather how the cues are encoded. For normal-hearing listeners, it is assumed that they have the same loudness growth and temporal modulation transfer functions across the ears. However, if differences exist, these asymmetries may cause overall poorer performance. Since the binaural system is acutely sensitive to interaural differences, it may be that seemingly small differences in monaural performance could have a large impact on binaural performance. Considering such factors may also explain the variability in performance seen in trained listeners (e.g., Goupell, 2012). The second and third explanations are also not mutually exclusive; it may be that exceptionally sensitive listeners have relatively more interaural symmetry in their monaural auditory processing.
There are a number of possible cues to perform ICC detection, some discussed in this work and likely many not discussed. Future work should focus on understanding the binaural and monaural cues that untrained listeners attend to when learning to detect ICC in narrowband noises. It is possible that our poorly performing untrained listeners confused monaural envelope fluctuations with interaural fluctuations despite correct-answer feedback. When presented diotic stimuli, listeners tend to choose stimuli that have more monaural envelope fluctuations when they are asked to choose the interaurally decorrelated stimulus (Goupell and Hartmann, 2006). It is also possible that after sufficient training (likely over longer time scales than the testing in this study), listeners would learn to ignore the monaural cues if they were harmful for ICC detection.

Our data showed that a subset of listeners could rapidly improve at ICC detection at 500 Hz (Fig. 1), consistent with our hypothesis based on the results of Wright and Fitzgerald (2001). However, other listeners showed no change or worse performance over time, resulting in no improvement over the entire group. Other studies have also reported groups of non-learners (Zhang and Wright, 2009), consistent with our results. Fatigue effects or frustration may have affected the non-learners. Or it is possible that the non-learners needed rest and time for consolidation when learning to perform a new auditory task (Wright and Fitzgerald, 2001; Ortiz and Wright, 2009, 2010). The changes in ICC detection thresholds for narrowband noises are also in great contrast to the NoSπ detection thresholds of a 500-Hz tone in a relatively wideband noise (Trahiotis et al., 1990), even though NoSπ and ICC detection are both thought to rely on changes in ρ (e.g., Durlach et al., 1986; Goupell and Litovsky, 2014). In Trahiotis et al. (1990), listeners showed absolutely no change in performance over presumably many hours of testing, suggesting that when the bandwidth of the stimuli is large enough that across-channel comparisons can be performed, listeners can utilize a set of detection cues that require little to no training or learning to access. Or it could be that the slow fluctuations that occur in a 10-Hz narrowband noise (Goupell and Litovsky, 2014) might initially confuse listeners, thus making them attend to monaural envelope fluctuations or loudness.

The data from the present study are also interesting because we know very little about transfer effects and generalization of learning ICC detection from one CF to another. Generalization must occur, as none of the untrained listeners in this study could perform ICC detection at 4000 Hz (Table 1), but listeners trained at low frequencies (e.g., 500-Hz CF) in other studies can detect ICC at 4000 Hz, sometimes exceedingly well (Goupell, 2012). For other binaural tasks like static ITD or ILD discrimination, there seems to be minimal transfer or generalization of learning from 500- to 4000-Hz CFs (Zhang and Wright, 2007; Wright and Zhang, 2009; Zhang and Wright, 2009), which would be in contrast to what the ICC data from this and other studies suggest.
In conclusion, untrained listeners demonstrated much higher thresholds than trained listeners reported in the literature; however, there was great variability in performance, with some listeners near trained performance and many who could not perform the task. This work provides new insight into the cues used in ICC detection and the weighting that may occur with them. Further understanding of ICC detection could be gained from a formal multi-day ICC detection training experiment.

Fig. 1. ICC thresholds for the 500-Hz CF conditions for three testing blocks. The left panel shows performance when diotic level roving was absent and the right panel shows when roving was present. The average (solid circles) and the individual (open circles) thresholds are shown. The dashed line represents average performance from two experienced listeners in Gabriel and Colburn (1981). The shaded region shows the average ±1 standard deviation from nine listeners in Goupell and Litovsky (2014).
How Bacteria Change after Exposure to Silver Nanoformulations: Analysis of the Genome and Outer Membrane Proteome

Objective: the main purpose of this work was to compare the genetic and phenotypic changes of E. coli strains treated with silver nanoformulations (E. coli BW25113 wt, E. coli BW25113 AgR, E. coli J53, E. coli ATCC 11229 wt, E. coli ATCC 11229 var. S2 and E. coli ATCC 11229 var. S7). Silver, as a metal with promising antibacterial properties, is currently widely used in medicine and the biomedical industry, in both ionic and nanoparticle forms. Silver nanoformulations are usually considered a single type of antibacterial agent, but their physical and chemical properties determine the way they interact with the bacterial cell, their mode of action, and the bacterial cell's response to silver. Methods: the changes in the bacterial genome resulting from the treatment of bacteria with various silver nanoformulations were verified by analyzing genes (selected with mutfunc) and their conservative and non-conservative mutations selected with BLOSUM62. The phenotype was verified using an outer membrane proteome analysis (OMP isolation, 2-DE electrophoresis, and MS protein identification). Results: the variety of genetic and phenotypic changes in E. coli strains depends on the type of silver used to treat the bacteria. The most changes were identified in E. coli ATCC 11229 treated with the silver nanoformulation designated S2 (E. coli ATCC 11229 var. S2). In this strain, we pinpointed 39 genes encoding proteins located in the outer membrane, 40 genes of their regulators, and 22 genes related to other outer membrane structures, such as the flagellum, fimbriae, lipopolysaccharide (LPS), or exopolysaccharide. The optical density of the OmpC protein in E. coli electropherograms decreased after exposure to silver nanoformulation S7 (noticed in E. coli ATCC 11229 var. S7), and increased after treatment with the other silver nanoformulation (SNF), marked S2 (noticed in E. coli ATCC 11229 var. S2). An increase in FliC protein optical density was identified in turn after Ag+ treatment (noticed in E. coli AgR). Conclusion: the results show that silver nanoformulations (SNF) exert a selective pressure on bacteria, causing both conservative and non-conservative mutations. The proteomic approach revealed that the levels of some proteins changed after treatment with the appropriate SNF.

Introduction

Since ancient times, silver has been known for its antiseptic properties. In the past, it was used as either silver ions or metallic silver. The development of nanotechnology has brought about new possibilities for using the antimicrobial potential of this metal. Among the numerous metals with antibacterial properties, silver is in the spotlight of nanotechnologists due to its promising possibilities of use, both in and outside the clinic. Over the last years, a significant increase in the production of silver nanomaterials has been observed, for different purposes and with various methods being used, which is connected with a high diversity of silver nanomaterials in terms of size, shape, surface charge, and composition. The physical and chemical properties, frequently resulting from the type of synthesis, determine the interaction with bacterial cells, the mode of action, and the bacterial cell's response to silver and other antibacterial agents (such as antibiotics and biocides) [1-3]. The proposed mode of action of silver on bacterial cells has been previously described [2,4].
Direct interactions with bacterial cell structures and a physical impact on the cell envelope (outer membrane, peptidoglycan, or cell membrane) are the basic methods of cell disruption, through the formation of gaps in the cell membrane or the inhibition of biochemical pathways. One of the proposed mechanisms is the uptake of silver into the cell by outer membrane proteins (OMP), such as OmpC and/or OmpF. Then, Ag atoms can interact with internal biomolecules, such as proteins and nucleic acids (DNA or RNA) [5-8]. Randall et al. [8] have confirmed that a bacterial cell deprived of OmpC and OmpF becomes more resistant to silver ions. One of the most important modes of action of silver is the production of reactive oxygen species (ROS) that destroy cell components, causing rearrangement of the cell envelope and changes in biochemical pathways. A separate type of interaction with a bacterial cell is intercalation with the genophore, resulting in the inhibition of cell division or the introduction of changes into the genetic material [2,5,8]. Substantial changes in the sensitivity of some bacterial strains to particular silver nanoformulations (SNF), and to some antibiotics, after long-term treatment with SNF have been observed as the effect of adaptation of the bacterial cell to environmental stress or of mutational changes [1]. There are a number of papers speculating about and confirming the direct cytotoxicity of silver nanoparticles [4,5,8-10], but the genetic basis for the changes still remains unconfirmed. For a deeper explanation, changes in the genes encoding OMP, flagella, fimbriae, lipopolysaccharide (LPS), and exopolysaccharide have been characterized here by assigning BLOSUM62 scores [11,12] and by analysis against the mutfunc mutation database [13]. To our knowledge, this is the first publication where such a detailed analysis of the genome after bacterial treatment with selected SNF is shown. An analysis of the E. coli cell phenotype using the outer membrane proteome has also been included.

Genome Analysis

Genes encoding outer membrane proteins carrying conservative and non-conservative mutations in E. coli ATCC 11229 var. S2, obtained using NGS genetic analysis and grouped according to their involvement in molecular functions, are listed in Table 1. They are related to transmembrane transport (channels, siderophore transporters and others), peptidoglycan (penicillin-binding protein activator, Braun's lipoprotein, membrane-bound lysozyme inhibitor, membrane-bound lytic murein transglycosylase B and murein hydrolase activator), lipids (intermembrane phospholipid transport system lipoprotein and metalloprotease) and others (outer membrane protein assembly factors, cellulose synthase operon protein C, bacteriophage adsorption protein A, poly-beta-1,6-N-acetyl-D-glucosamine N-deacetylase, outer membrane lipoprotein, trans-aconitate 2-methyltransferase). Among the 145 E. coli genes encoding outer-membrane-associated proteins, we detected 39 genes with a total of 94 missense mutations, among which 26 were non-conservative mutations (BLOSUM62 criterion) and 7 mutations were selected by mutfunc as impactful (the altered amino acids were present in a conserved region of the protein or the alterations could potentially destabilize the protein). All the proteins, along with the list of conservative and non-conservative mutations, are summarized in Supplementary Materials Table S1. The products of these selected genes are responsible for bacterial cell structure, transport, secretion, adhesion, and adsorption.
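A minimal sketch of the conservative/non-conservative classification described above, using Biopython's BLOSUM62 matrix and assuming the common convention that a non-negative substitution score marks a conservative change (the exact cutoff used in the paper may differ). The example substitutions are the cusS mutations reported later in the text for E. coli ATCC 11229 var. S2:

```python
from Bio.Align import substitution_matrices

BLOSUM62 = substitution_matrices.load("BLOSUM62")

def classify_missense(ref_aa, alt_aa, threshold=0):
    """Label an amino-acid substitution by its BLOSUM62 score."""
    score = BLOSUM62[ref_aa, alt_aa]
    label = "conservative" if score >= threshold else "non-conservative"
    return score, label

# cusS substitutions reported for E. coli ATCC 11229 var. S2 (one-letter codes).
for mutation in ["T81A", "N117D", "T118S"]:
    ref, alt = mutation[0], mutation[-1]
    score, label = classify_missense(ref, alt)
    print(f"{mutation}: BLOSUM62 score {score:+.0f} -> {label}")
```

The same scoring can be run over every missense call from the NGS pipeline, and candidates flagged as non-conservative can then be cross-checked against the mutfunc database for predicted structural or conservation impact.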
Among the OMP, the following porins were pinpointed with conservative or non-conservative mutations: OmpC, OmpF, OmpG, and OmpN (Supplementary Materials Table S1). The genes encoding proteins related to the OM and to other outer membrane structures, such as the flagellum, fimbriae, LPS, and exopolysaccharide, were analyzed in a different way; the genes selected with mutfunc, together with the list of conservative mutations detected with BLOSUM62, are summarized in Supplementary Materials Table S2 (22 genes in total). The selected genes are mainly responsible for the structure of the flagellum, transport, oxidative stress, and cell respiration. Moreover, the regulators of the genes identified in Supplementary Materials Tables S1 and S2 were pinpointed (40 genes listed in Supplementary Materials Table S3), together with their conservative and non-conservative mutations. A high number of mutations was identified in this case. Impactful mutations selected with mutfunc, accompanied by the predicted effects of the mutations on the proteins, were pinpointed in some genes with transmembrane transporter activity (fimD, ompN, yehB, fhuE, and pgaA) and others, such as yceB (an uncharacterized lipoprotein).

Proteome Analysis

The proteomic analysis of the bacterial strains also revealed significant differences. The comparison of electropherograms of the outer membrane proteome of Escherichia coli BW25113 and its silver-resistant mutant (E. coli AgR) shows several differences, which emerge after prolonged exposure of the wild type strain to Ag+. The protein spots with the most conspicuous alterations between strains were subjected to protein identification using mass spectrometry (Supplementary File S1). All spots selected for analysis are clearly marked in Figure 1. The results showed that some differences did not solely concern the OMP fraction, probably as a result of the isolation method, but also involved molecules recognized as cytosolic or inner membrane proteins with crucial functions, e.g., in the cellular response to oxidative stress, the transport of macromolecules, aerobic/anaerobic metabolism, or the maintenance of heavy metal homeostasis. This could be helpful in understanding the involvement of proteins from other cellular compartments and their role in the cellular response to the action of silver ions and nanoformulations. The presented 2-DE electropherograms (Figure 1) differ in the number of protein spots or in their optical density (OD). One of the noticeable differences between the electropherograms of E. coli BW25113 and its mutant AgR concerns flagellin, a subunit protein involved in the formation of bacterial flagella. All such proteins, irrespective of their position in the gel, were identified as FliC (spots 1-4, Figure 1). Spots 1 and 2 were not detected in the electropherogram of the wild type E. coli BW25113, while spots 3 and 4 differ in staining intensity. Those detected in the E. coli AgR mutant (especially spot no. 4) exhibit a much higher OD that cannot be missed. The scattered location of flagellin could be explained by modifications of FliC made shortly after the translation process, which have an impact on the molecular mass and pI value of the proteins in each spot. In order to obtain additional information about the potential changes of the OMP of E. coli, we decided to expand the analysis by using another strain, E. coli J53, with resistance to Ag+ determined by the pMG101 plasmid (Figure 1, Table 2). In the protein profile of E. coli J53, none of the four previously mentioned spots was detected.
Further analysis of the E. coli BW25113 electropherograms made it possible to identify two changes, concerning spots no. 7 and 8. They were recognized as D-galactose-binding periplasmic protein and superoxide dismutase. The OD of both mentioned spots decreases in the electropherogram of E. coli BW25113 AgR, a strain with loss of the OmpC/F porins and derepression of the CusCFBA efflux transporter (Figure 1, Table 2). The spot identified as D-galactose-binding protein seems to have a higher OD in E. coli J53 than the corresponding spot recognized in E. coli BW25113 AgR. Furthermore, the OD value of the aforementioned E. coli J53 spot may be similar in intensity to its wild type counterpart. In the electropherograms of E. coli BW25113 and its AgR mutant, there were two spots identified as aldehyde reductase YahK (no. 9) and protein CutC (no. 10). These spots could not be found in the electropherogram of E. coli J53. Other differences between the analyzed electropherograms concern two spots identified as chaperone protein DnaK (no. 5) and isocitrate lyase (no. 6). These spots were detected in both BW25113 strains (wt and AgR) with no differences between them. When compared to the protein profile of E. coli J53, they seem more distinctive, because the OD of both detected spots notably decreases in relation to the electropherograms of the BW25113 strains. The last analyzed change was connected with superoxide dismutase (no. 8). Its intensity in the gels of E. coli AgR and J53 was much lower than the OD of the corresponding wild type spot (E. coli BW25113).
The next step of our research was carried out on the E. coli ATCC 11229 strains, wild type and both variants (Figure 1). In the case of both variants, E. coli ATCC 11229 var. S2 and S7, changes in the outer membrane structures were identified that affected the bacterial susceptibility to the tested nanomaterials and made the E. coli cells more resistant to those silver nanoformulations (S2 and S7) [1]. The changes observed in the 2-DE electropherograms were distinctive for each of the obtained variants (strains with silver-driven differences) and concerned the absence of selected spots or fluctuations of their OD. As in the case of the previously described E. coli BW25113, the group of identified spots contained proteins with different localizations (cytosol, periplasm, OM) and functions.
The concentration of OmpC (no. 1) decreased in E. coli ATCC 11229 var. S7 in comparison with its wild type. The opposite was observed in the electropherogram of E. coli ATCC 11229 var. S2, where the OD of this spot was much higher. A more significant decrease in OD (relative to the wild type) was observed for another spot (no. 2), glutaredoxin-4, present on the gels of both variants. The identification results showed that spots no. 5 and 6 were actually the same protein, OppA. Spot no. 6 (present only in the electropherogram of E. coli ATCC 11229 var. S2) probably represents a molecule that had undergone post-translational modification. In E. coli ATCC 11229 var. S7, the OD of spot no. 5 was higher compared with the wild type and var. S2. Nevertheless, because in var. S2 OppA was represented by two spots, the final amount of this protein in the two variant samples could be the same. Spot no. 3 (Figure 1) was recognized as D-galactose-binding periplasmic protein, the same protein identified in the electropherograms of E. coli BW25113. The spot occurred only in the wild type and in E. coli ATCC 11229 var. S7, and an increase in its OD could be observed only in the gel of the variant (Figure 1). The same applied to protein no. 7, identified as a thiosulfate-binding protein. The last analyzed difference between the E. coli ATCC 11229 strains concerned spot no. 4, recognized as malate dehydrogenase (found only in E. coli ATCC 11229 var. S7). A correlation was found between the genetic changes and the proteins detected by 2-DE. It concerned: (i) fliC (a conservative mutation) in E. coli BW25113 AgR, which encodes the subunit protein that polymerizes to form the filaments of the bacterial flagellum (an extracellular component); and (ii) mglB (a non-conservative mutation) in E. coli ATCC 11229 var. S7, which is involved in the active transport of galactose and glucose and in chemotaxis towards the two sugars through interaction with the Trg chemoreceptor (Table 3). Moreover, a conservative mutation was detected in the oppA gene in E. coli ATCC 11229 var. S2 and S7 (Table 3); OppA is a component of the oligopeptide permease, a binding protein-dependent transport system involved in the high-affinity binding of peptides up to five amino acids long. It is worth emphasizing that a conservative mutation was also found in rpoD, which encodes the primary sigma factor during exponential growth, an initiation factor that promotes the attachment of RNA polymerase to specific initiation sites and is then released; preferentially transcribed genes are associated with fast growth and include ribosomal operons, other protein synthesis-related genes, and rRNA- and tRNA-encoding genes. A conservative mutation was likewise found in prfB (which directs the termination of translation in response to the peptide chain termination codons UGA and UAA).

Discussion

Much effort has been made to explain the molecular mode of the antibacterial action of silver nanomaterials, and the observations made are inconsistent. On the basis of published results, it can be speculated that the antibacterial mechanism of action strongly depends on the physical and chemical properties of the silver formulation used, probably including its size, shape, charge, composition and surface [1,2,4,9,10]. Moreover, after repeated, prolonged treatment with Ag+ or various types of silver nanoparticles, phenotypic as well as genetic changes in the bacterial cell have been observed [1,4,8].
Observation of the bacterial structures indicated a strong interaction of SNF with the cell wall [1,4]. Wen-Ru Li et al. [14] have shown that silver nanoparticles (of different sizes, 5 nm and 20 nm, respectively) can cause severe damage to bacterial cells, but they did not differentiate between silver ions and silver nanoparticles. Yan et al. [15] have confirmed that both tested silver forms (ions and nanoparticles) can penetrate into the bacterial cell. They investigated the molecular mechanisms of the antimicrobial activity of silver nanoparticles in Pseudomonas aeruginosa using a proteomic approach and suggested that interference with cell-membrane functions and the generation of intracellular reactive oxygen species (ROS) are the main pathways of the antibacterial activity of silver nanoparticles and silver ions. Differences between the antibacterial modes of action of the two kinds of silver (Ag+ and silver nanoparticles) have also been indicated by others, and initial assessments of their genotoxic consequences have been performed [4,8,14]. Anuj et al. [16] have suggested that nanosilver can modify the membrane integrity of E. coli in addition to obstructing the activity of efflux pumps. Silver nanoparticles may localize inside the E. coli cell membrane, or they may completely separate the cell membrane, causing membrane damage [16]. Our latest results suggest that multiple OMP proteins are responsible for the uptake of silver ions and silver nanoformulations [4]. As we pinpointed, SNF were more efficacious against all tested bacterial strains than silver ions, and this was confirmed with computational methods: the weaker interactions of Ag0 with the amino acids of the inner layers of both investigated proteins allow Ag0 to "slide" inside the cell more effectively, with a lower energy barrier in comparison to Ag+ [4]. However, according to Lok et al. [17], the mode of action of silver ions was similar to that of spherical nano-Ag (average diameter 9.3 nm), but nano-Ag was found to be effective at lower concentrations than silver ions. Using the proteomic approach, we showed a decrease in OmpC protein expression in E. coli ATCC 11229 after exposure to SNF S7, and its increase after treatment with S2 silver (this was observed only in E. coli ATCC 11229, cf. Figure 1, Table 3). In contrast to the E. coli AgR strains carrying mutations after silver ion treatment [8], no changes in ompR were identified in E. coli ATCC var. S2. However, some conservative mutations in ompC and non-conservative mutations in ompF were noticed separately. The observed mutation in the ompC gene could have influenced ompC overexpression and the uptake of the biocides into the bacterial cell. In the cusS gene, we also noticed mutations in E. coli ATCC var. S2 different from those in the E. coli AgR strain obtained by Randall et al. [8]: Thr81Ala, Asn117Asp and Thr118Ser, in contrast to Ile213Ser, Ala312Glu and Arg377His, respectively. Anuj et al. [16] have postulated that the O-antigen (part of the LPS) of the E. coli strain may also be responsible for the interaction of silver nanoparticles with the bacterial cell, so strains with mutations in this part of the cell structure may be less or more sensitive to silver nanoformulations. Besides the O-antigen, the flagellum is considered a bacterial structure contributing to increased resistance to silver nanoparticles. Wen-Ru Li et al. [14] observed that the flagella of the E. coli strain were damaged or even eliminated, finally causing impairment of cell movement.
The upregulation of the FliC protein after treatment with silver ions and nanoparticles has also been observed in bacterial cells by Yan et al. [15]. According to Panáček et al. [10], the flagella of E. coli cause aggregation of silver nanoparticles without any genetic changes. In our studies, the OD of the FliC spot increased after treatment with Ag+ in the case of E. coli BW25113. We identified conservative mutations in the regulatory genes of fliC, in the flagellar basal-body P-ring formation protein gene (flgA) and in a regulator of cell motility (fliZ) in the E. coli ATCC var. S2 strain (Supplementary Materials Table S2). Moreover, we observed upregulation of fliC in the proteinogram of the endogenously silver-resistant strain (E. coli AgR), while the model of an exogenously silver-resistant strain, E. coli J53, exhibited downregulation of this protein (Figure 1). It is interesting that the E. coli ATCC 11229 strain and its variants (S2 and S7), despite the numerous mutations identified in E. coli ATCC 11229 var. S2, remain sensitive to antibiotics, whereas other Gram-negative (such as E. coli, Klebsiella pneumoniae or Enterobacter cloacae) and Gram-positive (Staphylococcus aureus) strains either stay the same or become resistant to antibiotics [4]. The consequences of the phenotypic (including membrane rearrangements) and genetic changes may alter the sensitivity of bacteria to biocides and antibiotics (depending on the properties of the applied silver type) after repeated treatment with silver ions or nanoformulations [1,16]. The implications of these mutations, and how they are correlated with silver nanoformulation treatment, are not yet clear and require further explanation.

Strains

The following bacterial strains were subjected to proteomic and genetic analysis to assess the changes induced by silver treatment: E. coli BW25113 wt (wild type) and its mutant E. coli BW25113 AgR, E. coli J53, E. coli ATCC 11229 wt (wild type) and its variants E. coli ATCC 11229 var. S2 and E. coli ATCC 11229 var. S7 (described in detail in Table 4) [1,8,18,19]. They were stored at −70 °C (Revco) after selection as described previously [1,8]. We compared the proteome changes in all of the tested strains; additionally, one strain, E. coli ATCC 11229 var. S2, was selected for detailed genetic analysis on the basis of the number of selected mutations (general genetic information was given in our previous study [1]). It is worth emphasizing that attempts to obtain variants of E. coli BW25113 resistant to S2 and S7 failed, as this strain remained sensitive to those silver nanoformulations throughout our experiment. * Legend: the silver nanoformulation designated S2 refers to titanium dioxide doped with silver nanoparticles, while the silver nanoformulation designated S7 refers to a water colloid of silver [18,19].

Genome Analysis

Genomic DNA of the bacteria was isolated from 2 mL of overnight bacterial cultures in LB medium using a Genomic Mini Kit (Biomaxima, Lublin, Poland). The purity and concentration of the product were measured with a nanospectrophotometer (Implen). Genomic libraries were prepared using the NEBNext DNA Library Prep Master Mix Set for Illumina, and sequencing was performed at Genomed (Warsaw, Poland) using an Illumina MiSeq.
NGS reads were preprocessed with Cutadapt 1.9.1 [20] and assembled with SPAdes [21], and the contigs were rearranged with progressiveMauve in Mauve 2.4.0 [22,23], using the genome of E. coli K-12 substr. MG1655 (NC_000913.3) as the reference. A list of mutations was generated with Snippy [24]. Membrane-related genes containing mutations were extracted from the list on the basis of the results obtained from the UniProtKB database (keywords: flagellum (KW-0975), bacterial flagellum biogenesis (KW-1005), cell adhesion (KW-0130), exopolysaccharide synthesis (KW-0270), fimbrium biogenesis (KW-1029), fimbrium structural protein (KW-0281), flagellar rotation (KW-0283), lipopolysaccharide biosynthesis (KW-0448), membrane (KW-0472), and the organism Escherichia coli (strain K12) (83333)) and from the current E. coli Membranome database [25]. Non-synonymous mutations in those genes were assessed in two ways: by assigning the BLOSUM62 score [11,12] and by analysis against the mutfunc mutation database [26]. Non-conservative mutations were selected as those with a negative BLOSUM62 score or those proposed directly by mutfunc. NGS reads are available in the NCBI SRA database (SRR9733699, SRR9733700, SRR9733697), and the assembled genomes in the NCBI Nucleotide database (VLTC00000000, VLTB00000000, VLTA00000000, and ASRI00000000).

Proteome Analysis

OMP isolation was performed according to the Murphy and Bartos procedure with minor modifications [27,28]. An overnight culture of bacteria (LB medium, 37 °C, 18-24 h) was harvested (1500× g, 4 °C, 20 min) and suspended in 1.25 mL of buffer A; 11.25 mL of buffer B was then added and the mixture was stirred (room temperature, 1 h). Next, 3.13 mL of cold ethanol was added slowly in order to precipitate the nucleic acids, and the mixture was centrifuged (17,000× g, 4 °C, 10 min). Then, 46.75 mL of cold ethanol was added to the supernatant (17,000× g, 4 °C, 20 min). The pellet was dried, resuspended in 2.5 mL of buffer C and stirred (room temperature, 1 h). The solution was incubated at 4 °C overnight. The OMPs remained in the soluble fraction of the buffer, and the insoluble material was removed by centrifugation (12,000× g, 4 °C, 10 min). A BCA Protein Assay Kit was used to measure the total protein concentration, and a ReadyPrep 2-D Cleanup Kit (Bio-Rad) was used for sample preparation. Isoelectric focusing (IEF) was performed by a stepwise increase of voltage as follows: 250 V, 20 min; 4000 V, 120 min (linear); and 4000 V (rapid) until the total volt-hours reached 14 kVh. The IPG strips were loaded onto the top of the gel slabs using 0.5% agarose in the running buffer. Electrophoresis was carried out at 4 °C at constant power (1 W) until the dye front reached the bottom of the slab [29,30]. Protein spots were visualized with Coomassie Brilliant Blue staining, and PDQuest software (Bio-Rad) was used for protein spot pattern analysis. Protein spots selected for mass spectrometry were subjected to in-gel tryptic digestion according to Shevchenko et al. [31], and mass spectrometry analysis was then performed using a MALDI-TOF ultrafleXtreme instrument (Bruker Daltonics). The peptides were eluted directly onto a MALDI plate using a solution of α-cyano-4-hydroxycinnamic acid as the matrix. Protein identification was accomplished with a bioinformatics platform (ProteinScape, Bruker Daltonics) and MASCOT (Matrix Science) as the search engine against protein sequence databases (NCBI, SwissProt).
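To make the mutation-classification rule in the Genome Analysis above concrete, the following is a minimal sketch (not the authors' actual script) of scoring amino acid substitutions against BLOSUM62, assuming Biopython is available; the mutfunc lookup is not reproduced here. Following the text, a substitution with a negative BLOSUM62 score is treated as non-conservative.

```python
# Minimal sketch: classify amino acid substitutions by BLOSUM62 score,
# treating negative scores as non-conservative (as described above).
from Bio.Align import substitution_matrices  # Biopython >= 1.75

BLOSUM62 = substitution_matrices.load("BLOSUM62")

# Three-letter -> one-letter codes for the residues appearing in the text.
AA = {"Thr": "T", "Ala": "A", "Asn": "N", "Asp": "D", "Ser": "S",
      "Ile": "I", "Glu": "E", "Arg": "R", "His": "H"}

def classify(mutation: str) -> str:
    """Classify a substitution written as, e.g., 'Thr81Ala'."""
    wild_type, mutant = AA[mutation[:3]], AA[mutation[-3:]]
    score = BLOSUM62[wild_type, mutant]
    label = "non-conservative" if score < 0 else "conservative"
    return f"{mutation}: BLOSUM62 score {score:+.0f} -> {label}"

# cusS substitutions named in the Discussion (var. S2 vs. Randall et al. [8]).
for m in ["Thr81Ala", "Asn117Asp", "Thr118Ser",
          "Ile213Ser", "Ala312Glu", "Arg377His"]:
    print(classify(m))
```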
Conclusions

Both E. coli wild types, ATCC 11229 and BW25113, were treated with silver ions and silver nanoformulations (SNF). E. coli ATCC 11229 changed its sensitivity mainly after treatment with SNF S2, the change in sensitivity was smaller for SNF S7, and no change of sensitivity was observed for Ag+; E. coli BW25113, by contrast, did not change after exposure to SNF S2 or SNF S7 but showed changes after silver ion treatment. Silver nanoformulations exert a selective pressure on bacterial cells, causing both conservative and non-conservative mutations and/or phenotypic changes, in a different way than silver ions do. Genetic analysis by whole-genome sequencing provided a better understanding of the interactions between silver nanoformulations and E. coli strains. The following genes were selected with mutfunc and analyzed with BLOSUM62 as carrying conservative and non-conservative mutations: genes encoding proteins located in the outer membrane (including OmpC, OmpF, OmpG, and OmpN) and their regulators, and genes related to the OM and other outer membrane structures, such as the flagellum, fimbriae, LPS, and exopolysaccharide. Using the proteomic approach, with protein isolation and a 2-DE experiment, we showed that the optical density of some protein spots in the 2D electropherograms changed depending on the silver form used for treatment; these included OmpC, FliC, isocitrate lyase AceA, chaperone protein DnaK, D-galactose-binding protein MglB, thiosulfate-binding protein CysP, malate dehydrogenase Mdh, glutaredoxin-4 GrxD and periplasmic oligopeptide-binding protein OppA. The molecular mechanism of the antibacterial activity of silver and the molecular changes in bacterial cells strongly depend on the physical and chemical properties of the tested SNF form. At this time, it is difficult to conclude which physicochemical properties determine antibacterial cytotoxicity and genotoxicity. Therefore, the mode of action of antibacterial SNF is apparently much more complex, and the phenomenon of bacterial resistance to silver requires further and deeper studies.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/pathogens10070817/s1. Table S1: Genes encoding outer membrane proteins selected with mutfunc and the list of conservative and non-conservative mutations selected with BLOSUM62; Table S2: The other genes encoding proteins connected with the outer membrane (OM), flagellum, fimbria, lipopolysaccharide (LPS) and exopolysaccharide selected with mutfunc and the list of conservative mutations selected with BLOSUM62; Table S3: Regulatory genes selected with the EcoCyc database and the list of conservative and non-conservative mutations selected with BLOSUM62; File S1: Mass spectrometry results of each spot for E. coli BW25113 and E. coli ATCC 11229.

Author Contributions: A.K.: conceptualization, resources, genetic investigation, genome data analysis, proteome data analysis, writing, drawing conclusions; M.S.: proteome investigation, partial conceptualization, including protein isolation and 2-DE electrophoresis, proteome data analysis, writing; M.W.: genome analysis, writing; B.D.: protein isolation and 2-DE electrophoresis supervision, writing; K.K., E.K.: MALDI sequencing and identification of proteins, supervision, writing; J.R.: supervision, writing-review and editing; G.B.-P.: supervision, resources, writing-review and editing. All authors have read and agreed to the published version of the manuscript.
Funding: This work was partially supported by the National Science Centre (grant number 2017/01/X/NZ1/00765) and by a special-purpose grant from the Ministry of Science and Higher Education for conducting research, development work, and tasks related to the development of young scientists and doctoral participants (grant number 0420/2559/18). Publication of this article was financially supported by the Excellence Initiative-Research University (IDUB) program for the University of Wroclaw.
Students' epistemological beliefs from grade level perspective and relationship with science achievement in Kenya

ABSTRACT

This study investigated the influence of grade level on the development of science epistemological beliefs and the relationship between science epistemological beliefs and science achievement among co-educational secondary schools of Homa Bay County, Kenya. The study employed cross-sectional and correlational survey designs with purposive sampling. The Epistemological Beliefs Questionnaire (EBQ) was used to measure science epistemological beliefs. The instrument was administered to 214 students from 2 co-educational schools (Grade 9, n = 116; Grade 12, n = 98). Students' achievement scores in Biology, Chemistry and Physics were combined to compute Science Achievement Scores (SAS). The data were analysed by grade level using independent-samples t-tests and by dimensions and achievement scores using multiple regression analysis. The findings indicate statistically significant grade level differences in terms of source, certainty and development, and non-significant grade level differences in terms of justification. The findings also indicate that the certainty and justification dimensions were significant predictors of science achievement. It is concluded that grade level has an influence on the development of epistemological beliefs (source, certainty and development) and that the certainty and justification dimensions were predictors of science achievement. Implications for practice and further research are explained herein.

Introduction

One of the goals of science education in Kenya is to promote scientific knowledge about the natural world and to understand the connections between scientific knowledge and the issues and problems of modern society (Kenya Institute of Education, 2002). For this goal to be achieved, students of science education need to have an understanding of the nature of scientific knowledge and the process of knowing science. Epistemological beliefs refer to beliefs about the nature of knowledge and knowing (Hofer & Pintrich, 1997). Science epistemological beliefs are therefore conceptions about the nature of scientific knowledge and the process of knowing science (Conley, Pintrich, Vekiri, & Harrison, 2004; Hofer, 2008; Hofer & Pintrich, 1997). In science education, epistemic competence is of particular significance since it is a determinant of the acquisition of coherent knowledge that is of practical value in novel and real-life contexts. The acquisition of coherent knowledge is of strategic value, especially at a time when greater premium is placed on the transfer of knowledge and the provision of solutions to problems in society (Dede, 2007; Malamed, 2017; Orey, 2010; Ozbay & Koskal, 2016). According to Conley et al. (2004), developmental research in epistemological beliefs has over time raised important questions about what changes and how to describe the changes. In response to this discourse, two main categories of models have come to the fore, namely developmental and multidimensional models (Kaya, 2017). In the developmental models of epistemological beliefs, the construct is unidimensional and individuals move through a patterned sequence of developmental stages.
According to Hofer (2001), in the developmental models epistemological thinking begins with an objectivist view of knowledge, moves to a situation where individuals begin to allow for the uncertainty of knowledge, then to extreme subjectivity, and finally to the ability to acknowledge the relative merits of different points of view. For instance, Perry's model (1970) conceptualised epistemological beliefs as developing in a successive stage-like fashion from dualism to multiplism, relativism and finally commitment. According to Baxter Magolda (1992), epistemological beliefs progress through four stages known as absolute knowing, transitional knowing, independent knowing and contextual knowing. Boyes and Chandler (1992) concluded that young people move through a developmental sequence from naïve realism to a dogmatism-scepticism axis and finally to a post-sceptical rationalism. Kuhn, Cheney, and Weinstock (2000) proposed a developmental model in which epistemological beliefs progress through four levels: realist, absolutist, multiplist and evaluativist. King and Kitchener (2004) proposed a reflective judgement developmental model of seven stages grouped into three levels characterised by pre-reflective thinking (stages 1-3), quasi-reflective thinking (stages 4-5) and reflective thinking (stages 6-7). In contrast to the developmental models of epistemological beliefs, Schommer (1990) contended that personal epistemology is too complex to be captured in a unidimensional fashion and hence proposed that epistemological beliefs exist in a multidimensional system of more or less independent beliefs. This implies that there are multiple beliefs to consider and that these beliefs may or may not develop synchronously. Accordingly, some dimensions may emerge, or become positively related to learning, earlier than others (Lodewyk, 2007). The implication is that one may be mature in one dimension of epistemological belief and quite immature in another. Conversely, one may be mature in one dimension of a specific domain but quite immature in the same dimension in another domain. In this model, people may hold sophisticated and naïve beliefs simultaneously. Consequently, Schommer (1990) developed a model of five dimensions: stability of knowledge (tentative to unchanging), structure of knowledge (isolated to integrated), source of knowledge (authority to observation or reason), speed of acquisition (quick or gradual) and control of acquisition (fixed at birth or lifelong improvement). Despite the existence of empirical evidence for the five dimensions, Hofer and Pintrich (1997) found problems with Schommer's dimensions of speed of acquisition and control of acquisition, holding that these dimensions concern the nature of learning rather than the nature of knowledge and knowing. Hofer and Pintrich (1997) therefore suggested that there are four general epistemological dimensions: certainty of knowledge (stability), simplicity of knowledge (structure), source of knowing (authority), and justification for knowing (evaluation of knowledge claims). In addition to the developmental and the multidimensional models, other models have arisen. For instance, Hammer and Elby (2003) proposed a view of personal epistemology that is more situated and less stable, referred to as the epistemological resources model. According to this model, individuals do not have a fixed set of beliefs but rather a range of resources for understanding knowledge.
These resources are activated in different contexts and can be linked in a multiplicity of combinations. Chinn, Buckland, and Samarapungavan (2011) expanded a framework for models of epistemological beliefs, based on the position of Hofer and Pintrich (1997), with a philosophical dimension. This framework consists of a network of interconnected cognitions that cluster into five distinguishable components: epistemic aims and epistemic value; the structure of knowledge and other epistemic achievements; the sources and justification of knowledge and other epistemic achievements, together with related epistemic stances; epistemic virtues and vices; and reliable and unreliable processes for achieving epistemic aims. The model by Conley et al. (2004) arose from other models (Elder, 2002; Hofer, 2000; Hofer & Pintrich, 1997) out of a desire to have a discipline-specific model that reflects what happens in science education, where the use of evidence and the justification of knowledge claims are an explicit focus of teaching. It marked the turning point towards a model with a specific focus on the domain of science. Scholars now opine that the multidimensional model of epistemological beliefs is significant in establishing whether views on the science epistemological belief dimensions are separate and may develop in an asymmetrical fashion (Hofer, 2008; Kampa, Neumann, Heitmann, & Kremer, 2016; Lee, Liang, & Tsai, 2016; Lodewyk, 2007). Characterising students' science epistemological beliefs helps in comprehending students' thinking and reasoning, which may guide pedagogic practices in science classrooms. The present study, therefore, proceeds from the model proposed by Conley et al. (2004).

Literature review

The literature in this section is reviewed in terms of the relationships of epistemological beliefs with grade level and with academic achievement.

Epistemological beliefs and grade level

Different studies have been carried out to establish the influence of grade level on the development of epistemological beliefs from a domain-general model perspective (Cano, 2005; Eren, 2007; Schuyten, 2005; Topkaya, 2015; Yenice, 2015). Cano (2005) carried out a study to investigate the changes in epistemological beliefs among Spanish secondary school students in middle, junior high and senior high grades using Schommer's questionnaire (Schommer, 1990). The findings indicated that throughout secondary education, epistemological beliefs undergo a change, becoming less naive and simplistic and more realistic and complex. Schuyten (2005) examined the influence of grade level on the development of epistemic beliefs in South California using multiple measures, the Schommer-Aikins questionnaire (Schommer-Aikins, Mau, Brookhart, & Hutter, 2000) and Conley et al.'s (2004) questionnaire, among 6th and 8th graders in an urban middle school. The findings indicated little evidence that epistemic beliefs develop significantly across the middle school years. A study by Eren (2007) examined the differences among the epistemological beliefs of Turkish undergraduate year one and year two students who were pursuing fine arts teaching, physical education and business administration, using Schommer's epistemological questionnaire (Schommer, 1990). The findings indicated that first-year students had more sophisticated effort beliefs than second-year students, while second-year students had more sophisticated unchanging-truth beliefs than first-year students.
Topkaya (2015) investigated how epistemological beliefs vary by grade level among Turkish pre-service teachers using Schommer's (1990) epistemological belief scale. The findings revealed significant differences between 1st and 4th graders, in favour of first graders, for social studies and science and technology pre-service teachers. Yenice (2015) carried out a study to investigate the relationships between the epistemological beliefs of student teachers and grade level using a Turkish adaptation of Schommer's (1990) epistemological beliefs questionnaire. The findings showed that grade level did not have a significant impact on the epistemological beliefs of the participants and that their beliefs did not change based on grade level. The findings of these studies on the influence of grade level on epistemological beliefs are mixed and inconclusive. Other scholars have carried out domain-dependent studies on epistemological beliefs, particularly in the science disciplines, for instance Fatma (2009), Aydemir, Aydemir and Boz (2013) and Shaakumeni (2019). Fatma (2009) set out to establish that epistemological beliefs are multidimensional and vary as a function of grade level among 6th, 8th and 10th graders in Ankara using the questionnaire by Conley et al. (2004). The findings revealed that epistemological beliefs develop over time: the 10th-grade students had more sophisticated beliefs about the source, certainty and development of knowledge compared with 6th- and 8th-grade students. Aydemir, Aydemir and Boz (2013) carried out a study to investigate how Turkish grade 9 and 11 students change with grade level using the epistemological beliefs questionnaire by Conley et al. (2004). The results showed that students' epistemological beliefs became less sophisticated with respect to development and justification as grade level increased. On the other hand, 11th-grade students believed that scientific knowledge may not always be correct (certainty). In a study by Shaakumeni (2019) to validate a questionnaire for assessing Namibian students' science epistemological beliefs, grades 11 and 12 were examined using the epistemological beliefs questionnaire by Conley et al. (2004). The findings indicated statistically significant differences in beliefs about source and certainty in terms of grade level. The findings from these science domain-based studies are also equivocal, and the uncertainty of the findings within the domain of science calls for continued investigation.

Epistemological beliefs and academic achievement

The relationship between epistemological beliefs and academic achievement has been studied using Schommer's domain-general model of epistemological beliefs (Arslantas, 2016; Cano, 2005; Lodewyk, 2007; Ricco, Pierce, & Medinilla, 2010; Savoji, Niusha, & Boreiri, 2013; Schuyten, 2005; Topcu & Yilmaz-Tuzun, 2009). Cano (2005) carried out a study to investigate the relationship between epistemological beliefs and academic achievement among Spanish secondary school students in middle, junior high and senior high grades using Schommer's questionnaire (1990). The findings indicated that epistemological beliefs influenced academic achievement directly. Schuyten (2005) examined the relationships between epistemic beliefs and academic performance for 6th and 8th graders in an urban middle school in South California using self-report measures of epistemic beliefs and students' science grades.
The findings indicated that the development dimension was positively related to science grades. Lodewyk (2007) investigated the relationship between epistemological beliefs and academic performance using Schommer's questionnaire (1993) and the academic scores of secondary school students from Western British Columbia. The findings indicated that the 'Fixed and Quick Ability to Learn' and 'Simple Knowledge' dimensions of the instrument were significantly related to estimates of overall achievement. Topcu and Yilmaz-Tuzun (2009) carried out a study involving 4th-, 5th-, 6th-, 7th- and 8th-grade students to establish the relationship between epistemological beliefs and science achievement using the epistemological beliefs questionnaire of Schommer (1990). The findings revealed that students' epistemological beliefs were associated with science achievement. Ricco et al. (2010) investigated the relationship between epistemological beliefs and science achievement grades among 6th-, 7th- and 8th-grade students in California using Schommer's questionnaire (Schommer, 1990). The regressions predicting science grades showed that epistemic beliefs successfully predicted science achievement among early adolescents. Savoji et al. (2013) investigated the nexus between epistemological beliefs and high school students' academic achievement. The results of multiple regression analysis revealed that academic achievement can be predicted by dimensions of epistemological beliefs and motivational strategies; among the dimensions of epistemological beliefs, knowledge stability and acquisition speed were negative predictors of academic achievement. Arslantas (2016) carried out a study aimed at identifying the relationship between teacher candidates' epistemological beliefs and academic achievement, using an epistemological beliefs scale made up of three dimensions. The findings showed that the teacher candidates' epistemological beliefs differed by major and that there was a statistically significant relationship between only one dimension of epistemological beliefs and academic achievement. Overall, the findings on the relationship between Schommer's domain-general model of epistemological beliefs and academic achievement revealed that different dimensions of epistemological beliefs were strongly related to academic achievement. The relationship between epistemological beliefs and science achievement has also been investigated using Conley et al.'s (2004) domain-specific model of science epistemological beliefs (Chen & Pajares, 2010; Ozkan, 2008; Shaakumeni, 2019). Ozkan (2008) explored the relationships between elementary students' beliefs and their science achievement among 7th-grade students from Ankara. The findings indicated that students' epistemological beliefs predicted science achievement directly, with the source and certainty dimensions predicting science achievement. Chen and Pajares (2010) investigated the relationships of epistemological beliefs with academic motivation and science achievement among 6th-grade students. The results of path analysis showed that epistemological beliefs played a mediating role in the association of implicit theories of ability with achievement goal orientations, self-efficacy and science achievement.
Greene, Cartiff, and Duke (2018) carried out a meta-analysis of non-experimental studies in the literature and found that epistemic cognition, as measured predominantly in terms of beliefs, was positively correlated with academic achievement (r = 0.16, p < 0.001), an overall effect size of 0.16 indicating a small but meaningful relationship. Further, instruments focussing on the development and justification of knowledge had higher correlations with academic achievement than those focussed on constructs related to authority. In a study by Shaakumeni (2019) exploring the relationship between science epistemological beliefs and achievement in science, grades 11 and 12 were examined using Conley et al.'s (2004) epistemological beliefs questionnaire. The findings indicated that the dimensions of certainty and justification statistically significantly predicted achievement in science. The findings of the reviewed studies have indicated varied relationships between different dimensions of epistemological beliefs and academic achievement, and there is a need for continued investigation of these constructs to establish the relationships unequivocally. There is also a research gap on this construct from an African perspective, which is important for a holistic conception in terms of regions and, at the same time, for further exploring the relationship between epistemic beliefs and achievement.

Context of the study

The current structure of education in Kenya consists of four years of secondary education and a minimum of four years of university education (Kenya Institute of Education, 2002). The schools implement a centralised national curriculum under the supervision of the Ministry of Education. After four years of secondary education, the students take a compulsory Kenya Certificate of Secondary Education (KCSE) examination, which is administered by the Kenya National Examination Council (KNEC). Apart from summative assessment, clusters of schools organise joint assessment at the formative level. The performance of students in the national examination is a function of many factors and has been low. Table 1 shows students' percentage mean performance in Biology, Chemistry and Physics for the years 2013 to 2018. As can be seen from these percentage mean performances, students' performance has been low for the six years, implying that students' scores in those years were more on the lower side of the distribution. This situation makes it of research significance to find out the correlates of learning and learning outcomes, such as science epistemological beliefs.

The current investigation

Contemporary researchers in epistemological beliefs have emphasised the significance of this construct in affecting cognitive and non-cognitive variables of learning, either directly or indirectly, in specific disciplines and general domains. At the same time, scholars in epistemological beliefs have in many instances sought to establish that changes in epistemological beliefs result from age, grade level and time, among other variables. The findings of most of these studies have been inconsistent. The current trend in epistemological research is to view the construct as multidimensional and variable in its developmental trajectory. Secondly, it has been documented that students in Kenya continue to register low achievement in the sciences, as can be seen in Table 1 above.
From a research perspective, it is significant to find out whether the nature of epistemological beliefs could be contributing to this low achievement in science. Not much research has been done in Africa and in Kenya on this construct and its relationship to science achievement. The purpose of this study was therefore to investigate the influence of grade level on the development of science epistemological beliefs and the relationship between science epistemological beliefs and science achievement among co-educational secondary schools in Kenya.

Research objectives

The study was guided by the following objectives: (i) to determine the influence of grade level on the development of science epistemological beliefs; and (ii) to identify the relationship between science epistemological beliefs and science achievement.

Theoretical framework

There is a growing consensus that the concept of epistemological beliefs is multidimensional, multi-layered and context-sensitive (Buehl & Alexander, 2006; Hofer, 2016; Muis, Bendixen, & Haerle, 2006). That is, an individual's epistemological belief system comprises multiple independent dimensions and may vary across different levels of context specificity (Muis et al., 2006; Muis & Gierus, 2014; Schommer, 1990). In the same vein, Greene, Sandoval, and Braten (2016) recommended domain-specific approaches for such studies, since there is evidence that epistemological beliefs may vary across different domains. The present study therefore limited its focus to students' epistemological beliefs in the science domain. In the domain of science, Conley and colleagues, drawing from Hofer (2000) and Elder (2002), theorised that science epistemological beliefs consist of four dimensions and develop in an asynchronous fashion (Conley et al., 2004); this conceptualisation has been taken up in later work, for example by Kampa et al. (2016), Lee et al. (2016) and Winberg, Hofverberg, and Lindfors (2019). The present study therefore adopted a multidimensional approach to personal epistemology in the context of science, as theorised by Conley and colleagues, and was conceptualised such that science epistemological beliefs are influenced by grade level and related to achievement in science.

Methodology

This section describes the research design of the study, the participants, the measures and the statistical analyses.

Research design

The study adopted a blend of cross-sectional and correlational survey designs. The cross-sectional survey model was significant in determining the developmental characteristics of epistemological beliefs in students of co-educational secondary schools at different grades (Fraenkel & Wallen, 2008; Gall, Borg, & Gall, 2003). This was done without manipulating variables. The correlational component was useful in establishing whether the variables of science epistemological beliefs and science achievement change together and their degree of change. Further, it was useful in identifying the predictive relationships between students' science epistemological beliefs and science achievement (Ary, Jacobs, & Razavieh, 1996; Fraenkel & Wallen, 2008).

Participants

The participants in this study were purposively drawn from two schools in Homa Bay County, in grades 9 and 12. Purposive sampling is valuable where the characteristics of a population are known or abundant in the data intended for the study (Gall et al., 2003). This technique was used to ensure that only schools with the requisite characteristics (grades 9 and 12) were part of the study sample.
Since gender imbalances exist in some schools, care was taken during sampling to ensure that only schools that were balanced in terms of gender were included in the study. Data were therefore collected from grade 9 students at the point of entry into secondary education and from grade 12 students at the point of exit, in order to establish change in epistemological thinking. There were 116 grade 9 students, representing 54.20%, and 98 grade 12 students, representing 45.80%; the same sample comprised 54.20% boys and 45.80% girls. The ages of the grade 9 students ranged from 15 to 17, with a mean of 15.52 years, and the ages of the grade 12 students ranged from 18 to 20, with a mean of 18.45 years. The sample for the study was therefore 214 students. Table 2 shows the sample size by gender and grade level.

Measures

Two instruments were used in this study to collect data: the Epistemological Beliefs Questionnaire (EBQ) and Science Achievement Scores (SAS). The EBQ was adapted from Conley et al. (2004), developed in the USA and validated for use in Africa by Shaakumeni (2019); the validation involved rewording items to make them more meaningful in the African context. The questionnaire focuses on science epistemological thinking and has 26 items in 4 dimensions: source (5 items), certainty (6 items), development (6 items) and justification (9 items). The participants rated the items on a 5-point Likert scale (1 = Strongly Disagree, 5 = Strongly Agree). The instrument was piloted in a school that was not participating in the study. Piloting showed the following reliabilities for the dimensions: source 0.83, certainty 0.80, development 0.71 and justification 0.78, giving an overall reliability of 0.78. The source dimension items measure beliefs about scientific knowledge residing in external authorities (for example, whatever the teacher says in science class is true). The certainty dimension refers to belief in a right or a wrong science answer (for example, all questions in science have one right answer). The development dimension concerns beliefs about science as an evolving and changing subject (for example, ideas in science books sometimes change). The justification dimension concerns the role of science experiments and how individuals justify knowledge (for example, it is good to try experiments more than once to be sure of your findings). The items for the source and certainty scales were reverse-worded, and scoring was consequently reversed to reflect this, so that higher scores on these scales reflect epistemic competence. The instrument was administered for 30 minutes to students in grades 9 and 12 in the two schools, with the investigator assisted by the science teachers in the sampled schools. The data were collected in term 2 of the Kenyan school calendar. Science Achievement Scores (SAS) were obtained from the schools where data were collected: the investigator obtained students' scores in Biology, Chemistry and Physics from official school documents. In Kenya, the practice is to examine students in all subjects at the end of the term, and it is also common for a few schools to evaluate students jointly to enhance hard work and competition. In this regard, teachers come together, set the exams, moderate them and then administer them; the evaluation ends with joint marking and the computation of results. The schools that were sampled did common standardised exams.
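To illustrate the scoring just described, the following is a minimal sketch with hypothetical responses and illustrative column names (not the study's actual data): source and certainty items are reverse-scored (6 minus the raw value on the 1-5 scale), and each dimension is totalled so that higher scores reflect epistemic competence.

```python
# Minimal sketch of EBQ scoring with hypothetical data; the dimension maxima
# in the comments (25/30/30/45) match those reported for the instrument.
import pandas as pd

responses = pd.DataFrame(
    {**{f"src{i}": [2, 4] for i in range(1, 6)},      # 5 source items
     **{f"cert{i}": [3, 5] for i in range(1, 7)},     # 6 certainty items
     **{f"dev{i}": [4, 4] for i in range(1, 7)},      # 6 development items
     **{f"just{i}": [5, 3] for i in range(1, 10)}})   # 9 justification items

# Reverse-score the source and certainty items (1-5 Likert scale).
reverse = [c for c in responses.columns if c.startswith(("src", "cert"))]
responses[reverse] = 6 - responses[reverse]

scores = pd.DataFrame({
    "source": responses.filter(like="src").sum(axis=1),         # max 25
    "certainty": responses.filter(like="cert").sum(axis=1),     # max 30
    "development": responses.filter(like="dev").sum(axis=1),    # max 30
    "justification": responses.filter(like="just").sum(axis=1), # max 45
})
print(scores)
```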
A student's scores in the Biology, Chemistry and Physics subjects were averaged to obtain a single Science Achievement Score (SAS).

Statistical analyses

The data from the questionnaire were subjected to statistical treatment according to the dimensions of the instrument. Each question in the questionnaire was worth a minimum of 1 point and a maximum of 5 points, so the maximum possible scores were 25 for the source dimension, 30 for certainty, 30 for development and 45 for justification. Descriptive statistics were used to summarise the raw data, and inferential statistics were used to test the research hypotheses at a significance level of α = 0.05. To determine the influence of grade level on the development of science epistemological beliefs, independent-samples t-tests were carried out. The t-test is an inferential statistical procedure used to determine whether the means of two samples are significantly different (Fraenkel & Wallen, 2008). The dependent variable (science epistemological beliefs) was measured on a ratio scale, the groups were mutually exclusive, there were no relationships between observations in each group, and grade level consisted of two categorical independent groups; in this regard, the independent-samples t-test was appropriate for this analysis (Gall et al., 2003). To determine the relationship between science epistemological beliefs and science achievement, multiple regression analysis was carried out (Fraenkel & Wallen, 2008; Gall et al., 2003). Multiple regression analysis is robust in determining the overall contribution of the dimensions of epistemological beliefs to achievement, the predictive ability of each dimension on science achievement, and the significance of the epistemological beliefs in accounting for the variance in science achievement. Data analysis was conducted with the aid of the Statistical Package for the Social Sciences (SPSS) version 23.

Results

The first objective of this study was to determine the influence of grade level on the development of science epistemological beliefs. Pursuant to this, independent-samples t-tests were carried out for each dimension of epistemological beliefs. Table 3 presents the means, standard deviations, four separate independent-samples t-tests for the four dimensions of science epistemological beliefs of grade 9 and 12 students, p-values and effect sizes. Levene's test was used to test the homogeneity of variances of the samples; the resulting p-values were greater than 0.05, showing that the differences between the sample variances were not statistically significant, and the t-tests are consequently based on equal variances assumed. As can be seen from Table 3, the mean scores of the students in grades 9 and 12 were above the midpoints (12.5, 15, 15 and 22.5 for source, certainty, development and justification, respectively) for all the dimensions; recall that the scoring in the source and certainty dimensions was reversed. The grade 12 students had higher mean scores than the grade 9 students in all the dimensions of the epistemological beliefs.
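As a concrete illustration of the analyses described above, the following is a minimal sketch with simulated scores (the group sizes match the study, but the values and the noise-only outcome are hypothetical, so the printed numbers are meaningless): Levene's test checks homogeneity of variance, the independent-samples t-test compares grade means, Cohen's d is computed from the pooled standard deviation, and an OLS regression mirrors the five-predictor model. With the study's real data, the B, β, t and p values reported in Tables 3 and 4 would be read off these outputs.

```python
# Minimal sketch of the statistical analyses described above, on simulated
# data (hypothetical values; variable names are illustrative).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)

# --- Grade-level comparison for one dimension (e.g., source, max 25) ---
grade9 = rng.normal(18.0, 3.5, 116)   # 116 grade 9 students
grade12 = rng.normal(19.5, 3.5, 98)   # 98 grade 12 students

lev = stats.levene(grade9, grade12)   # p > .05 -> equal variances assumed
t = stats.ttest_ind(grade9, grade12, equal_var=lev.pvalue > 0.05)

n1, n2 = len(grade9), len(grade12)
pooled_sd = np.sqrt(((n1 - 1) * grade9.var(ddof=1)
                     + (n2 - 1) * grade12.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (grade12.mean() - grade9.mean()) / pooled_sd
print(f"Levene p = {lev.pvalue:.3f}; t({n1 + n2 - 2}) = {t.statistic:.2f}, "
      f"p = {t.pvalue:.3f}; d = {cohens_d:.2f}")

# --- Multiple regression: four dimensions + grade dummy predicting SAS ---
n = n1 + n2
data = pd.DataFrame({
    "source": np.concatenate([grade9, grade12]),
    "certainty": rng.normal(22, 4, n),
    "development": rng.normal(23, 3, n),
    "justification": rng.normal(35, 5, n),
    "grade12": np.r_[np.zeros(n1), np.ones(n2)],  # 0 = grade 9, 1 = grade 12
    "SAS": rng.normal(50, 10, n),                 # placeholder outcome
})
X = sm.add_constant(data.drop(columns="SAS"))
ols = sm.OLS(data["SAS"], X).fit()
print(ols.summary())   # unstandardised B, t and p values, R^2, overall F

# Standardised beta values: z-score all variables, then refit.
z = (data - data.mean()) / data.std(ddof=1)
print(sm.OLS(z["SAS"], sm.add_constant(z.drop(columns="SAS"))).fit().params)
```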
Independent-samples t-tests showed that the grade differences described above were statistically significant in terms of the source of scientific knowledge, t(212) = −3.28, p < 0.05; the certainty of scientific knowledge, t(212) = −2.87, p < 0.05; and the development of scientific knowledge, t(212) = −5.12, p < 0.05, all in favour of the grade 12 students. However, there was no statistically significant difference between grade 9 and grade 12 students with regard to the justification of scientific knowledge, t(212) = −1.52, p > 0.05. The effect sizes, as shown by the Cohen's d values, indicate that 44.1% of the variance in source of scientific knowledge, 38.7% of the variance in certainty of scientific knowledge, 66.5% of the variance in development of scientific knowledge and 20.7% of the variance in justification of scientific knowledge were related to grade level. The second objective of the study concerned the relationship between science epistemological beliefs and science achievement. Pursuant to this, multiple regression analysis was carried out to establish the predictive ability of the various dimensions of epistemological beliefs on science achievement; a dichotomous variable of grade level (grade 9 and 12) was added to control for any grade level differences. Table 4 shows the output of the multiple regression analysis with unstandardised coefficients (B values) and standardised coefficients (β values), representing the unique contribution that each of the predictors (the dimensions of epistemological beliefs) made to science achievement; the t and p values are also shown. Table 4 indicates that certainty (β = 0.168, p < 0.05), justification (β = 0.162, p < 0.05) and grade level (β = 0.198, p < 0.05) positively predicted students' science achievement. On the other hand, source (β = 0.041, p > 0.05) and development (β = 0.024, p > 0.05) did not predict students' science achievement. The five predictors (source, certainty, development, justification and grade level) taken together explained 13% of the variance in science achievement, F(5, 208) = 6.194, p < 0.05, showing that the regression model was a good fit for the data.

Discussion

The descriptive statistics on the influence of grade level on the development of science epistemological beliefs indicate that the participants of the current study generally had epistemic competence regarding the nature of knowledge and knowing: for each of the dimensions (i.e., source, certainty, development and justification), the students obtained mean values above the midpoints. This might mean that the sampled students generally adapted their epistemic cognition to match their learning environment in all the dimensions of the epistemological beliefs. The effect sizes, as shown by the Cohen's d values, indicated a stronger relationship between grade level and the source and development dimensions of epistemological beliefs. There were also statistically significant differences between grades 9 and 12 in epistemological beliefs in terms of the source, certainty and development of scientific knowledge, but no statistically significant difference with regard to justification. These findings show that grade level was related to the source, certainty and development dimensions.
This could be interpreted, with caution, in two ways. First, since this study was carried out in the context of a learning environment, it is plausible to make interpretations in the light of the classroom context, but with restraint. The characteristics of the learning environment are likely to have shaped the students' current epistemological beliefs: previous studies have shown that the pedagogic environment to which learners are exposed contributes directly or indirectly to the development of epistemological beliefs (Schommer, 1993; Cano, 2005; Schommer-Aikins & Easter, 2006). Consequently, as students experienced the pedagogic environment, their epistemological beliefs are likely to have evolved. Secondly, since there were age differences between the grade 9 and 12 students, these may also have contributed to the difference in students' epistemological beliefs; previous studies have shown that age and level of education predict epistemic change (Schommer-Aikins et al., 2000; Schuyten, 2005). A finding of this study also indicates that grade 12 students showed more epistemic adaptiveness than grade 9 students with regard to justification, although this difference was not statistically significant. This finding was unexpected; however, it might be attributed to the pedagogic experiences to which the students were being exposed. At grade 9 level in the Kenyan system of education, students are being introduced to the nature of science in the different domains of Biology, Chemistry and Physics. At the introductory level in these domains, the students are taken through the requirement that claims to scientific knowledge need to be justified through the process of experimentation (Kenya Institute of Education, 2002). This introduction of grade 9 students to the practical aspects of science could have contributed to their epistemic adaptiveness being proximal to that of grade 12 at this stage. This also supports previous findings that the domains of epistemological beliefs have a non-symmetrical developmental trajectory. These findings are consonant with those of Schuyten (2005) with regard to certainty of knowledge; of Fatma (2009) with regard to the source, certainty and development of scientific knowledge; of Aydemir et al. (2013) with regard to the source and certainty of scientific knowledge; and of Roya and Abdorreza (2014) and Shaakumeni (2019) with regard to certainty of knowledge. The finding of the current study departs from the findings of Aydemir et al. (2013) with respect to the justification and development of scientific knowledge, which indicated that students' epistemological beliefs become less adaptive with an increase in grade level. The divergence of this finding might be related to the specific classroom context in which this study was undertaken and to the Turkish cultural context of the earlier study (Buehl & Alexander, 2006; Greene et al., 2016; Muis et al., 2006). The finding on the relationship between science epistemological beliefs and science achievement indicated that the certainty and justification dimensions of epistemological beliefs were significant predictors of science achievement, while source and development were not significant predictors. This means that the students' beliefs in the uncertainty of knowledge and in the significance of experiments in justifying knowledge claims positively predicted science achievement.
The finding also means that the students' beliefs about where scientific knowledge comes from and their beliefs about whether or not science is evolving did not predict their science achievement. Even though certainty and justification predicted science achievement, the correlational and cross-sectional nature of the study precludes making strong inferences. The findings are therefore carefully interpreted in relation to students' curricular experience in the Kenyan secondary school context in two ways. First, there are usually no multiple-choice questions in examinations that would focus the students' thinking on one correct answer, as is the case in primary schools. Secondly, the science curriculum at the secondary school level emphasises data collection, making observations and making claims using evidence. It is possible that this kind of pedagogic setting contributed to epistemic competence in this dimension. The non-predictive ability of source and development in this study was unexpected; it had been anticipated that epistemic adaptiveness would be associated with science achievement. The present findings indicate that the relations between epistemological beliefs and achievement may be more complex than anticipated. This could also be interpreted in the light of the independent development of the dimensions of epistemological beliefs, as elucidated by Schommer (1990): students may believe that scientific knowledge resides in authorities and at the same time believe that science is an evolving subject. This finding concurs with other studies that have not revealed positive relationships for all the dimensions of epistemological beliefs. For example, Shaakumeni (2019) revealed that certainty and justification positively predicted science achievement, whereas source and development negatively predicted science achievement, and Schommer (1990) found that beliefs about 'certain knowledge' predicted the absoluteness of conclusions (an achievement measure). The current findings depart from other studies: for instance, Ricco et al. (2010) found that the dimension of 'knowledge as developing' made significant contributions to science grades (achievement), and Schuyten (2005) found that the dimension of 'development of knowledge' was positively related to students' science grades (achievement). The findings on the relationship of the certainty and justification dimensions with science achievement in this study further support earlier findings that epistemic competence contributes to higher academic achievement. The findings also support what the extant literature indicates, namely that different dimensions of epistemological beliefs correlate differently with academic achievement; this could be attributed to differences in the cultural and classroom contexts in which the studies are done. The differences in the positive relationships between different dimensions and science achievement further support the multidimensional and asymmetrical (asynchronous) developmental characteristics of epistemological beliefs that have been documented in the literature.

Conclusions and recommendations

The findings of the study point to the following conclusions. First, the science epistemological beliefs of the students in grade 12 were more adaptive than those of grade 9. Secondly, the epistemological belief dimensions of certainty and justification were predictors of science achievement.
The study recommends the following for practice and research. First, for practice, there is a need for teachers to deliberately create learning environment experiences or contexts that engender epistemic competence. Barger, Perez, Canelas, and Linnenbrink-Garcia (2018) found that a constructivist learning environment can shape epistemic beliefs and serve as a way of fostering epistemic change. This can be done by providing for knowledge construction out of the active, sensual and perceptive experiences of the learner (Kim, 2005). In a constructivist learning environment, the learners are engaged by teachers in inquiry activities in an effort to explore phenomena and to construct and reconstruct models in the light of the results of scientific investigations (Peffer & Ramezani, 2019). On the other hand, since there was a relationship between grade level and epistemic change, there is a need to provide appropriate learning experiences within the grades to enable the learners to adapt their thinking to the norms of the classroom context at that time. Secondly, in terms of research, more studies need to be done on specific science disciplines. Despite the fact that all science disciplines share certain commonalities in the path of knowledge generation, there are intra-discipline variations that can engender different pathways of epistemic development. These variations can only be elicited through research. In the same vein, more multi-method studies based on longitudinal research designs need to be done to build a more comprehensive picture of the relationships between science epistemological beliefs, grade level and science achievement. Limitations The utilisation of self-report measures that focus the learners on particular aspects of epistemological beliefs may have revealed more epistemic competence than other instruments, such as interviews, would have. The sampling was also done purposively in two co-educational schools, which limits the generalisability of the findings to the wider populations within the county. Generalisations to wider populations would also require more extensive studies across different schools. Lastly, the correlational and cross-sectional design of this study prevents making strong inferences and only allows for a restrained interpretation of the findings, as has been alluded to earlier. Disclosure statement The author declares no competing interests in this study.
Rare Benign Tumors and Tumor-like Lesions of the Hand without Skin Damage—Clinical, Imagistic and Histopathological Diagnosis, Retrospective Study Background: The broad spectrum of diagnoses and clinical features of hand tumors and the absence of pathognomonic signs often lead to an inaccurate or delayed diagnosis. However, only a few reports have comprehensively referenced the diagnosis and clinical features of hand tumors. This study intends to highlight the clinical, imaging and histological characteristics of uncommon hand tumors or tumor-like lesions. Methods: In this retrospective study, we report a series of 80 patients diagnosed with rare hand tumors and tumor-like lesions without skin damage. Age, gender, tumor location, imaging examinations and clinical and laboratory findings were analyzed. The histopathological exam established the final diagnosis. Surgery was indicated and performed in all cases. Results: This study included: neurofibroma, glomus tumor, lipoma, schwannoma, epidermal inclusion cyst and idiopathic tenosynovitis with "rice bodies." We have described the clinical, imagistic and histopathological particularities of these tumors. Surgical management included the complete removal of tumors, with no recurrence recorded within two years and overall high patient satisfaction. The most common findings were lipomas and the rarest neurofibromas. Conclusions: To optimize the care of hand tumors and reduce diagnostic and treatment errors, knowledge of hand tumor types and their clinical and laboratory characteristics is necessary for every surgeon. Introduction The hands, a symbol of action and activity, are highly sophisticated and specialized body parts. The hand receives, holds, gives, expresses communion and prays. Both hands represent only 2% of the total body surface area and only 1.2% of the total body weight [1]. However, hand tumors account for 15% of all soft tissue tumors [1,2]. Moreover, 95% of the soft tissue tumors on the hand are benign and tumor-like lesions [2,3]. However, benign tumors and tumor-like lesions of the hand have a low incidence compared to other anatomical sites, some of them being included in the category of rare tumors; therefore, diagnosis and surgical treatment require good knowledge and skills [3,4]. A delayed diagnosis often results in delayed and consequently difficult treatment, both for the patient and the surgeon [1,5]. With tumor growth, there is significant tissue destruction that often requires a more complex surgical approach [1,3]. This often results in increased morbidity. Of the 80 patients included in the study, 50 were female (62.5%) and 30 were male (37.5%). The following diagnoses were made: neurofibroma (2 cases/2.5%), glomus tumor (15 cases/18.75%), lipoma (30 cases/37.5%), schwannoma (22 cases/27.5%), epidermal inclusion cyst (3 cases/3.75%) and idiopathic tenosynovitis with rice bodies (8 cases/10%) (Figure 1). The epidemiological data and tumor history are listed in Table 1. Clinical and macroscopic aspects of the tumors in the study group are listed in Table 2. The findings of the imaging investigations of the tumors in the study group are listed in Table 3. Histopathological and immunohistochemical features of the tumors in the study group are listed in Table 4.
Neurofibroma First described by Von Recklinghausen, neurofibromas are benign peripheral nerve sheath tumors most commonly associated with neurofibromatosis [10]. Common sites include the fingers and toes, and the tumors typically develop asymptomatically as slowly enlarging soft lesions in females in the second or third decade of life [13][14][15]. Subungual localization is very rare, with only 11 cases reported in the literature, most often with nail deformation [10]. In the absence of overt symptomatology or pathognomonic signs, this rare type of tumor located in the nail bed is difficult to diagnose; thus, establishing a differential diagnosis is essential (glomus tumors, subungual hemangiomas). Glomus tumors can be excluded in the absence of pain. A plain, two-view X-ray may reveal possible bone imprinting (distal phalanx) and exclude the presence of bone tumors. Nail bed neurofibroma was diagnosed in 2 of the 80 studied cases. In both cases, the patients were female, with nail bed involvement of the second digit (D2) in one case and the volar aspect of the second phalanx of the fifth digit (P2D5) in the other case (Figure 2). The two patients were 42 and 51 years old, respectively. The tumors had progressed over 2 years and 3.5 years, respectively. In both cases, the patients did not report any clinical signs. Plain X-ray and ultrasonography were performed, revealing the presence of a hyperechoic nodular tumor with a polycyclic profile and low Doppler signal. In one of the cases, bone imprinting at the level of the distal phalanx was observed. No biopsy was performed. In both cases, surgery was performed under the Walant technique. After the surgical removal of the tumor, the excised piece was examined histopathologically and immunohistochemically (Figure 3). Immunohistochemical staining for S100 protein (Figure 4) and CD34 was positive.
The differential diagnoses considered were lipoma, schwannoma and giant cell tumor of the synovial sheath. Tumor consistency and mobility on the deep planes were not suggestive of bone tumor; radiologic findings excluded this diagnosis. The immediate and long-term postoperative results were among the best, with the resumption of total activity and a satisfactory aesthetic appearance in the case of nail bed neurofibroma. No recurrences were recorded within two years. Glomus Tumors Glomus tumors are uncommon benign tumors involving the glomus body, an apparatus involved in the thermoregulation of cutaneous microvascularization. Location on the volar aspect of the hand is reported in only 10% of all cases with glomus tumor of the hand [16]. Glomus tumors present with a classic triad of severe pain, point tenderness and cold sensitivity, and have a relatively short history of disease. In the study group, we recorded 13 female patients and 2 males with a history of disease of 6 months to 3 years. High-intensity pain to the touch and the positive clinical tests triad (Love's test, cold sensitivity test, Hildreth's test) were recorded in all cases. The imaging examination included face and profile X-rays and ultrasound (in 10 of the 15 cases). The remaining five patients could not tolerate being touched with the ultrasound probe. MRI was not performed in any of the cases for economic reasons. Surgery was indicated in all cases and was performed under Walant anesthesia.
For lesions with nail bed involvement, surgical management involved complete tumor ablation by initially removing a portion of the nail blade/plate, followed by nail bed incision, nail bed suturing and nail plate repositioning for protection. In all the other cases, complete tumor ablation was performed without preoperative biopsy (Figure 4). Histopathological and immunohistochemical examinations confirmed the diagnosis (Figure 5). No intraoperative complications, incidents or accidents were recorded. All patients enjoyed a fast and good quality recovery, without motor or sensory sequels, with maximum patient satisfaction. Schwannoma Schwannomas comprise about 5% of all benign soft tissue lesions, accounting for 27.5% of our study patients. Schwannomas can occur in people of any age and have no gender predisposition. Schwannomas usually manifest as solitary, slow-growing, encapsulated, painless lumps that persist long before diagnosis, leading to a more challenging treatment [17]. As the tumor grows and gradually compresses the nerve, pain, paresthesia and other symptoms may appear. We report 22 cases of schwannomas with different anatomical sites and nerve involvement. We did not record any patients with type 2 neurofibromatosis association or multiple lesions.
The following clinical signs supported the diagnosis: a slow-growing tumor mass located along a peripheral nerve (large nerve trunk or even common or collateral digital nerves), of relatively hard consistency, sometimes painful spontaneously or especially on palpation, with a positive Tinel sign, and mobile along the plane perpendicular to the nerve course, a suggestive sign of the presence of a schwannoma. We recorded a female/male ratio of 19:3 with an average age of 54.59 years. We identified eight cases on the digital collateral nerves of the long fingers, four on the digital collateral nerves of the thumb, six on the common digital nerves and four cases on the palmar median nerve. Imaging investigations consisted of two-view X-rays (to rule out a bone tumor), which revealed a minimal bone impression in two cases. Ultrasonography confirmed a nerve trunk tumor in 11 of the 22 cases. In 14 cases, the MRI examination (which was not performed for all cases due to economic reasons) confirmed the diagnosis of schwannoma. Surgery was indicated in all cases and performed under local anesthesia (Walant) in patients with finger tumor localization and under loco-regional anesthesia (axillary block) for palmar lesions with median nerve involvement. Under the operating microscope, the tumor was enucleated, and the nerve fibers were kept intact to avoid postoperative neurological complications (sensitivity disorders) (Figure 6). Histopathological examination confirmed the diagnosis of schwannoma, its characteristic features being detected: the presence of two areas of cellularity, Antoni A (compact hypercellularity) and Antoni B (myxoid hypocellularity). Immunohistochemical determinations were made for 15 out of 22 patients.
In all cases, S100, CD34 and collagen IV were positive, suggestive of the diagnosis of schwannoma; therefore, the diagnosis of peripheral malignant nerve trunk/sheath tumor was excluded (Figure 7). The 2PD and SW tests were within normal range. No recurrence was recorded in any of the cases 2 years following surgery. Patient satisfaction was maximum in all cases, according to the MHQ scale. Lipoma Although lipoma is the most common form of benign soft tissue tumor, hand localization is rare, accounting for approximately 1% of the tumors in this region [4]. In our study group, out of the 80 patients included, 30 (37.5%) were diagnosed with lipoma, with a mean age of 53.06 years; 56.66% were male patients. We identified lipomas with different hand localizations; 13.3% of the cases were reported on the dorsal aspect of the hand, a very rare localization. Development typically begins with an initial insidious growth period followed by a prolonged and latent maintenance state (between 1 and 8 years). Hand lipomas are often asymptomatic and only come to clinical attention once they grow large enough to induce mechanical impairment or if they are of cosmetic concern (Figure 8). Ultrasonography was performed in all study cases and identified a well-defined homogeneous hyperechoic mass, thus confirming the diagnosis. MRI was performed on lipomas larger than 5 cm (13 cases) to detect possible malignancy signs. In all cases, complete lipoma ablation was performed. A histopathological examination confirmed the diagnosis. We did not identify any malignant component in any of our cases. The functional results were satisfactory in all cases, with full socio-professional reintegration and maximum patient satisfaction. No relapses were reported two years following surgery. Epidermal Inclusion Cyst Epidermal inclusion cysts are painless, benign, slow-growing soft tissue tumors that often occur months to years after a traumatic event [18]. We report three cases, all males, with a mean age of 45 years, manual workers (in agriculture), with a prolonged and latent tumor maintenance state of 4 to 12 years. No patient reported any association between a history of trauma and the development of the tumor, nor did we detect the papillomavirus. The inclusion cysts in these patients were located in the palm and the volar aspect of the proximal phalanx of the middle finger. The patients presented to the clinic with a painless (spontaneously and on palpation), slow-growing tumor mass, relatively immobile on the deep planes.
Sensory symptoms were absent. In one case, the large tumor size and the mid-palmar localization induced reduced grip and pinch strength. The Posh sign was negative in all cases (differential diagnosis of lipoma). The radiological examination did not reveal osteoarticular changes or bone impressions, but a translucent mass in the soft parts. MRI was performed for a precise diagnosis, showing the cystic nature of the mass and its intimate relations with the neighboring structures (Figure 10). Surgery was performed under loco-regional anesthesia (axillary block). A wide surgical excision area was created through incisions parallel to the palmar flexion creases (to avoid retractile scars), thus allowing the wide dissection and complete removal of the cysts (Figure 11). Histopathological examination of the epidermal inclusion cysts identified squamous epithelium and a pericystic lymphoplasmacytic and basophilic infiltrate. The cysts encapsulated loose keratin material disposed in a lamellar fashion (Figure 12).
Figure 11. Epidermal inclusion cyst: (A) preoperative aspect, (B) intraoperative aspect, (C) postoperative aspect (ten days after surgery). Postoperatively, no motor or sensory sequels were recorded in the study patients, and they enjoyed full socio-professional reintegration. Idiopathic Tenosynovitis with "Rice Bodies" Tenosynovitis with "rice bodies" unrelated to rheumatic diseases (rheumatoid arthritis, systemic lupus erythematosus, seronegative arthritis, etc.) is rarely reported in the literature, especially in large series [19]. The present study reports eight cases of idiopathic tenosynovitis with "rice bodies", accounting for 10% of the total 80 patients included in the study. Of these, 37.5% were female patients. The patients were investigated for rheumatic diseases, with negative results. In all cases, the mass was located on the volar aspect of the hand. X-ray examination reported no pathological findings. Ultrasonography described a well-defined small mass with mixed (liquid and solid) content, in two cases in intimate contact with the flexors of the fourth and fifth fingers. The cystic nature of the mass was established, but further clinical findings suggestive of tenosynovitis with rice bodies are required to confirm the diagnosis; otherwise, an epidermal inclusion cyst is considered. MRI scan revealed heterogeneous masses with hyposignal on the T2-weighted sequence and iso- and hypersignal on the T1-weighted sequence. Inside, a multitude of tiny areas with hyposignal on the T2-weighted sequence was described.
Intraoperatively, we observed a pseudotumor with a relatively thick wall containing numerous formations in the form of yellow-white coins, "rice bodies" (Figure 13). In all cases, the complete removal of the pseudotumors was performed under an axillary block (Figure 14). We did not record any intraoperative incidents or immediate or late postoperative complications. Histopathological examination of the exeresis piece showed the presence of fibrin organized in the form of rice bodies, with an acidophilic amorphous center delimited by a thin fibrous layer. Aspects of proliferative synovitis with hyperplastic and hypertrophic synovial cells, as well as a rich lymphoplasmacytic infiltrate, were also identified. Secretions were collected from the cyst content, and Ziehl-Neelsen stains were used to detect the presence of Koch bacilli or fungi, with negative results. Samples were collected to diagnose possible rheumatic or immune diseases or tuberculosis, with negative results, thus confirming the idiopathic character of the lesions. Postoperatively, from a functional point of view, the results were satisfactory, with full socio-professional reintegration. No recurrences were recorded within two years. Discussion Soft tissue tumors without skin involvement are relatively frequent in some anatomical regions but rare or very rare in the hand. Neurofibromas, benign tumors of nerve origin, account for approximately 5% of soft tissue tumors [10,12]. However, their location at the hand level is rare [14]. Nail bed location is extremely rare, with only 11 such cases reported in the literature [10]. The clinical findings are nonspecific in these cases, and pathognomonic clinical signs are absent [14]. Good knowledge of the various types of hand tumors can avoid diagnostic errors or delays, as even MRI findings are nonspecific (impossible to differentiate from schwannoma). To confirm the diagnosis, histopathological and immunohistochemical examinations are required. Immunohistochemical examinations found the S100 protein positive, with a much higher intensity in schwannomas. The treatment is surgical, consisting of complete tumor ablation [10,12,14]. The outcome of surgical treatment is excellent, without relapses, when the tumor is completely removed [10]. Although schwannoma is the most commonly diagnosed tumor of peripheral nerve origin, accounting for approximately 5% of all soft tissue tumors, its location in the hand is rare [20]. Our 22 cases reported in a series of 80 patients with soft tissue tumors make up the largest group reported up to now in the literature. The absence of specific clinical signs is characteristic of this type of tumor, but certain features can differentiate it from other tumors [2,21]. Tumor mobility in the transverse plane, with its significant limitation in the longitudinal plane, can guide the diagnosis [2]. Although the Tinel sign is present in 5% to 75% of schwannoma cases, in our study group it was present in 100% [2,22]. The preoperative presence of neurological signs is suggestive of an unfavorable prognosis [2]. MRI examination can make the diagnosis of schwannoma in 90% of cases [22,23]. However, no imaging study has a specificity above 95% in its diagnosis [23].
Not infrequently, its presence in the hand is not differentiated from lipoma or another tumor-like formation. Surgical enucleation of the tumor is the gold standard of treatment, preserving the nerve fibers and, at the same time, avoiding recurrence [4]. The present study did not record any postoperative neurological complications or recurrences within 2 years. The glomus tumor accounts for 1% to 4% of all hand tumors, with 65% located in the subungual region [16,24,25]. It can be solitary or multiple and is more commonly diagnosed in women, as the present study confirms [16,25]. Our report of 15 cases of glomus tumors of the hand in a group of 80 patients is also one of the largest case series reviewed in the literature. Unlike the other tumor types, glomus tumor is diagnosed by a hallmark symptomatic triad identified with Love's pin test, Hildreth's test and the cold sensitivity test [16]. The cold sensitivity test has a specificity of 100%, the most relevant and severe symptom of glomus tumors being intense pain on touch. However, cases of misdiagnosis have been reported [24,25]. MRI examination can be useful in making the diagnosis but has no features specific to this type of tumor [26]. In the current study, MRI was not performed for economic reasons, and only a few patients could tolerate ultrasonography (USG) due to intense pain at the touch. Diagnostic certainty is obtained by immunohistochemistry, with positivity for α-SMA, MSA and h-caldesmon in 99%, 95% and 87% of cases, respectively. In glomus tumors, immunostaining for S100 is negative in the vast majority of cases [27]. Surgical outcomes are good, with high patient satisfaction and the absence of symptoms. Although lipoma is the most common benign tumor, finger lipomas are rare, with a reported incidence of 1% [4,28]. Of the 30 reported lipoma cases in our 80-patient study group, 17 were lipomas of the phalanx and long fingers, with males being most commonly affected; a traumatic event was invoked in only 9 cases. As with the previous tumors, the clinical signs are not specific. The differential diagnosis includes any of the tumor categories detailed in the current study, as well as many other types of soft tissue tumors or tumor-like lesions [29,30]. The Posh sign can guide the clinical diagnosis, as it is always positive [4]. MRI examination has an essential role in diagnosing lipoma, with a predictability of 94%, and, most importantly, it can detect a possible malignant component of the tumor [4]. The recurrence rate of lipomas (5%) is strictly related to partial tumor removal [4,29]. In our study group, no relapse within two years was recorded. Epidermal inclusion cyst is a fairly common lesion, accounting for approximately 90% of cystic tumors [18,31]. Just like the other tumors included in the study, the epidermal inclusion cyst is rarely found on glabrous skin and especially in the palm [18]. In the literature, relatively few cases of this type of lesion are reported, especially of the hand [32]. Most frequently, the presence of an inclusion cyst is related to trauma or the presence of HPV [18]. In the three cases reported by us, neither of these was confirmed by patient history and laboratory tests. Being a painless, slow-growing tumor, it can reach relatively large sizes (4.5 cm in one of our study cases) when located in the palm.
The lack of specific symptoms and pathognomonic clinical signs creates a challenging differential diagnosis, including lipoma, schwannoma, giant cell tumor, other types of cysts, etc. [32][33][34]. MRI is recommended to guide the diagnosis [18]. In our study, MRI was performed in only one of the cases with palmar location (for economic reasons); in the other cases, ultrasonography was the only imaging examination performed [32]. Surgery aimed at the complete ablation of the cyst. Histopathology confirmed the diagnosis and identified stratified squamous epithelium in the cyst wall and large amounts of keratin [18]. In 1895, Reise was the first to describe "rice bodies" present in the joints of patients with tuberculosis [11,35]. Later, their presence was also associated with other diseases (seronegative arthritis, lupus erythematosus, rheumatoid arthritis, etc.), primarily affecting the bursae and joints [11,35]. The presence of tenosynovitis with "rice bodies" unrelated to any other rheumatic condition, in an idiopathic context, is rarely reported in the literature [11]. The literature reports include a series of reviews with 1 to 9 cases (Sugano, 1 case; Forse, 5 cases; Yamamoto, 9 cases; Cegarra and Reda, 1 case each; etc.) [11,35]. Our study reports a series of 8 cases of tenosynovitis with rice bodies. In the absence of specific symptoms, the differential diagnosis includes almost any type of tumor, especially in the absence of rheumatic diseases, tuberculosis or the presence of HPV (which would guide the diagnosis toward an epidermal inclusion cyst) [11]. As in the case of any other type of tumor, neurological symptoms can appear when the tumor is enlarged. MRI examination can guide the diagnosis by identifying "rice bodies" [36,37]. Intraoperative findings are characteristic, describing a large mass of rice bodies in the context of tuberculosis or rheumatic diseases; idiopathic cases are very rare and are therefore often not considered. We recorded rice body sizes ranging between 1.5 and 4 mm. The immediate and long-term results of surgical treatment were good, requiring recovery through physiotherapy when large tumor-like lesions demanded large incisions and dissections. No relapse was reported at 2 years. Even after surgery, the patients were investigated in order to detect any associated disease that could explain the presence of "rice bodies", but in all cases no pathologic association was found [11].
Awareness of the variety of rare tumors and tumor-like lesions of the hand is extremely important in establishing an early and accurate diagnosis, which leads to good therapeutic outcomes. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Preliminary Xpert® HPV testing results from a large study of women living with HIV in Rwanda. Background The Albert Einstein College of Medicine of Yeshiva University in New York, Rwanda Military Hospital and the University of Rwanda have established a research partnership, which has received funding from the US National Institutes of Health for investigation of HPV carcinogenesis, diagnosis and treatment. This includes two major studies, on cervical cancer screening and on HPV in men who have sex with men. The partnership aims to build research and laboratory capacity within Rwanda, including establishing a centre of excellence in HPV-related research. It has already led to the development of further south-to-south collaborations. This paper presents preliminary results from a cervical cancer screening study of women living with HIV in Rwanda. Study description This presentation outlined preliminary findings of research aimed at comparing the clinical performance of different screening methods and biomarkers for triage of screen-positive women living with HIV infection in Rwanda. The full protocol [1] envisages recruiting a convenience sample of >5000 consenting women, aged 30-54 years and living with HIV, to complete an administered short risk-factor questionnaire and be screened for high-risk human papillomavirus (hrHPV) using the Xpert® HPV assay (Cepheid, Sunnyvale, CA, USA), unaided visual inspection after acetic acid (VIA) and aided VIA using the Enhanced Visual Assessment (EVA) system (MobileODT, Tel Aviv, Israel). Reasons for selecting the Xpert® HPV assay for evaluation include its fast (1 hour) turnaround, ease of use, scalability, capacity for cloud-based monitoring and already high penetrance in lower- and middle-income countries. Women found to be screen-positive for hrHPV and/or unaided VIA undergo colposcopy, including the collection of two cervical specimens prior to a four-quadrant microbiopsy protocol. The colposcopy-collected specimens are tested by dual immunocytochemical staining for p16INK4a and Ki-67 (CINtec® PLUS Cytology, Ventana, Tucson, AZ, USA) and for E6 or E7 oncoprotein for eight hrHPV genotypes (HPV16, 18, 31, 33, 35, 45, 52 and 58) using the next-generation AV Avantage™ hrHPV E6/E7 test (Arbor Vita Corporation, Fremont, CA, USA). Women with a local pathology diagnosis of cervical intraepithelial neoplasia grade 2 (CIN2) or more severe (CIN2+) or a pathology review diagnosis of CIN grade 3 or more severe (CIN3+) will receive treatment. In the full study, the clinical performance and cost-effectiveness (e.g. sensitivity, specificity and predictive values) of different screening strategies and algorithms will be evaluated. Preliminary results and conclusions Among 4806 women living with HIV, we found an overall prevalence of hrHPV of 26.5%, with positivity for different HPV types across the five Xpert® assay channels as shown in Table 1.
While the overall prevalence of hrHPV declined with age (P trend <0.0001), this was not found for HPV16, nor for VIA positivity, as illustrated in Figures 1 and 2. Compared with earlier studies that enrolled women in 2005 [2] and 2010 [3], we found a lower age-specific prevalence of hrHPV, as shown in Figure 3. This may perhaps be due to improvements in HIV management and care. As expected, the highest rates of VIA positivity were found in women testing positive for HPV16 infection, followed by HPV18/45, as illustrated in Figure 4. The study is continuing and results will be compared with the clinical endpoint (CIN2+) when all pathology results become available.
Effects of GNSS Receiver Tuning on the PLL Tracking Jitter Estimation in the Presence of Ionospheric Scintillation Ionospheric scintillation is an interference characterized by rapid and random fluctuations in radio frequency signals as they pass through irregularities in the ionosphere. It can severely degrade the performance of Global Navigation Satellite System (GNSS) receivers, thus increasing positioning errors. Receivers with different tracking loop bandwidths and coherent integration times perform differently under scintillation. This study investigates the effects of GNSS receiver tracking loop tuning on scintillation monitoring and Phase Locked Loop (PLL) tracking jitter estimation using simulated GNSS data. The variation of the carrier to noise density ratio (C/N0) under scintillation with different tracking loop settings is also studied. The results show that receiver tuning has a minor effect on scintillation indices calculation. The levels of C/N0 are also similar for different PLL bandwidths and integration times. Additionally, the tracking jitter is estimated by theoretical equations and verified using the relationship with the PLL discriminator output noise, which is calculated using the post-correlation measurements. Novel approaches are further proposed to calculate a 1-s scintillation index, which enables the tracking jitter to be computed at a rate of 1 s. It is found that the 1-s tracking jitter can successfully represent the signal fluctuation levels caused by scintillation. This work is valuable for developing scintillation-sensitive tracking error models and is also of great significance for GNSS receiver design to mitigate scintillation effects. Introduction Ionospheric scintillation refers to the rapid and random fluctuations in radio frequency signal amplitude and phase. It occurs when signals traverse ionospheric irregularities. The probability of scintillation occurrence is strongly related to solar and geomagnetic activity (Basu et al., 1988). Aarons (1982) suggested that scintillation is more frequent in equatorial regions and auroral to polar regions. Scintillation is one of the most challenging error sources for Global Navigation Satellite Systems (GNSS). Because of scintillation, GNSS signals become noisier, and receiver performance is thus degraded. The effects of scintillation on the receiver take place notably at the tracking stage (Sreeja et al., 2011). On the one hand, the rapid fluctuations caused by scintillation in the incoming carrier phase will increase the Phase Locked Loop (PLL) tracking errors, which lead to a higher probability of cycle slips (Humphreys et al., 2010); when the increased tracking error exceeds the pull-in range of the discriminator, a loss of phase lock may occur (Kaplan & Hegarty, 2017). On the other hand, the amplitude fluctuations caused by scintillation may decrease the carrier to noise density ratio (C/N0); when it is lower than the receiver threshold, the Delay Locked Loop (DLL) will declare a loss-of-lock event, which causes signal loss. The increased probability of losing lock and the larger tracking errors will contribute to an increase in code and carrier phase measurement errors, leading to degraded GNSS positioning accuracy (Guo, Aquino, Veettil, et al., 2019; Pi et al., 2017). Much effort has been made to analyze and model the effects of ionospheric scintillation on GNSS receivers.
Knight and Finn (1998) studied scintillation stochastic models and developed an approach to evaluate the scintillation effects on the Global Positioning System (GPS). They pointed out that the levels of scintillation required to cause signal loss depend on the receiver tracking loop bandwidth, filter order, and discriminator type. In the study of Hegarty et al. (2001), a scintillation signal model was developed and GPS scintillation signals were generated. It was found that typical noncoherent DLLs are relatively robust to signal fluctuations caused by scintillation, while PLLs are generally more susceptible to scintillation effects. To calculate the scintillation-induced tracking error, Conker et al. (2003) proposed PLL and DLL tracking error variance equations to calculate tracking errors caused by different error sources. The 1-min scintillation indices and the 1-min averaged C/N0 were input to those equations in that study to evaluate the tracking error variances caused by scintillation. An extended model was developed by Moraes et al. (2014) to describe the effects of scintillation on GPS receivers. By processing scintillation data recorded in Brazil, the tracking error variance was estimated and receiver performance was analyzed. Vadakke Veettil et al. (2018) built statistical models to estimate the standard deviation of the receiver PLL tracking errors as a function of scintillation levels, exploiting the models in Conker et al. (2003) and Moraes et al. (2014) and using scintillation data recorded over 4 years at low and high latitudes. Guo, Aquino, and Vadakke (2019) analyzed the signal intensity fading due to scintillation. The results showed that signal fading caused by scintillation can increase the receiver tracking error. The effects of receiver PLL tracking loop bandwidth tuning on scintillation monitoring have been discussed in the literature. Orus-Perez and Prieto-Cerdeira (2011) compared the scintillation indices observed by two different commercial scintillation monitoring receivers, that is, the Novatel GSV4004B and the Septentrio PolaRxS. Results showed that both receivers present comparable performance in scintillation monitoring. Rougerie et al. (2016) studied the effects of GNSS receiver PLL bandwidth tuning on scintillation indices calculation. They suggested that receiver tuning has almost no effect on S4 and Phi60 calculation, while the cut-off frequency can affect the shape of the phase spectrum. However, these analyses focus only on scintillation indices estimation with different receivers or receiver tracking loop parameters; the effects of receiver tuning on tracking jitter estimation under scintillation are not considered. To address this limitation, a hardware simulator is used to generate GNSS signals affected by different levels of scintillation. An ionospheric scintillation monitoring receiver (ISMR) is connected to the simulator to record the GPS scintillation data. The scintillation indices and C/N0 observed for different tracking loop parameters are studied. The PLL tracking jitter is estimated using the theoretical models with receiver tuning. Discriminator output noise levels are calculated to verify the values of the estimated tracking jitters. This study focuses on (1) understanding the effects of receiver tuning on scintillation monitoring and tracking jitter estimation under scintillation; and (2) verifying the relationship between the tracking jitters estimated by theoretical models and those calculated from the PLL discriminator output.
Novel approaches are also proposed to calculate the 1-s scintillation indices, which enables the jitter to be estimated at 1-s intervals. Scintillation data processing and scintillation indices calculation are introduced next, followed by the description of PLL tracking loop models and tracking jitter estimation methods under scintillation. The experimental setup and scintillation data processing flow are then introduced. Results and discussions are provided next. Conclusions and remarks are given at the end. Ionospheric Scintillation The S4 and Phi60 indices are used to characterize amplitude and phase scintillation intensity, respectively, which relate to fluctuations in the amplitude and phase of radio frequency signals caused by ionospheric irregularities. S4 is defined as the standard deviation of the detrended signal intensity normalized by its mean over 60 s (Van Dierendonck et al., 1993; Van Dierendonck, 1999):

$$S4_{total} = \sqrt{\frac{\langle P_{det}^{2}\rangle - \langle P_{det}\rangle^{2}}{\langle P_{det}\rangle^{2}}} \tag{1}$$

where $P_{det}$ is the detrended signal intensity and ⟨·⟩ denotes the arithmetic mean. The intensity detrending is accomplished by applying a sixth-order low pass Butterworth filter to the measured intensity P, which is calculated by

$$P = I_{corr}^{2} + Q_{corr}^{2} \tag{2}$$

where $I_{corr}$ and $Q_{corr}$ are the 50 Hz post-correlation In-phase and Quadra-phase measurements, respectively. Readers interested in the intensity detrending process may refer to Van Dierendonck et al. (1993) and Van Dierendonck and Arbesser-Rastburg (2004) for more details. As the ambient noise in the receiver also causes signal intensity fluctuations, the $S4_{total}$ calculated by Equation 1 needs to be corrected to remove the ambient noise contribution. According to Van Dierendonck et al. (1993), the ambient noise contribution is given by

$$S4_{No} = \sqrt{\frac{100}{c/n_{0}}\left(1 + \frac{500}{19\,c/n_{0}}\right)} \tag{3}$$

where $c/n_{0}$ is the fractional form of C/N0, calculated by $c/n_{0} = 10^{0.1\,C/N0}$. Thus, the corrected S4 is calculated as (Van Dierendonck et al., 1993)

$$S4 = \sqrt{S4_{total}^{2} - S4_{No}^{2}} \tag{4}$$

The corrected S4 is employed to carry out the analysis in this study. Phi60 is defined as the standard deviation of the detrended carrier phase measurements over 60 s, given by (Van Dierendonck, 1999)

$$Phi60 = \sqrt{\langle \varphi_{det}^{2}\rangle - \langle \varphi_{det}\rangle^{2}} \tag{5}$$

where $\varphi_{det}$ is the detrended carrier phase. The detrending of the carrier phase measurements is realized by passing the high frequency carrier phase measurements through a sixth-order high pass Butterworth filter with a cut-off frequency of 0.1 Hz (Van Dierendonck et al., 1993; and references therein). The power spectral density (PSD) of the detrended carrier phase measurements in 1 min is calculated to characterize phase scintillation in the frequency domain. The temporal power spectrum of the detrended carrier phase is given by (Rino, 1979)

$$S_{\varphi}(f) = \frac{T}{(f_{0}^{2} + f^{2})^{p/2}} \tag{6}$$

where $f_{0}$ is the frequency corresponding to the outer scale size of the irregularities, p is the opposite of the slope of the line fitted to the PSD in log-log axes over 0.1 to 25 Hz, and T is the spectral energy density at 1 Hz, called the spectral strength. When $f \gg f_{0}$, which is usually the practical case, $S_{\varphi}(f) \approx T f^{-p}$. More details regarding the spectrum analysis of scintillation can be found in Yeh and Liu (1982) and Strangeways (2009). According to Rino (1979), the carrier phase variance is estimated by integrating Equation 6 over all frequencies. Thus, in the practical case, the phase variance is calculated by (Aquino et al., 2007; Conker et al., 2003; Rino, 1979)

$$\sigma_{\varphi}^{2} = \int_{f_{min}}^{f_{max}} T f^{-p}\, df \tag{7}$$

Performing the integration in Equation 7, the following equation is obtained (Aquino et al., 2007):

$$\sigma_{\varphi}^{2} = \frac{T}{r}\left(f_{max}^{\,r} - f_{min}^{\,r}\right) \tag{8}$$

where r = 1 − p. Equation 8 presents the relationship among $\sigma_{\varphi}$, p, and T. If the PSD is calculated using 1-min detrended carrier phase measurements, $\sigma_{\varphi}$ becomes an approximation of Phi60.
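To make the index computations above concrete, the following is a minimal Python sketch of Equations 1-5. It assumes one 60-s interval of 50 Hz post-correlation I/Q and carrier phase arrays; the variable names, the use of scipy's Butterworth design, and the zero-phase filtering are illustrative assumptions rather than the receiver's internal implementation.

```python
# Sketch of the S4 and Phi60 computations (Equations 1-5).
# Assumes 60 s of 50 Hz post-correlation I/Q and carrier phase samples.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 50.0  # post-correlation measurement rate, Hz

def s4_index(i_corr, q_corr, cn0_dbhz, cutoff_hz=0.1):
    """Noise-corrected S4 over one 60-s interval (Equations 1-4)."""
    power = i_corr**2 + q_corr**2                        # Equation 2
    # Detrend intensity: divide by a sixth-order low-pass Butterworth trend
    sos = butter(6, cutoff_hz, btype="low", fs=FS, output="sos")
    p_det = power / sosfiltfilt(sos, power)
    s4_total_sq = (np.mean(p_det**2) - np.mean(p_det)**2) / np.mean(p_det)**2
    # Ambient-noise contribution (Equation 3); c/n0 in linear units
    cn0 = 10.0 ** (0.1 * cn0_dbhz)
    s4_no_sq = (100.0 / cn0) * (1.0 + 500.0 / (19.0 * cn0))
    return np.sqrt(max(s4_total_sq - s4_no_sq, 0.0))     # Equation 4

def phi60_index(phase_rad, cutoff_hz=0.1):
    """Phi60: std of the high-pass detrended carrier phase (Equation 5)."""
    sos = butter(6, cutoff_hz, btype="high", fs=FS, output="sos")
    return np.std(sosfiltfilt(sos, phase_rad))
```

With 50 Hz data, one minute gives 3000 samples per interval, so both indices can be recomputed every minute, or over shorter windows for the 1-s variants proposed in this study.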
PLL Tracking Loop Models and Tracking Jitter Under Scintillation The PLL tracking loop is implemented in coherent GNSS receivers to match the local replica to the received signal and to provide carrier phase measurements. A simplified linear PLL tracking loop model is demonstrated in Figure 1 (Gardner, 2005; Razavi et al., 2008; Van Dierendonck, 1996). The discriminator measures the tracking error Δφ, which is the difference between the incoming phase $\varphi_{i}$ and the receiver locally duplicated phase $\varphi_{o}$. The output of the discriminator, denoted as $\delta_{\varphi}$, is filtered by the loop filter F(s). The output of F(s) is then input to the numerically controlled oscillator to provide an updated $\varphi_{o}$, which feeds back to the tracking loop and matches the incoming phase again (Braasch & Van Dierendonck, 1999; Kaplan & Hegarty, 2017). The performance of the tracking loop is indicated by the tracking jitter, i.e., the standard deviation of the tracking error Δφ. The PLL tracking jitter under scintillation is given by (Conker et al., 2003; Knight & Finn, 1998)

$$\sigma_{PLL} = \sqrt{\sigma_{T}^{2} + \sigma_{pha}^{2} + \theta_{A}^{2}} \tag{9}$$

where $\sigma_{T}^{2}$ is the thermal noise component taking amplitude scintillation into consideration, $\sigma_{pha}$ is the phase scintillation induced tracking jitter, and $\theta_{A}$ is the tracking jitter due to oscillator noise. Under amplitude scintillation, the thermal noise is enhanced in the tracking loop in relation to S4 levels. According to Conker et al. (2003), the tracking jitter thermal noise component under scintillation is evaluated as

$$\sigma_{T}^{2} = \frac{B_{n}\left[1 + \dfrac{1}{2\eta\,c/n_{0}\,(1 - 2S4^{2})}\right]}{c/n_{0}\,(1 - S4^{2})} \tag{10}$$

where $B_{n}$ and η are the PLL bandwidth and coherent integration time, which are essential parameters selected by the designer of the receiver tracking loop. The selection of $B_{n}$ and η can affect the stability and sensitivity of the PLL in response to noise (Gardner, 2005). Equation 10 shows that $\sigma_{T}^{2}$ is a function of S4 and C/N0, as well as of the PLL $B_{n}$ and η. It should be noted that the C/N0 here is the 60-s averaged value corresponding to the interval in which the scintillation index is calculated. When S4 is 0, Equation 10 becomes the standard thermal noise tracking error for carrier phase tracking, given by (Kaplan & Hegarty, 2017)

$$\sigma_{T}^{2} = \frac{B_{n}}{c/n_{0}}\left(1 + \frac{1}{2\eta\,c/n_{0}}\right) \tag{11}$$

The phase scintillation induced tracking jitter can be calculated through (Knight & Finn, 1998)

$$\sigma_{pha}^{2} = \frac{\pi T}{k\, f_{n}^{\,p-1}\, \sin\!\left(\dfrac{(2k + 1 - p)\,\pi}{2k}\right)}; \quad 1 < p < 2k \tag{12}$$

where k is the PLL loop order and $f_{n}$ is the loop natural frequency. $\theta_{A}$ is due to the instability of the oscillators in both the receiver and satellite clocks. As the GNSS satellites use stable atomic clocks, the satellite oscillator noise is negligible. The receiver oscillator noise is constant for an individual receiver. According to Irsigler and Eissfeller (2002), for a third-order PLL, $\theta_{A}^{2}$ is calculated by

$$\theta_{A}^{2} = (2\pi f_{L})^{2}\left(c_{-2}\,\frac{h_{-2}}{B_{n}^{3}} + c_{-1}\,\frac{h_{-1}}{B_{n}^{2}} + c_{0}\,\frac{h_{0}}{B_{n}}\right) \tag{13}$$

where $f_{L}$ is the carrier frequency; $h_{-2}$, $h_{-1}$, and $h_{0}$ are clock parameters determined by the type of oscillator; and $c_{-2}$, $c_{-1}$, and $c_{0}$ are dimensionless coefficients that depend on the loop order (Irsigler & Eissfeller, 2002). The ISMR used in this study is implemented with an oven-controlled crystal oscillator. According to the clock parameters given by Irsigler and Eissfeller (2002), $\theta_{A}$ is calculated as a function of $B_{n}$ on the GPS L1 band, as Figure 2 shows. $\theta_{A}$ is seen to be much higher when $B_{n}$ = 5 Hz compared with 10 and 15 Hz. Equation 9 has been used in previous studies to evaluate the effect of scintillation on receiver tracking loops (Aquino et al., 2009; Sreeja et al., 2011; Strangeways et al., 2011; Vani et al., 2019). From Equations 9, 10, and 12, it can be seen that the rate of the scintillation indices determines the rate of the estimated tracking jitters.
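The jitter model in Equations 9-12 is straightforward to evaluate numerically once the 60-s S4, C/N0 and spectral parameters are available. The sketch below mirrors the component equations quoted above; the function and variable names are illustrative, and the oscillator term theta_a is passed in as a precomputed constant (for example, read off Figure 2) instead of being evaluated from Equation 13.

```python
# Sketch of the PLL tracking-jitter model (Equations 9-12).
import numpy as np

def sigma_thermal(cn0_dbhz, s4, bn, eta):
    """Thermal-noise jitter under amplitude scintillation (Equation 10),
    in radians; the Conker et al. (2003) form requires S4 < 1/sqrt(2)."""
    cn0 = 10.0 ** (0.1 * cn0_dbhz)
    num = bn * (1.0 + 1.0 / (2.0 * eta * cn0 * (1.0 - 2.0 * s4**2)))
    return np.sqrt(num / (cn0 * (1.0 - s4**2)))

def sigma_phase_scint(T, p, fn, k=3):
    """Phase-scintillation jitter (Equation 12), radians; needs 1 < p < 2k."""
    denom = k * fn**(p - 1.0) * np.sin((2.0 * k + 1.0 - p) * np.pi / (2.0 * k))
    return np.sqrt(np.pi * T / denom)

def pll_jitter(cn0_dbhz, s4, T, p, fn, bn, eta, theta_a):
    """Total jitter combining the three components as in Equation 9."""
    return np.sqrt(sigma_thermal(cn0_dbhz, s4, bn, eta)**2 +
                   sigma_phase_scint(T, p, fn)**2 +
                   theta_a**2)
```

Fed with 60-s values of C/N0, S4, T and p from the data set, and with f_n derived from the configured B_n and loop order, the last function returns the jitter that Equation 9 gives for that interval.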
The ISMR used in this analysis applies an arctangent discriminator. Thus, the PLL discriminator output is calculated as (Kaplan & Hegarty, 2017)

$\delta_\varphi = \arctan\!\left(\dfrac{Q_{corr}}{I_{corr}}\right)$,  (14)

and the standard deviation of the discriminator output noise is given by

$\sigma_{\delta\varphi} = \sqrt{\langle \delta_\varphi^2 \rangle - \langle \delta_\varphi \rangle^2}$.  (15)

On the other hand, according to Razavi et al. (2008), the discriminator output noise can be considered the sum of a thermal noise term $\sigma_w^2$ and a correlated noise term $\sigma_c^2$:

$\sigma_{\delta\varphi}^2 = \sigma_w^2 + \sigma_c^2$.  (16)

In this study, $\sigma_c^2$ is mainly due to oscillator noise and scintillation effects. From Equation 16, $\sigma_c$ is calculated by

$\sigma_c = \sqrt{\sigma_{\delta\varphi}^2 - \sigma_w^2}$.  (17)

It should be noted that the thermal noise in Equation 16 is the noise in the output of the discriminator, which is propagated through the closed-loop noise time-update function to the thermal noise in the tracking error variance, given by (Groves, 2013; Van Dierendonck et al., 1992)

$\sigma_{\Delta\varphi,w}^2 = 2 B_n \eta\, \sigma_w^2$.  (18)

With Equations 16, 17, and 18, the tracking error can be estimated using the discriminator output by

$\sigma_{\Delta\varphi\_discri} = \sqrt{2 B_n \eta\, \sigma_w^2 + \sigma_c^2}$.  (19)

Equation 19 offers an alternative way to estimate the PLL carrier phase tracking jitter, and it can help to validate the values of the tracking jitter estimated using Equation 9.
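A sketch of this discriminator-based route follows. It assumes the per-sample thermal-noise variance sigma_w^2 is modelled from C/N0 such that 2*B_n*eta*sigma_w^2 reproduces Equation 11; this modelling choice and all names are our assumptions, not the receiver's documented behaviour.

import numpy as np

def sigma_w_sq(cn0_dbhz, eta_s):
    """Assumed per-sample thermal noise variance; 2*Bn*eta*sigma_w^2 recovers Equation 11."""
    cn0 = 10 ** (0.1 * cn0_dbhz)
    return (1.0 / (2 * eta_s * cn0)) * (1 + 1.0 / (2 * eta_s * cn0))

def jitter_from_discriminator(i_corr, q_corr, cn0_dbhz, bn_hz, eta_s):
    """sigma_dphi_discri per Equations 14-19, from 50 Hz I/Q over one interval."""
    delta = np.arctan(q_corr / i_corr)              # arctangent discriminator (Equation 14)
    sig_dphi_sq = np.var(delta)                     # discriminator output noise (Equation 15)
    sig_w_sq = sigma_w_sq(cn0_dbhz, eta_s)
    sig_c_sq = max(sig_dphi_sq - sig_w_sq, 0.0)     # correlated part (Equations 16-17)
    return np.sqrt(2 * bn_hz * eta_s * sig_w_sq + sig_c_sq)   # Equations 18-19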
Experimental Setup and Data Processing Flow

The experimental setup used to generate the scintillation data and the data processing flow are described in this section. As Figure 3 shows, a Spirent GNSS signal simulator (GSS8000), available at the University of Nottingham, is used to generate GPS L1 signals. A user command file (.ucd file) is activated in the simulation to superimpose scintillation effects on the signals. The .ucd file used in this study was generated from a 2-hr real-life scintillation data set collected at Presidente Prudente, a low-latitude scintillation monitoring station near the geomagnetic equator in Brazil. A Septentrio PolaRxS Pro receiver is connected to the GSS8000 simulator to record the simulated GPS L1 data. The PolaRxS receiver is a multifrequency, multiconstellation receiver dedicated to ionospheric monitoring (www.septentrio.com/en/support/polarx/polarxs). The receiver can output 50 Hz I_corr and Q_corr measurements and carrier phase data, and its tracking loop bandwidth and coherent integration time can be configured by the user through a graphical user interface. In this analysis, a total of five simulations are carried out with different configurations of B_n and η, as summarized in Table 1. Case 3 has the same parameters as the default receiver configuration, and clean data are generated in Case 5 for comparison. Additionally, when the .ucd file is activated in the simulation, the same scintillation scenario is applied to all visible satellites; therefore, only one satellite is selected for the analysis. The 50 Hz I_corr and Q_corr measurements logged by the receiver in each case are used to calculate the signal intensity, which is then detrended to calculate S4. The 50 Hz carrier phase measurements are detrended directly, and Phi60, p, and T are calculated thereafter. The PLL tracking jitter is finally estimated, and comparisons are made for the different tracking loop parameters. It should be noted that the C/N0 values logged by the receiver are used to carry out the analysis in this work.

Scintillation Index Calculation With Receiver Tuning

The amplitude and phase scintillation indices calculated under the different tracking loop bandwidths and coherent integration times are presented first. Figure 4 shows the variation of S4 and Phi60 with time. Both S4 and Phi60 increase between the 5th and 20th minute and between the 40th and 60th minute. The strongest amplitude scintillation occurs at the 50th minute, when S4 is around 0.62, while the largest Phi60 is observed at the 14th minute, with a value of around 0.50 rad. Additionally, the S4 indices calculated with different B_n and η differ very little, which indicates that neither loop bandwidth nor integration time has an appreciable effect on the S4 calculation; this agrees with the conclusions of Rougerie et al. (2016). Meanwhile, there are slight differences in Phi60 when B_n and η differ, especially when Phi60 is higher than 0.2 rad. In Case 5, when scintillation effects are not added in the signal simulation, both S4 and Phi60 remain at extremely low levels.

The phase scintillation spectral slope p is then calculated for the different tracking loop bandwidths and integration times, as shown in Figure 5. When the signal is affected by scintillation, the values of p are generally higher than 2, and increases in p can be seen as scintillation levels rise between the 40th and 60th minute. When η is set to 10 ms, the p values of the tracking loops with B_n of 5, 10, and 15 Hz are close, which means that the tracking loop bandwidth has little impact on the estimation of p. By contrast, when the integration time is increased to 20 ms, clearly higher values are seen, especially when scintillation levels are weak between the 70th and 115th minute. Moreover, p values remain below 2 throughout Case 5, which further suggests that the presence of scintillation increases the p value. The averaged p values for each case are summarized in Table 2. The effect of scintillation on the shape of the PSD curves and on the calculation of p is explained in detail next.

To further understand the effects of ionospheric scintillation and receiver tuning on the carrier phase spectrum, the PSD of the detrended carrier phase measurements is studied. Figure 6 gives an example of the calculated PSD curves for two different minutes with the receiver default settings, i.e., B_n = 15 Hz and η = 10 ms. The linear fitting functions of the PSD curves over 0.1 to 25 Hz on log-log axes are also included in the figure. When there is no scintillation, in the 116th minute, the phase power density curve (in pink) is relatively flat between 0.1 and 5 Hz, and a large part of the spectral energy lies at high frequency. By contrast, when phase scintillation is present, as in the 14th minute, the PSD curve (in blue) shifts clearly upward, resulting in increased spectral energy. The spectral energy between 0.1 and 5 Hz also increases significantly; thus the curve has a steeper slope and a larger p value. It should be noted that for both curves, the energy of the phase spectrum remains at a low level when the frequency is below 0.1 Hz; this is due to the high-pass filter, which uses a cut-off frequency of 0.1 Hz. Figure 5 shows that the p values are not significantly influenced by the change in PLL bandwidth; however, an increase in integration time correspondingly increases the values of p.
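The p and T estimation used throughout these results can be reproduced with a straight-line fit on log-log axes, as sketched below for one minute of detrended 50 Hz phase. This is a minimal illustration; the Welch segment length and the exact band edges within the 0.1-25 Hz range given in the text are our choices.

import numpy as np
from scipy.signal import welch

def spectral_params(phase_det, fs=50.0):
    """Spectral slope p and strength T (PSD at 1 Hz) from one minute of detrended phase."""
    f, psd = welch(phase_det, fs=fs, nperseg=1024)
    band = (f >= 0.1) & (f <= 25.0)
    slope, intercept = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
    return -slope, 10.0 ** intercept   # p = -slope; T = 10**intercept since log10(1 Hz) = 0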
To analyze the impact of η on the calculation of p, the PSD curves for the 116th minute, when Phi60 is around 0.02 rad, and for the 14th minute, when Phi60 is around 0.50 rad, are calculated with B_n set to 15 Hz and η set to 10 and 20 ms, respectively, as shown in Figure 7. When the scintillation effect is minor (left panel), the PSD curves of both cases are quite similar at low frequency, while at high frequency, the spectral energy is reduced by increasing the integration time from 10 to 20 ms; this results in a steeper slope of the line fitted between 0.1 and 25 Hz. However, in the right panel, the PSD curves for the different integration times present similar patterns across the whole frequency range, so the slopes are close.

C/N0 Variation Under Scintillation

C/N0 is a crucial indicator of signal strength and quality. According to Equation 11, a decrease in C/N0 increases the thermal-noise-induced tracking jitter. In the presence of scintillation, particularly amplitude scintillation, C/N0 is attenuated to different extents due to signal intensity fading (Seo et al., 2009), which further increases the tracking jitter. In this section, the C/N0 variation under scintillation with receiver tracking loop tuning is studied. Figure 8 presents the receiver output C/N0 as a function of time for the different tracking loop bandwidths and integration times. In the case of clean data, the C/N0 values, marked by magenta pentagrams, increase gradually over time owing to the increase in satellite elevation, whereas when scintillation is present, downward tendencies are seen during scintillation occurrence. It can thus be concluded that amplitude scintillation decreases the signal C/N0 to different extents. Additionally, the C/N0 values in the figure are quite close to each other when the PLL is configured with different B_n and η, which means that receiver tuning has a minor effect on the C/N0 computation. It should be noted that the C/N0 values used here are the 1-min averaged values logged by the receiver. The C/N0 deviation between Cases 3 and 5 can be calculated by

$\Delta C/N0 = C/N0_{Case\,5} - C/N0_{Case\,3}$.  (20)

In these two cases, both tracking loops use the receiver default configuration, so the difference in C/N0 is mainly caused by scintillation effects. Figure 9 shows the C/N0 deviations as a function of S4. When the scintillation level is weak, i.e., S4 < 0.3, the deviations are distributed near 0; however, as S4 increases, the C/N0 deviation tends to increase, which further indicates the attenuating effect of amplitude scintillation on C/N0. A statistical model could be established to describe the relationship between C/N0 decreases and S4 levels given enough samples; this will be part of a future study. As 1-s C/N0 values are also available from the PolaRxS receiver outputs, the variation of C/N0 within 60 s, i.e., one scintillation index period, is analyzed. Figure 10 presents an example of the 1-s C/N0 output in the 51st minute, when strong scintillation occurs, with S4 reaching 0.63. The C/N0 values for the different tracking loop parameters under scintillation are quite similar, which again confirms that receiver tracking loop tuning does not significantly influence the C/N0 calculation under scintillation. The C/N0 values of the scintillation-affected signals fluctuate dramatically within 1 min: the highest C/N0 is more than 50 dBHz, much higher than the average value of around 46 dBHz.
Meanwhile, the C/N0 can drop to almost 35 dBHz, which is far below the nominal value and would probably cause serious receiver tracking problems. By contrast, when clean data are simulated, the C/N0 remains relatively stable at around 46 dBHz over the whole minute. It can thus be seen that using the 1-min averaged C/N0 to represent the signal strength and quality under scintillation is not accurate enough, as the signal intensity may change significantly within 1 min.

Estimated PLL Tracking Jitters

The previous sections analyzed how receiver tracking loop tuning affects the calculation of the scintillation indices. In this section, the PLL tracking jitter estimated using Equation 9 with different tracking loop bandwidths and integration times is presented. As θ_A is considered constant once the tracking loop bandwidth is defined, only the analyses of σ_pha and σ_T are presented. The evaluated PLL σ_pha and σ_T under scintillation are shown in Figure 11; for comparison, the tracking jitter of the clean signals (Case 5) is also included. σ_pha is generally at low levels in all cases, except for one epoch at the 14th minute, when Phi60 suddenly jumps to 0.5 rad and σ_pha exceeds 0.06 rad in the first four cases. Thus, it might be concluded that phase scintillation will not cause serious tracking problems for most PLL settings when Phi60 < 0.4 rad, whereas a sudden increase of Phi60 may cause extremely large tracking errors. This conclusion needs to be further verified by carrying out more experiments with stronger phase scintillation events. Additionally, when the receiver PLL bandwidth increases from 5 to 15 Hz, σ_pha decreases gradually, although the difference is slight; the tracking loop with B_n set to 5 Hz has the largest σ_pha. This indicates that a tracking loop with a lower bandwidth is more susceptible to phase scintillation, in agreement with the conclusions of Knight and Finn (1998). Furthermore, in general, an increase in the tracking loop integration time slightly decreases the phase-induced tracking jitter, as σ_pha in Case 4 is slightly lower than in Case 3, where η is set to 10 ms.

The estimated σ_T for each case is shown in Figure 12. With different loop bandwidths, the thermal noise jitter is generally at different levels but follows similar patterns, and obvious increases in σ_T are seen when scintillation becomes stronger. The PLL with B_n of 5 Hz presents the smallest jitter over the whole period, even lower than that of the clean data of Case 5, while the largest values are seen for the tracking loop with B_n = 15 Hz. This is reasonable, as σ_T is proportional to B_n, as defined in Equation 11. Additionally, the values of σ_T in Cases 3 and 4 are quite close, which means that the integration time has a minor effect on the tracking loop jitter. As a result, a smaller B_n decreases the value of σ_T, but it has the opposite effect on the phase scintillation induced tracking jitter and the oscillator noise, because both increase as B_n decreases. Therefore, there is a trade-off when selecting the PLL bandwidth in order to maximize receiver performance in the presence of scintillation.

PLL Tracking Jitter Validation Using the Discriminator Output

High-frequency I_corr and Q_corr measurements are not output by most commercial and generic receivers.
Therefore, Equation 9 is commonly used to estimate the effect of scintillation on the PLL tracking performance, while Equation 19 offers an alternative way to estimate the PLL carrier phase tracking jitter. In this study, the ISMR is capable of outputting 50 Hz I_corr and Q_corr data from the predetection integration process, which makes it possible to verify the values of tracking jitter estimated by Equation 9 using the discriminator output errors. Figure 13 first presents the variation of σ_δφ, calculated using Equation 15, with time. σ_δφ increases to different extents in each case during the scintillation period, and the largest value is seen in the 14th minute, when both S4 and Phi60 are at high levels. Additionally, σ_δφ in Case 1 is generally the highest when scintillation occurs, followed by those in Cases 2 and 3, which means that an increase in B_n can decrease the total discriminator output noise variance and improve the receiver tracking performance in the presence of scintillation. Furthermore, σ_δφ in Case 4 is slightly higher than in Case 3, indicating that a larger η may decrease the tracking loop jitter; this needs to be further verified with more scintillation events.

To validate the tracking jitter estimated using Equation 9, σ_Δφ_discri is calculated from the discriminator output noise. Figure 14 shows the variation of σ_Δφ and σ_Δφ_discri with receiver tuning. The correlation coefficients, along with the root-mean-square (RMS) of the difference between σ_Δφ and σ_Δφ_discri, are summarized in Table 3. For all four cases, σ_Δφ follows the pattern of σ_Δφ_discri, although there are slight biases between the two. The total estimated tracking jitter σ_Δφ in Cases 1, 2, and 3 is generally higher than σ_Δφ_discri over the whole period, except in Case 1, where σ_Δφ_discri overtakes σ_Δφ during the scintillation occurrence. The RMSs of the differences for Cases 2 and 3 are less than 0.01 rad. In Case 4, σ_Δφ is lower than σ_Δφ_discri throughout; the RMS of the difference is 0.0142 rad in this case, indicating that an increase in the PLL integration time could result in underestimation of the tracking jitter by Equation 9. Additionally, σ_Δφ and σ_Δφ_discri are highly correlated in all four cases, with obvious increases during the scintillation occurrence; both σ_Δφ and σ_Δφ_discri are therefore sensitive to scintillation effects. Moreover, the estimated tracking jitter σ_Δφ is more stable than σ_Δφ_discri in all cases. The explanation may be that, when estimating the tracking jitter, Equation 10 uses the 1-min averaged C/N0 and therefore does not account for the C/N0 fluctuations within 1 min, whereas the calculation of σ_Δφ_discri from I_corr and Q_corr takes every measurement into account, so the variation of C/N0 within 1 min is reflected. In order to further verify the relationship between σ_Δφ and σ_Δφ_discri at a rate of 1 s, the 1-s tracking jitter and discriminator noise are calculated. The 1-s σ_δφ is easy to obtain by applying Equation 15 at 1-s intervals, whereas estimating the tracking jitter σ_Δφ using Equation 9 at a rate of 1 s requires 1-s scintillation indices. Therefore, the following approaches are proposed in this study to calculate the 1-s scintillation indices:

1. The 1-s amplitude scintillation index, denoted S4⁻, is calculated using the same data processing method as for S4, except that the normalization is performed within 1 s. σ_T is then computed using Equation 10 with S4⁻ and the 1-s C/N0 output by the receiver.
2. With 50 Hz carrier phase measurements, there are not enough samples to calculate the phase PSD within 1 s, so the p and T indices for phase scintillation cannot be evaluated using the approaches described in section 2. According to Aquino et al. (2007), the 1-hr averaged spectral slope p leads to values of T in close agreement with those estimated from 1-min discrete values of p. Additionally, the 1-s σ_φ can be estimated from the 50 Hz detrended carrier measurements. Thus, with the hourly averaged p, the 1-s T can be estimated using Equation 8, and the 1-s σ_pha is consequently calculated using the estimated p and T at a rate of 1 s. A sketch of these two steps is given below.
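Under the assumption that the detrended 1-s intensity, the 1-s C/N0, and the 1-s standard deviation of the detrended phase are already available, the two steps above might look as follows; the function names are ours.

import numpy as np

def s4_one_second(p_det_1s, cn0_dbhz_1s):
    """1-s amplitude index S4-: Equations 1-4 with means taken over one 50-sample second."""
    m1, m2 = np.mean(p_det_1s), np.mean(p_det_1s ** 2)
    s4_tot_sq = (m2 - m1 ** 2) / m1 ** 2
    cn0 = 10 ** (0.1 * cn0_dbhz_1s)
    s4_no_sq = (100.0 / cn0) * (1.0 + 500.0 / (19.0 * cn0))
    return np.sqrt(max(s4_tot_sq - s4_no_sq, 0.0))

def t_one_second(sigma_phi_1s, p_hourly, fmin=0.1, fmax=25.0):
    """1-s spectral strength T, inverting Equation 8: sigma_phi^2 = T*(fmax^r - fmin^r)/r."""
    r = 1.0 - p_hourly
    return sigma_phi_1s ** 2 * r / (fmax ** r - fmin ** r)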
Figure 15 shows the 1-s amplitude scintillation index in the 51st minute of Case 3. The top panel shows that the signal intensity and C/N0 fluctuate significantly within the minute; the difference between the maximum and minimum C/N0 can be as large as 15 dBHz, indicating the severe effects of amplitude scintillation on signal quality. The 1-s amplitude scintillation index S4⁻ in the bottom panel shows the level of signal fluctuation in each second. When the signal intensity suffers dramatic variations, e.g., in the 33rd, 35th, and 51st seconds, S4⁻ increases correspondingly. The largest S4⁻ and the minimum C/N0 both occur in the 51st second, with S4⁻ = 0.62 and C/N0 = 35.5 dBHz. Figure 16 shows the histograms of the 1-s S4⁻ samples as a function of the 1-min S4 when it equals 0.3, 0.4, 0.5, and 0.6 in Case 3. For all four levels of S4, the bin at S4⁻ = 0.1 has the largest frequency. When S4 is 0.5 or 0.6, most values of S4⁻ are less than 0.4, which indicates that even when S4 shows moderate levels of scintillation, large parts of the S4 calculation period exhibit only weak scintillation. The 1-s phase scintillation indices σ_φ and T, as well as the variation of the detrended carrier phase measurements in the 51st minute of Case 3, are presented in Figure 17. The detrended carrier phase in the top panel varies rapidly, leading to increases in both the 1-s σ_φ and T indices, which are well correlated within this minute; the largest σ_φ and T are both seen in the 54th second. The 1-s scintillation indices are then used to calculate the 1-s tracking jitters, shown thereafter. The variations of the 1-s σ_Δφ and σ_Δφ_discri in the 51st minute of Case 3 are shown in Figure 18. The tracking jitter shows more fluctuations than in Figure 14; this is because using 1-s scintillation indices and C/N0 to calculate the tracking jitters reflects more details of the signal fluctuations under scintillation. The values of σ_Δφ and σ_Δφ_discri are closer and follow similar patterns. In Case 4, when B_n is 15 Hz and η is 20 ms, σ_Δφ is less than σ_Δφ_discri over the corresponding period of time, in agreement with the results shown in Figure 14. The increases in σ_Δφ and σ_Δφ_discri are directly related to the increase in signal fluctuation, which is also represented by the scintillation indices in Figures 15 and 17.
In Cases 2, 3, and 4, the largest σ_Δφ and σ_Δφ_discri occur in the 51st second, exactly when the largest S4⁻ and the minimum C/N0 occur. However, in Case 1, the largest σ_Δφ and σ_Δφ_discri occur in the 54th second, when strong phase scintillation occurs. This further verifies that a tracking loop with a lower bandwidth is more susceptible to phase scintillation. The RMS of the difference and the correlation coefficients between σ_Δφ and σ_Δφ_discri for each case shown in Figure 18 are summarized in Table 4. The theoretical tracking jitter σ_Δφ in Cases 2 and 3 presents values relatively close to σ_Δφ_discri, with an RMS of the difference of less than 0.01 rad. Additionally, σ_Δφ is highly correlated with σ_Δφ_discri in all cases, which means that both σ_Δφ and σ_Δφ_discri are sensitive to the tracking errors caused by scintillation. Thus, it can be concluded that when high-frequency post-correlation I_corr and Q_corr measurements are available, the value of σ_Δφ_discri can be used to verify the tracking jitters estimated using Equation 9 under scintillation. Moreover, the 1-s scintillation indices and tracking jitters can successfully represent the signal fluctuation under scintillation and the tracking loop performance, reflecting more details of the scintillation-induced signal distortion than the tracking jitter calculated every 1 min. However, there are still differences between σ_Δφ and σ_Δφ_discri, especially when the PLL bandwidth is set to 5 Hz. Therefore, more advanced models need to be developed to better account for scintillation effects on GNSS receiver tracking loops; this will be the focus of future work.

Conclusions and Remarks

This work analyzes the effects of receiver tracking loop tuning on ionospheric scintillation monitoring and PLL tracking jitter estimation. A hardware signal simulator is used to generate GPS scintillation data, which are recorded by a Septentrio ISMR. The following conclusions can be drawn from the analysis:

1. Investigating the effects of receiver tuning on scintillation monitoring shows that S4 is little affected by either loop bandwidth or integration time, while for Phi60, very slight differences are seen across loop configurations. When scintillation effects are minor, a large part of the spectral power lies at high frequency, whereas in the presence of scintillation, the spectral energy between 0.1 and 5 Hz increases significantly, so the phase PSD curve has a steeper slope. Additionally, the results show that p values are not related to the PLL bandwidth, but an increase in the PLL tracking loop integration time correspondingly increases the values of p.
2. In the study of C/N0 variation under scintillation, C/N0 presents quite similar values when the PLL is configured with different tracking loop parameters, which leads to the conclusion that receiver tuning has a minor effect on the C/N0 computation under scintillation. Comparing the C/N0 differences between scintillation-affected and clean signals shows that the C/N0 deviation tends to increase with increasing S4. A model could be established to describe the relationship between C/N0 and S4 levels given more scintillation data samples; this will be part of a future study. The variation of the 1-s C/N0 within 60 s is analyzed under strong amplitude scintillation, and the C/N0 values are seen to fluctuate dramatically.
Thus, the conclusion can be drawn that using the 1-min averaged C/N0 to represent the signal strength and quality under scintillation is not accurate enough, as the signal may change significantly within 1 min.
3. Studying the estimated tracking jitter with receiver tracking loop tuning shows that an increase in PLL bandwidth decreases the level of σ_pha, indicating that a tracking loop with a lower bandwidth is more susceptible to phase scintillation. However, σ_T increases with increasing B_n. There is therefore a trade-off when selecting the tracking loop bandwidth in order to maximize receiver performance in the presence of scintillation.
4. Comparing the 1-min theoretically estimated tracking jitter σ_Δφ with the tracking jitter σ_Δφ_discri estimated using the discriminator output noise shows that σ_Δφ is close to σ_Δφ_discri, although there are slight biases. To calculate the tracking jitter at a rate of 1 s, new approaches are proposed to estimate the scintillation indices at 1-s intervals. The results show that the 1-s scintillation indices and tracking jitters can successfully represent the signal fluctuation under scintillation and the tracking loop performance, reflecting more details of the scintillation-induced signal distortion than the tracking jitter calculated every 1 min. Additionally, the values of the 1-s σ_Δφ approximately match σ_Δφ_discri, except when B_n = 5 Hz; this may be because the theoretical equations in Conker et al. (2003) do not perform well at low loop bandwidth. More advanced models need to be developed to better account for scintillation effects on GNSS receiver tracking loops; this will be the focus of future work.
Predictors of hydrocephalus after lateral ventricular tumor resection

The aim of this study was to identify the predictors of postoperative hydrocephalus in patients with lateral ventricular tumors (LVTs) and to guide the management of perioperative hydrocephalus. We performed a retrospective analysis of patients who received LVT resection at the Department of Neurosurgery, Zhongnan Hospital of Wuhan University between January 2011 and March 2021. Patients were divided between a prophylactic external ventricular drainage (EVD) group and a non-prophylactic EVD group. We analyzed the non-prophylactic EVD group to identify predictors of acute postoperative hydrocephalus, and analyzed all enrolled patients to determine predictors of postoperative ventriculoperitoneal shunt placement. A total of 97 patients were included in this study. EVD was performed in 23 patients with postoperative acute obstructive hydrocephalus, nine patients with communicative hydrocephalus, and two patients with isolated hydrocephalus. Logistic regression analysis showed that tumor invasion of the anterior part of the ventricle (P = 0.020) and postoperative hemorrhage (P = 0.004) were independent risk factors for postoperative acute obstructive hydrocephalus, while a malignant tumor (P = 0.004) was an independent risk factor for a postoperative ventriculoperitoneal shunt. In conclusion, anterior invasion of the lateral ventricle and postoperative hemorrhage are independent risk factors for acute obstructive hydrocephalus after LVT resection, and patients with malignant tumors have a greater risk of shunt dependence after LVT resection.

Introduction

Ventricular tumors refer to lesions originating in ventricular structures or to secondary ventricular neoplasms originating in periventricular tissues with most of the neoplasm (more than two-thirds) invading the ventricle (5). Resection is the main treatment for ventricular tumors (19). Tumor resection can restore the cerebrospinal fluid (CSF) circulation pathway, and some lateral ventricular tumor (LVT) patients show relief of intracranial hypertension symptoms after resection. However, patients can also develop acute or persistent hydrocephalus after resection, with a requirement for CSF drainage (10). Postoperative hydrocephalus includes acute obstructive hydrocephalus, communicative hydrocephalus, and isolated hydrocephalus. Many studies have reported an association between subtentorial ventricular tumors and hydrocephalus, especially in pediatric patients. By contrast, the relationship between supratentorial ventricular tumors and hydrocephalus remains unclear. Nevertheless, studies without regression analysis suggest that hydrocephalus after LVT resection may be related to the surgical approach, tumor location, degree of resection, and displacement of hemostatic material (6, 7, 14). Thus, the aim of the present study was to identify predictive factors for hydrocephalus after LVT resection using regression analysis, to help identify patients at high risk of postoperative acute hydrocephalus and shunt dependence.

Materials and Methods

Study population and data collection - From January 2011 to February 2021, a total of 762 patients who were >18 years old and previously diagnosed with LVTs by outpatient computerized tomography (CT) or magnetic resonance imaging (MRI) were admitted to the Neurosurgery Department, Zhongnan Hospital of Wuhan University.
Exclusion criteria were patients with non-resectable tumors, patients presenting for biopsy, patients who had a ventriculoperitoneal (VP) shunt or endoscopic third ventriculostomy (ETV) performed before resection, and patients who were found not to have a tumor postoperatively. Clinical information was recorded for all enrolled patients, with a follow-up duration ranging from 90 days to 6 years. Clinical data and radiological records were obtained from the hospital's electronic database. We recorded information on sex, age (years), tumor location, tumor size, pathological results, presence of preoperative or postoperative hydrocephalus, resection range, postoperative hemorrhage, perioperative external ventricular drainage (EVD), ETV, and VP-shunts. Diagnostic criteria for hydrocephalus were symptoms of cranial hypertension and imaging results indicating an Evans' index ≥30% (12). The criteria for preoperative and postoperative EVD were acute hydrocephalus with cranial hypertension symptoms and a radiographic diagnosis (8). All patients with postoperative acute hydrocephalus received EVD in our department. The time allowed for drainage removal was 14 days. The criteria for drainage removal were: (i) the patient was in a stable condition, with raising of the drainage height over a few days followed by closing of the drainage for at least 12 h; and (ii) a CT scan negative for 24 h before drainage removal. If the drainage was difficult to remove, we performed a VP-shunt. Patients with EVD required for postoperative cerebrospinal fluid (CSF) leakage or subcutaneous effusion were excluded from this study. The criteria for a post-resection VP-shunt were EVD weaning failure, symptomatic chronic hydrocephalus, or an isolated ventricle requiring permanent drainage; cases with intracranial infection were excluded. On the basis of preoperative MRI, we classified LVTs as either anterior invasion tumors (i.e., tumors invading the anterior part of the lateral ventricle) or tumors without anterior invasion. Tumor size was calculated using the longest axis of the maximum cross-sectional area of the tumor on MRI. The degree of tumor resection was determined by MRI or CT within 72 h after surgery (16). Postoperative hemorrhage was confirmed by postoperative CT.

Statistical analysis - Data were analyzed using statistical software (IBM SPSS Statistics v22 and v24). The Student's t-test was used for comparisons between two groups. Binary parameters were analyzed with the chi-square test. Multivariate logistic regression analysis was performed to find independent predictors of EVD placement. Odds ratios (ORs) and 95% confidence intervals were calculated to assess the impact of the variables. P < 0.05 was considered statistically significant.
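To illustrate this type of analysis, the hedged Python sketch below fits a multivariate logistic regression and derives odds ratios with 95% confidence intervals, in the same spirit as the SPSS analysis described above. It runs on synthetic data only; the predictor names and effect sizes are invented for illustration and are not the study's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
anterior = rng.integers(0, 2, n)        # anterior ventricular invasion (0/1), simulated
hemorrhage = rng.integers(0, 2, n)      # postoperative hemorrhage (0/1), simulated
logits = -2.0 + 1.5 * anterior + 2.0 * hemorrhage
evd = rng.binomial(1, 1 / (1 + np.exp(-logits)))   # outcome: postoperative EVD, simulated

X = sm.add_constant(pd.DataFrame({"anterior": anterior, "hemorrhage": hemorrhage}))
fit = sm.Logit(evd, X).fit(disp=0)
ci = fit.conf_int()
print(pd.DataFrame({"OR": np.exp(fit.params),       # odds ratios
                    "CI 2.5%": np.exp(ci[0]),       # lower 95% bound
                    "CI 97.5%": np.exp(ci[1])}))    # upper 95% bound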
Patient characteristics - Ninety-seven of the 112 patients were enrolled, with 15 patients excluded due to the inclusion/exclusion criteria, including six patients without resection, five patients lost to postoperative follow-up, two patients with multiple intracranial tumors, one patient with a biopsy, one patient with a pre-resection VP-shunt, two patients with simultaneous invasion of the subtentorial compartment, and two cases with a postoperative diagnosis of brain abscess. The average age of the enrolled patients was 42.2 years (range, 6-79 years), with 47 male patients (48.5%) and 50 female patients (51.5%).

Pathological results showed 32 patients with meningiomas, 21 with central neurocytomas, 18 with gliomas, 11 with ependymomas, five with choroid plexus papillomas, three with hemangiomas, two with germinomas, and five with other pathological results. Among these patients, 31 received prophylactic EVD before or during surgery, 10 of the 66 patients without prophylactic EVD developed acute hydrocephalus and received EVD after tumor resection, and 11 of all patients received a post-operative VP-shunt. The tumor pathology and location data are shown in Table 1. In multivariate analysis, anterior invasion (OR = 24.71) and postoperative hemorrhage (OR = 43.47) were independent risk factors for postoperative EVD due to acute hydrocephalus (Table 3).

Predictors of a postoperative VP-shunt - Eleven (11.3%) of all patients received a VP-shunt postoperatively, including two (2.1%) patients with a prophylactic EVD. The mean implantation time was 62.4 days (range, 14-191 days). There were five patients with obstructive hydrocephalus, four with communicative hydrocephalus, and two with isolated hydrocephalus. Two patients with tumors located at the occipital angle of the lateral ventricle received a VP-shunt for post-resection isolated hydrocephalus. It was difficult to achieve complete intraoperative resection for the patient with a pathological finding of glioblastoma, in whom obstructive hydrocephalus had been identified before resection. Indeed, postoperative MRI revealed that the obstruction was not optimally removed because of the occupying effect and adhesion of the residual tumor. Because this patient continued to show intracranial hypertension, a VP-shunt was inserted. The second patient had a meningioma with complete intraoperative resection; however, postoperative MRI showed obstruction caused by cerebral tissue adhesion accompanied by intracranial hypertension (Figure 3).

Declarations

Funding: This work was supported by the Technical Innovation Special Task of Hubei Province of China (grant number 2018ACA139). Conflicts of interest: Chengda Zhang Ph.D. has nothing to disclose, Lingli Ge Ph.D. has nothing to disclose, Tingbao Zhang has nothing to disclose, Zhengwei Li Ph.D. has nothing to disclose, Jincao Chen Prof., Ph.D., MD has nothing to disclose. The authors disclosed receipt of financial support for the research, authorship, and/or publication of this article.

Discussion

EVD placement can be used to drain CSF and regulate intracranial pressure. Drainage placement can also prevent intracranial hypertension caused by acute hydrocephalus. Intraoperative EVD placement is most convenient for surgeons because the tube can be placed through the surgical channel. By contrast, for patients without prophylactic EVD who develop acute hydrocephalus after resection, the ventricular puncture site for EVD placement must be selected in the emergency unit, which increases the risks of brain injury and other morbidities. Nevertheless, intraoperative EVD placement still carries risks, including intracranial infection, an increased risk of CSF leakage, and a potential risk of excessive drainage. The aim of the present study was to identify patients at high risk of acute hydrocephalus after LVT surgery to guide prophylactic EVD placement, and to analyze the risk factors for post-resection VP-shunt placement. To the best of our knowledge, this regression analysis is the first to identify the characteristics of tumor location and other risk factors for hydrocephalus after LVT resection.
We found that tumor invasion of the anterior part of the ventricle was an independent risk factor for postoperative EVD due to acute symptomatic hydrocephalus. Deling et al. reported that hydrocephalus tends to develop after LVT resection when the tumor base is located at the lateral ventricular wall, dorsal thalamus, choroid plexus, or third ventricle (near the foramen of Monro) (6), although statistical confirmation was not performed. Anatomically, the posterior internal choroidal artery expands radially through the foramen of Monro and is the main blood supply vessel of the anterior part of the ventricle (21). During resection of tumors located in or invading the anterior part of the lateral ventricle, damage to these branching vessels may increase brain tissue swelling around the midbrain aqueduct after surgery, thereby narrowing the CSF pathway. For tumors invading the anterior ventricle wall or the aqueduct of the lateral ventricle, postoperative tissue adhesion may cause obstruction (22). Ktari et al. reported that postoperative obstructive hydrocephalus can be caused by displacement of intraventricular hemostatic materials and the inflammatory reaction associated with Gelfoam residue, with a marked surrounding giant cell reaction with underlying fibrosis, thrombosis of small superficial vessels, and reactive microglia (14). In the present study, postoperative hemorrhage was also an independent risk factor for postoperative EVD due to acute symptomatic hydrocephalus. Postoperative hemorrhage is a serious complication of LVT surgery, which typically manifests as intraventricular hemorrhage, and approximately 50% of patients with intraventricular hemorrhage develop hydrocephalus (3, 11). Importantly, blood can stimulate the production of CSF (22), while the mass effect of the hematoma can obstruct the CSF pathway and cause symptoms of intracranial hypertension, which require emergency CSF drainage (4, 9). We found that incomplete resection was not a risk factor for postoperative EVD. Clinically, complete tumor resection is the main surgical goal. However, some LVTs are difficult to resect completely because of their extensive blood supply, unclear boundaries, or tight adhesion to normal brain tissue. To protect normal brain tissue and blood vessels and to avoid severe postoperative intracranial edema and intracranial hypertension, our typical surgical goal is to achieve decompression and improve CSF circulation (18). Although residual tumors are a cause of recurrence, slow-growing tumors do not generally cause acute intracranial hypertension. Patients with incomplete resection require regular follow-up and review; if necessary, a secondary surgery or VP-shunt placement can be performed. In the present study, the presence of a malignant tumor was the only independent risk factor for VP-shunt placement after LVT surgery; of the eleven patients with a VP-shunt, ten had a malignant tumor. Among patients with subtentorial ventricular tumors, pediatric patients have a higher incidence of malignant tumors (e.g., medulloblastoma) and a higher rate of post-resection hydrocephalus (1, 13, 15, 24). The types and corresponding bases of supratentorial LVTs tend to differ with age. For example, choroid plexus papilloma, ependymoma, and central neurocytoma mainly occur in pediatric and juvenile patients and are mostly benign, whereas meningiomas and gliomas are most common in adults.
Malignant tumors may impair CSF absorption because of leptomeningeal metastases at the subarachnoid level and the high CSF protein content produced by disseminated tumor cells (2, 17, 20). The high invasiveness of malignant tumors makes them difficult to resect; thus, they can rapidly relapse after surgery, produce a mass effect, and cause obstructive hydrocephalus. Interestingly, patients with radiation-induced brain atrophy can exhibit mildly elevated CSF pressure because of impaired CSF flow and reduced reabsorption caused by fibrosis of the arachnoid granulations (23). A VP-shunt is an alternative treatment for recurrent malignant LVT with symptomatic hydrocephalus. In the present study, two patients with LVTs located at the occipital angle of the lateral ventricle developed isolated hydrocephalus after resection: one patient had a glioblastoma that was difficult to resect completely, while the other patient, despite complete resection, showed persistent brain tissue adhesion and postoperative obstruction. Ma et al. reported that excessive CSF loss through ventricular drainage can cause intracranial hemorrhage and ventricular wall adhesion, increasing the risk of localized hydrocephalus (18). However, the meningioma patient in the present study did not receive EVD during surgery. Based on the preoperative and postoperative MRI findings, we considered that this was related to the ventricle morphology around the tumor. The tumor's large preoperative volume expanded the local ventricle and surrounding brain tissues, while the ventricular opening around the tumor was relatively narrow. After removal of the mass effect caused by the tumor, the surrounding brain tissue collapsed. The wide base of the tumor resulted in a large surgical area in the ventricle, which aggravated postoperative peritumoral brain tissue edema, leading to compression and adhesion of the narrow part of the ventricle and development of isolated hydrocephalus. For such wrapped tumors, we suggest timely postoperative imaging examination and enhanced dehydration treatment. We also recommend that the distal end of the shunt tube be placed across the ventricular stenosis, with particular attention paid to postoperative management of EVD to maintain an ideal intracranial pressure.

Limitations

There are some limitations to our study. First, because our postoperative follow-up time varied from 3 months to 6 years, it remains unclear whether patients with a short follow-up time would develop hydrocephalus; this may have introduced bias into our results. Second, because ETV was performed in only a few patients with LVT in our center, evaluation of the utility of ETV was limited. Finally, there are differences in surgical procedures and perioperative management between medical centers, which may influence our statistical findings. Further prospective studies with larger samples are required to confirm our findings.

Conclusion

Anterior invasion of LVT and postoperative hemorrhage play a critical role in the development of post-resection acute hydrocephalus. Intraoperative placement of EVD and proper management of intracranial pressure are recommended for tumors invading the anterior part of the lateral ventricle. Patients with malignant LVTs were more likely to receive a post-resection VP-shunt, and tumors wrapped by the ventricles are more likely to result in isolated hydrocephalus after surgery. These findings may help in identifying patients at risk of developing hydrocephalus after LVT surgery and may aid preoperative communication.
Figure legends: The red box shows the anterior part of the lateral ventricle. A. The pre-resection axial T2 image shows the right lateral ventricular tumor. B. The post-resection axial T2 image shows the isolated hydrocephalus.
Effect of Monosodium Glutamate on Saltiness and Palatability Ratings of Low-Salt Solutions in Japanese Adults According to Their Early Salt Exposure or Salty Taste Preference

Using umami can help reduce the excessive salt intake that contributes to cardiovascular disease. Differences in the salt exposure environment at birth and in preference for salty taste might affect the sense of taste. Focusing on these two differences, we investigated the effect of monosodium L-glutamate (MSG) on the saltiness and palatability of low-salt solutions. Japanese participants (64 men, 497 women, aged 19–86 years) tasted 0.3%, 0.6%, and 0.9% NaCl solutions with or without 0.3% MSG and evaluated their saltiness and palatability. They were also asked about their birthplace, personal salty taste preference, and family salty taste preference. Adding MSG enhanced saltiness, especially in the 0.3% NaCl solution, while the effect was attenuated in the 0.6% and 0.9% NaCl solutions. Palatability was rated higher with MSG than without MSG for each NaCl solution, with a peak value for the 0.3% NaCl solution with MSG. There was no difference in the effect of umami ingredients on palatability between levels of average salt intake by regional block at birth or levels of salty taste preference (all p > 0.05). Thus, adding an appropriate amount of umami ingredients can facilitate dietary salt reduction while maintaining palatability, regardless of the salt exposure environment in early childhood or salty taste preference.

Introduction

The importance of sodium reduction as a practical prevention measure for cardiovascular disease is widely known [1,2]. However, salt intake in almost all countries exceeds the World Health Organization's (WHO's) recommendation of <5 g/day [3]. High salt intake was reported as a leading dietary risk factor accounting for over 3 million deaths and 70 million disability-adjusted life years in 2017 [4]. Salt reduction is an urgent issue worldwide [5]. In Japan, the National Health and Nutrition Survey in 2019 reported that the salt intake of Japanese adult men was 10.9 g/day and that of adult women was 9.3 g/day [6]; these amounts are approximately twice the amount recommended by the WHO. Since the 1950s, stroke has been the leading cause of death in Japan [7]. Tomonari et al., in their investigation of the mutual relationship among salt intake, blood pressure, and stroke mortality in 12 regions of Japan, reported that salt intake was an independent factor for stroke mortality [8]. Salt intake has been found to be higher in Tohoku than in other regions of Japan; indeed, in 1980, Tohoku had the highest salt intake (15.8 g/day, Table 1). The intake of miso and pickles has also been reported to be higher in Tohoku than in other regions [9].

Table 1. Average salt intake in 1980, 1990, and 2000 for the 12 regional blocks in Japan †, along with the number of participants born in each regional block (e.g., Kanto: 11.0 g/day, level L; group totals: born-before-the-1980s group, N = 308 §; born-in-the-1990s group, N = 141 ¶; born-in-the-2000s group, N = 112 ††). †: Regional blocks are categorized by the National Nutrition Survey in Japan and listed in order from north to south. ‡: Participants are classified into three levels according to the average salt intake by regional block at birth (L: low <12.6 g, M: middle ≥12.6 and <13.4 g, H: high ≥13.4 g). §: Number of participants born in each regional block before 1989. ¶: Number of participants born in each regional block during 1990-1999. ††: Number of participants born in each regional block during 2000-2001.

Takachi et al.
reported that self-reported taste preference for miso soup was significantly associated with 24-h urinary sodium excretion and daily salt intake [10]. Food preferences established in early childhood continue into later life [11]. Salt preference is affected by the dietary habits of pregnant mothers and by experiences with food during the first year of life [11,12]. Therefore, those who experienced high salt exposure during early childhood may have a strong salty taste preference. Moreover, salty taste preference can also be influenced by the family's salt use habits [13,14]. Many studies have reported that the use of glutamate is an effective way to reduce salt intake while maintaining the palatability of food [15,16]. Soup is a common dish worldwide, and miso soup is a daily food in Japan. It has been reported that glutamates such as monosodium glutamate (MSG) and calcium diglutamate (CDG) enhance the palatability of low-salt soups, such as clear soup, pumpkin soup, and chicken broth, and contribute to salt reduction [17-19]. Umami taste perception varies significantly among individuals. The differences in sensitivity can result from genetic variations in taste receptors [20], familiarity with umami [21,22], or hormonal levels [23]. Furthermore, umami taste perception can be enhanced by repeated exposure [20,24]. The interaction between umami and salt perception, however, remains unclear. The effect of adding MSG to low-salt solutions may be influenced by the salt exposure environment in early childhood and by family salty taste preference. However, no studies have investigated the relationship between the effect of umami in low-salt solutions and the salt exposure environment in early childhood or salty taste preference. In this study, we focused on the differences in salt exposure environment in early childhood and current salty taste preference, and aimed to investigate the effect of adding sodium glutamate on the saltiness and palatability of low-salt solutions.

Study Design

This study was conducted from July 2017 to November 2018 in Japan. Sensory evaluations were performed and a questionnaire survey was administered at eight universities and 11 health seminars in 13 prefectures of Japan (Aomori, Miyagi, Tokyo, Chiba, Saitama, Kanagawa, Shizuoka, Nara, Hiroshima, Fukuoka, Nagasaki, Kagoshima, and Okinawa). The study protocol and materials have been described in detail elsewhere [25]. The experiments were approved by the Research Ethics Committee at Fukuoka Women's University (No. 2016-31) for students and by the Research Ethics Review Committee at Nara Women's University (No. 18-02) for attendees of the health seminars. All participants provided signed informed consent prior to the study. The approved experiments were registered in the University Hospital Medical Information Network Clinical Trials Registry (UMIN000035280 and UMIN000035289).

Participants

The participants were 259 students from eight universities and 392 attendees from 11 health seminars. From the total of 651 participants, we excluded 20 participants who had a taste disorder, eight who did not answer the question on taste disorder in the questionnaire, and 40 who did not complete the sensory evaluation test. In addition, we excluded 14 participants who did not answer the questionnaire and eight who were not from Japan. Ultimately, 561 participants (64 men and 497 women) were included in the analysis. The flow chart of the participants is shown in Figure 1.
Sensory Evaluation

Previously, we examined the saltiness, umami, and palatability of 48 aqueous solutions containing eight different concentrations of NaCl (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9%) and six different concentrations of MSG (0.1, 0.2, 0.3, 0.4, 0.5, and 0.6%) by sensory evaluation among female students and teachers [26]. Based on the results of that examination, six samples were prepared: 0.3%, 0.6%, and 0.9% NaCl solutions with or without 0.3% MSG. Saltiness and palatability ratings were assessed using a visual analogue scale (VAS) [27]. The VAS represented a minimum rating at the left end (not at all salty, or extremely unpalatable) and a maximum rating at the right end (extremely salty, or extremely palatable) for each sample. Each solution was tested twice, with the order of the samples changed after 15 min or more. The participants were asked to rinse their mouths with water before and after each sample evaluation.

Questionnaire Survey

Using a questionnaire, the participants were asked about their individual characteristics of sex, age (years), smoking habit (current, former, and never), use of medication (yes and no), and birthplace. In addition, they were asked about the degree of their personal salty taste preference (very light, light, middle, strong, and very strong), the degree of their family's salty taste preference (very light, light, middle, strong, and very strong), and the degree of their salt reduction efforts (always, sometimes, rarely, and never).

Average Salt Intake by Regional Block at Birth

In the present study, the average salt intake by regional block at birth was defined as the environmental indicator of salt exposure in early childhood. The average salt intake was obtained from the results of the National Nutrition Survey from 1980 to 2000, which has been conducted every year since 1946. There were no data on the average salt intake by regional block prior to 1979. We adopted the 1980 salt intake data because salt intake might have been higher before 1980 than after, according to the annual changes in the salt intake of the Japanese population from 1972 to 1980 [9]. Participants were divided into three groups by generation: the born-in-the-2000s group (19 to 20 years old), the born-in-the-1990s group (21 to 30 years old), and the born-before-the-1980s group (over 31 years old). The average salt intake data for 2000, 1990, and 1980 were then used for the born-in-the-2000s, born-in-the-1990s, and born-before-the-1980s groups, respectively. The birthplaces were also divided into 12 regional blocks according to the National Nutrition Survey in Japan (Table 1). Eventually, the averages of salt intake by regional block at birth in 1980, 1990, and 2000 were obtained for the 12 regional blocks.

Statistical Analysis

The average salt intake by regional block at birth was coded by tertiles as low (less than 12.6 g/day), middle (12.6 g/day or more and less than 13.4 g/day), and high (13.4 g/day or more). Moreover, the degrees of personal and family salty taste preference were each categorized into three groups: light (very light and light), middle, and strong (very strong and strong). The first and second taste evaluations of the saltiness and palatability of the six solutions were averaged. Chi-square tests were used to analyze the differences between sex and age groups. The normality of the data distribution was tested using the Shapiro-Wilk test, and the distribution was not normal. Normal distribution methods were nevertheless used because the VAS ratings were not concentrated around either extreme of the scale [28]. Repeated measures analysis of variance (ANOVA) was used to compare the saltiness or palatability ratings of the six solutions, and pairwise comparisons were then made using Tukey's honestly significant difference procedure. In addition, analysis of covariance (ANCOVA) was employed to compare the saltiness or palatability ratings of the six solutions between groups based on sex, age group, salt intake by regional block at birth, personal salty taste preference, and family salty taste preference, adjusted for sex, age group (19-20 years, 21-40 years, and ≥41 years), smoking habit (current, former, and never), salt reduction efforts (always, sometimes, and rarely/never), and use of medicine (yes and no). Statistical significance was defined as p < 0.05. Statistical analysis was conducted using JMP statistical software (JMP Pro 15.1.0, SAS Institute Inc., Cary, NC, USA).
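To make the coding scheme and the analysis pipeline concrete, the hedged Python sketch below first derives the tertile salt-intake levels, the three preference groups, and the averaged VAS rating, and then runs the corresponding repeated-measures ANOVA, Tukey HSD, and covariate-adjusted comparison on synthetic long-format data. The study itself used JMP; all column names, group sizes, and effect sizes here are invented for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import AnovaRM, anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Coding of the questionnaire variables (illustrative values)
subj = pd.DataFrame({
    "salt_intake_birth": [11.0, 12.8, 14.6, 13.0, 15.8],   # g/day, regional block at birth
    "personal_pref": ["light", "very strong", "middle", "strong", "very light"],
})
subj["salt_level"] = pd.cut(subj["salt_intake_birth"],
                            bins=[-np.inf, 12.6, 13.4, np.inf],
                            labels=["low", "middle", "high"], right=False)
pref_map = {"very light": "light", "light": "light", "middle": "middle",
            "strong": "strong", "very strong": "strong"}
subj["pref_group"] = subj["personal_pref"].map(pref_map)

# Synthetic long-format ratings: 30 subjects x 6 solutions, two trials averaged
rng = np.random.default_rng(1)
solutions = ["0.3", "0.6", "0.9", "0.3+MSG", "0.6+MSG", "0.9+MSG"]
n_subj = 30
long = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), len(solutions)),
    "solution": np.tile(solutions, n_subj),
})
trial1 = rng.normal(50, 10, len(long))
trial2 = rng.normal(50, 10, len(long))
long["rating"] = (trial1 + trial2) / 2 + np.where(long["solution"].str.endswith("MSG"), 8.0, 0.0)
long["age_group"] = np.repeat(rng.choice(["19-20", "21-40", ">=41"], n_subj), len(solutions))

# Repeated-measures ANOVA across the six solutions
print(AnovaRM(long, depvar="rating", subject="subject", within=["solution"]).fit())

# Tukey HSD pairwise comparisons between solutions
print(pairwise_tukeyhsd(long["rating"], long["solution"]))

# ANCOVA-style model: solution effect adjusted for a covariate (age group here)
print(anova_lm(smf.ols("rating ~ C(solution) + C(age_group)", data=long).fit(), typ=2))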
Results

Table 2 presents the characteristics of the participants by sex and age group. The majority of the participants were female (88.6%) and never smokers (92.5%). The percentage of participants who were over 41 years old was 54.2% (mean age ± SD: 45.45 ± 23.99 years). The percentage of participants with a high salt intake by regional block at birth was higher in the over-41-years-old age group than in the other age groups. Regarding salt reduction efforts, 72.5% of participants answered that they engaged in such efforts "always" or "sometimes" (approximately 50% of the over-41-years-old age group). Thirty-four percent of the participants were taking medications, which was particularly noticeable in the over-41-years-old age group (87.4%). (Table 2 notes: participants are classified into three levels according to the average salt intake of the 12 regional blocks in 1980, 1990, and 2000 from the National Nutrition Survey in Japan (low: <12.6 g, middle: ≥12.6 and <13.4 g, high: ≥13.4 g); ‡: p for the comparisons between men and women; §: p for the comparisons between the three age groups (19-20 years, 21-40 years, and ≥41 years) using chi-square tests.)

Saltiness

The means and standard errors (SEs) of the saltiness VAS ratings by sex, age group, the three levels of salt intake by regional block at birth, and the three levels of salty taste preference are shown in Table 3. Repeated measures ANOVA showed significant differences in the saltiness ratings of the six solutions in all groups based on sex, age group, the three levels of salt intake by regional block at birth, and the three levels of salty taste preference (all p < 0.001). The higher the NaCl concentration, the higher the saltiness rating. The post-hoc tests showed that the 0.3% NaCl solution with MSG had significantly higher saltiness ratings than the solution without MSG in all groups (all p < 0.05). The 0.6% NaCl solution with MSG showed a significantly higher rating than the solution without MSG in the 21-40-years-old age group (p < 0.05). ANCOVA showed significantly different ratings for each NaCl solution with and without MSG among the three age groups, except for the 0.3% NaCl solution alone (0.3% NaCl alone, p = 0.304; other solutions, p < 0.001), with the lowest ratings in the over-41-years-old age group.
As for salt intake by regional block at birth, there was a significantly different rating for the 0.9% NaCl solution with MSG among the three groups (p = 0.008), and the saltiness ratings of the low-level group were higher than those of the other groups for all solutions. Additionally, with regard to personal and family salty taste preference, there were significantly different ratings for the 0.3% NaCl solution without MSG among the three groups (personal salty taste preference, p = 0.002; family salty taste preference, p = 0.028), and the ratings of the strong groups for the 0.3% NaCl solution without MSG were lower than those of the other groups.

Notes to Table 3: Values are means ± standard errors (SEs). A single underline marks the highest rating among the 0.3%, 0.6%, and 0.9% NaCl solutions alone; a double underline marks the highest rating among the 0.3%, 0.6%, and 0.9% NaCl solutions with MSG. Significant differences between ratings are indicated by alphabetic superscripts: a rating differs significantly from others with different superscript letters according to Tukey's test (p < 0.05). †: Participants are classified into three levels according to the average salt intake of the 12 regional blocks in 1980, 1990, and 2000 from the National Nutrition Survey in Japan (low: <12.6 g, middle: ≥12.6 and <13.4 g, and high: ≥13.4 g). ‡: p for repeated measures ANOVA. §: p for ANCOVA adjusted for sex, age group (19-20 years, 21-40 years, and ≥41 years), smoking habit (current, former, and never), salt reduction efforts (always, sometimes, and rarely/never), and use of medication (yes and no).

Palatability

The means and SEs of palatability VAS ratings by sex, age group, the three levels of salt intake by regional block at birth, and the three levels of salty taste preference are shown in Table 4. Repeated measures ANOVA showed significant differences in the average ratings of the six solutions within all groups based on sex, age group, the three levels of salt intake by regional block at birth, and the three levels of salty taste preference (all p < 0.001). The post-hoc tests showed that the 0.3%, 0.6%, and 0.9% NaCl solutions with MSG received significantly higher palatability ratings than those without MSG in all groups (all p < 0.05), except for the 0.9% NaCl solution in the male group (p = 0.069). Regardless of age group, salt intake by regional block at birth, and salty taste preference, adding MSG significantly enhanced the solutions' palatability. The palatability ratings of the 0.3% NaCl solution with MSG were almost twice as high as those of the 0.3% NaCl solution without MSG in all groups. Moreover, the palatability rating of the 0.3% NaCl solution with MSG was the highest among the six solutions in the female group, while there was no significant difference between the ratings of the 0.3% and 0.6% NaCl solutions with MSG in the male group, the three age groups, the three levels of salt intake by regional block at birth, and the three levels of salty taste preference. ANCOVA showed significantly different palatability ratings among age groups for the 0.6% NaCl solution without MSG (p = 0.040), with the rating of the over-41 age group being lower than that of the other two groups.
There were significantly different palatability ratings among the three levels of personal salty taste preference for the 0.9% NaCl solution with MSG (p = 0.007), with the rating of the light group being lower than that of the other groups. Additionally, peak palatability ratings were observed for the 0.3% NaCl solution with MSG, and there were no significant differences in palatability ratings among the solutions with peak ratings in each group (sex, p = 0.273; age groups, p = 0.147; salt intake by regional block at birth, p = 0.642; personal salty taste preference, p = 0.624; family salty taste preference, p = 0.989). Among the 0.3%, 0.6%, and 0.9% NaCl solutions without MSG, the 0.6% NaCl solution had the highest palatability rating in all groups except the male group, which had a peak rating for the 0.9% NaCl solution. Among the 0.3%, 0.6%, and 0.9% NaCl solutions with MSG, the 0.3% NaCl solution had the highest rating across all groups. Moreover, all solutions with MSG had peak ratings at a lower NaCl concentration than the solutions without MSG, regardless of sex, age, level of salt exposure environment in early childhood, and salty taste preference.

Notes to Table 4: Values are means ± standard errors (SEs). A single underline marks the highest rating among the 0.3%, 0.6%, and 0.9% NaCl solutions alone; a double underline marks the highest rating among the 0.3%, 0.6%, and 0.9% NaCl solutions with MSG. Significant differences between ratings are indicated by alphabetic superscripts: a rating differs significantly from others with different superscript letters according to Tukey's test (p < 0.05). †: Participants are classified into three levels according to the average salt intake of the 12 regional blocks in 1980, 1990, and 2000 from the National Nutrition Survey in Japan (low: <12.6 g, middle: ≥12.6 and <13.4 g, and high: ≥13.4 g). ‡: p for repeated measures ANOVA. §: p for ANCOVA adjusted for sex, age group (19-20 years, 21-40 years, and ≥41 years), smoking habit (current, former, and never), salt reduction efforts (always, sometimes, and rarely/never), and use of medication (yes and no).

Discussion

This study demonstrated that MSG enhanced the palatability of low-salt solutions, regardless of sex, age, salt intake by regional block at birth, and salty taste preference. Additionally, the 0.3% NaCl solution with MSG showed peak palatability ratings regardless of sex, age, salt intake by regional block at birth, and salty taste preference. This is the first study to investigate the effect of MSG, an umami ingredient, on low-salt solutions while considering both the salt exposure environment in early childhood and current salty taste preference. In a previous study, we presented the results of 584 participants evaluating six solutions (0.3%, 0.6%, and 0.9% NaCl solutions with or without 0.3% MSG); the results suggested that MSG enhanced the palatability of low-salt solutions regardless of sex, age, region, smoking habit, two hours of fasting, and medication [25]. The present study investigated the effect of MSG on low-sodium solutions across sex, age, level of salt exposure environment in early childhood, and salty taste preference, to contribute to the generalization of the effects of MSG.
In this study, saltiness ratings depended on the NaCl concentration of the solution, while palatability ratings were independent of NaCl concentration and peaked at a lower NaCl concentration (0.3% NaCl with 0.3% MSG) than the concentration (around 1.0% NaCl) at which Japanese individuals generally consume soup. Previous studies investigating the interaction of NaCl and umami (MSG or CDG) in different types of soups showed that saltiness depended on the NaCl concentration of the solution, while palatability ratings followed a parabolic curve with peaks at a different, medium-salty NaCl concentration [17][18][19]. Our results were broadly consistent with these previous reports. A significant enhancing effect of adding MSG on saltiness was observed for the 0.3% NaCl solution regardless of sex, age, salt intake by regional block at birth, and salty taste preference. Although the detailed mechanism has not been clarified, it is known that saltiness sensitivity decreases with aging [29,30]. Barragan et al. conducted an evaluation test on participants aged 18-80 years and reported that the 37-50-year-old and 51-80-year-old groups had significantly reduced salty taste compared to the 18-36-year-old group [31]. In this study, the saltiness ratings of the over-41 age group were lower than those of the two younger age groups, a trend consistent with previous studies [29][30][31]. For the level of salt intake by regional block at birth, the saltiness ratings of the low-level group for the 0.9% NaCl solution with MSG were higher than those of the other groups. This means that those who were exposed to low levels of salt in early childhood were more sensitive to salty taste than those exposed to high salt levels. These results support previous studies reporting that taste preferences established under the influence of food experiences in early childhood continue through life [11,12]. In our previous study, MSG enhanced the palatability ratings of the 0.3%, 0.6%, and 0.9% NaCl solutions, and the 0.3% NaCl solution with MSG showed the highest palatability ratings. In addition, we suggested that the 0.3% NaCl solution with MSG (Na: 0.155 g/100 mL) might allow a reduction of approximately 60% in sodium compared to the 0.9% NaCl solution with MSG (Na: 0.391 g/100 mL) without a loss of palatability [25]. In the present study, MSG significantly enhanced the palatability ratings of the 0.3%, 0.6%, and 0.9% NaCl solutions among all groups, with the exception of the 0.9% NaCl solution in males. Moreover, the 0.3% NaCl solution with MSG showed the highest palatability regardless of sex, age, salt intake by regional block at birth, and salty taste preference. The enhancing effect of MSG was the same in the over-41 age group, with its weakened salty perception, as in the two younger age groups. As for palatability in males, the 0.9% NaCl solution obtained the peak rating among the three NaCl solutions without MSG, and this rating was higher than that given by the female group and the groups based on other classifications. While the sense of palatability in males might be low, the peak palatability rating shifted from the 0.9% to the 0.3% NaCl solution on the addition of MSG. This indicates that adding MSG might be effective in reducing salt intake in males. In this study, we investigated the influence of the salt exposure environment in early childhood on the palatability of low-salt solutions with MSG.
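The approximately 60% sodium reduction quoted above can be verified with a back-of-the-envelope calculation. The sketch below assumes w/v percentages and that the MSG is the monohydrate (molar mass 187.13 g/mol); under those assumptions it reproduces the per-100 mL sodium values cited from [25].

```python
# Sodium contributed by NaCl and MSG per 100 mL of solution (w/v percentages).
NA, NACL, MSG_MONOHYDRATE = 22.99, 58.44, 187.13  # molar masses in g/mol

def sodium_g_per_100ml(nacl_pct: float, msg_pct: float) -> float:
    """Grams of Na per 100 mL given NaCl and MSG concentrations in % w/v."""
    return nacl_pct * NA / NACL + msg_pct * NA / MSG_MONOHYDRATE

low = sodium_g_per_100ml(0.3, 0.3)    # ~0.155 g Na per 100 mL
high = sodium_g_per_100ml(0.9, 0.3)   # ~0.391 g Na per 100 mL
print(f"low: {low:.3f} g, high: {high:.3f} g, reduction: {1 - low / high:.0%}")
# -> a reduction of roughly 60%, consistent with the estimate cited above
```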
Our results showed that early childhood salt exposure did not affect the enhancement of palatability by MSG in low-salt solutions. Kobayashi et al. have shown that sensitivity to umami taste is largely dependent on familiarity with umami taste [21]. As the Japanese participants had extensive experience with MSG in Japanese food, they might have been sensitive to the effect of umami ingredients on palatability. We also assessed the influence of the degree of salty taste preference of individuals and families on the sensory evaluation results. Regarding palatability, the ratings of the 0.3% NaCl solution with MSG showed a peak in all groups regardless of the degree of salty taste preference, and no significant difference was found when comparing the peak ratings among the strong, middle, and light groups. Uechi et al. examined urinary sodium excretion among participants in 47 prefectures of Japan and reported no substantial regional variation within the country [32]. In 1980, there was a difference in salt intake of about 5 g/day between regional blocks in Japan, from the highest salt-consuming region (Tohoku: 15.8 g/day) to the lowest (Kinki-1: 10.9 g/day) (Table 1). However, in 2018, the difference was only 1.6 g/day (highest, Tohoku: 11.1 g/day; lowest, Hokkaido/Shikoku: 9.5 g/day), indicating that regional differences in salt intake are becoming smaller [33]. This may be due to the influence of social development and the westernization of the Japanese diet [32]. WHO member states have committed to reducing salt intake by 30% by 2025 [34]. Sustainable Development Goal 3 states that premature mortality from non-communicable diseases is to be reduced by one-third by 2030 [35]. Each country has set out policies for the food service industry and processed food manufacturers to reduce the salt content of their products. In the United Kingdom, the Salt Reduction Program has been led by the Department of Health since 2003; this program reduced salt intake from 9.5 g/day in 2000-2001 to 8.1 g/day in 2008 [36]. In 2018, the Spanish Agency for Consumer Affairs, Food and Nutrition released the Collaboration Plan for the Improvement of the Composition of Food and Beverages, which committed to reducing the salt content of various food categories [37]. In Japan, the Health Japan 21 (second term) program aims at curtailing salt intake to 8 g/day [38]; however, there is no policy for the food industry regarding salt reduction. Although average salt intake in Japan has decreased owing to medical and administrative population policy approaches, it has remained almost unchanged since 2015. The National Health and Nutrition Survey in 2019 investigated the intention to improve dietary habits, and more than 30% of the respondents answered that they did not intend to improve their dietary habits even if they were consuming over 8 g/day of salt [6]. In addition to a population approach, incorporating an environmental approach, such as working with the food industry to reduce the amount of sodium in its products, is also important in managing salt intake [39,40]. The Japanese Society of Hypertension has begun to work on an environmental approach to reducing salt intake, such as a certification for food items with a low salt content [41]. Moreover, 13 academic societies in Japan have formed a consortium, and a certification system for healthy and nutritional meal patterns (common name: Smart Meal) was launched in December 2019 in Japan.
Smart Meal aims at restricting salt intake to 3.0-3.5 g/meal [42], which is expected to bring environmental benefits as well. Wallace et al. estimated the effect of using glutamates to substitute part of the sodium in certain food groups in America and reported that doing so could have a modest effect on the salt intake of the whole American population [43]. Some studies on animal models reported that adding MSG was associated with higher energy intake and obesity, while clinical and epidemiological studies have been inconsistent regarding a relationship between MSG consumption and energy intake and obesity [44]: Masic et al. reported that adding MSG increased immediate appetite but reduced subsequent test meal intake [45], and He et al. reported that MSG consumption was positively associated with BMI [46]. The effect of MSG on appetite is currently unclear; even so, the benefit of salt reduction using MSG would be expected to outweigh any effect on appetite. Utilizing low-salt soups with umami, which works regardless of sex, age, salt exposure environment in early childhood, and current salty taste preference, will be useful in working toward an environmental approach to reducing salt intake.

This study had a few limitations. First, the participants of this study were registered students being trained as dietitians and health seminar participants and were hence likely to have been a highly health-conscious group, which might have influenced the results. In addition, the proportion of female participants was high, and the number of people in each of the 12 regional blocks was unequal. To further generalize the data, it will be necessary to survey populations with a wider range of characteristics. Second, in this study, the salt exposure environment in early childhood was defined as the average salt intake by regional block at birth in 1980, 1990, and 2000; however, the actual salt intake of each individual might have differed. Furthermore, the average salt intakes by regional block at birth in 1980, 1990, and 2000 were quoted from the results of the National Nutrition Survey, but the method for calculating sodium amounts differs between surveys [9]. When examining food composition data, it is desirable to select the most accurate intake-estimation method that current technology allows, which means that older data cannot be calculated in the same way as current intake; this is a drawback of long-term investigations. Finally, we applied the data for the year 1980 to participants born before 1980, because no data on salt intake at birth were available before 1980. The Japanese salt intake in 1980 was 12.9 g/day, while in 1975 it was 13.5 g/day [9]; therefore, the salt intake by regional block before 1980 was probably also higher than in 1980.

Conclusions

The effect of umami ingredients on palatability did not differ by average salt intake by regional block at birth or by salty taste preference. These findings suggest that adding an appropriate amount of umami ingredients can facilitate salt reduction while maintaining palatability, regardless of the early childhood salt exposure environment and current salty taste preference. If an environment is created in which umami is effectively utilized to reduce salt intake, it could be useful in the prevention and management of hypertension and might contribute to a reduction in the incidence of and mortality from cardiovascular disease as well.
FAIR PERFORMANCE APPRAISAL SYSTEM AND EMPLOYEE SATISFACTION: THE MEDIATING ROLE OF TRUST IN SUPERVISOR AND PERCEIVED ORGANIZATIONAL POLITICS

The purpose of this study is to test the relationship between the fair performance appraisal system (FPAS) dimensions and the employee satisfaction dimensions. Furthermore, this research also seeks to test the mediating effect of trust in supervisor and perceived organizational politics in the association between fair performance appraisal and employee satisfaction. Data from a total of 406 respondents were collected from the banking sector of Pakistan through a convenience sampling technique. Structural Equation Modelling (SEM) was employed to test the mediating effect of trust in supervisor and perceived organizational politics in the association between fair performance appraisal and employee satisfaction. The study reveals that trust in supervisor partially mediated the relationship between the fair performance appraisal dimensions and the employee satisfaction dimensions, whereas perceived organizational politics fully mediated it. Previous studies have tested the relationship of the performance appraisal system with employee satisfaction, whereas this research, according to the authors, is the first study to test trust in supervisor and perceived organizational politics in six different mediating models. The current study pioneers the examination of these mediating relationships between the fair performance appraisal system and employee satisfaction in the context of different appraisal techniques, appraisal processes, and implementation. During the performance appraisal evaluation process, organizations should monitor different aspects of employee satisfaction in order to foster a positive, fair appraisal system for employees. The study framework is also significant for practical testing in other contexts for comparison. The managerial implications are based on the present findings and aim to improve the appraisal system, or the way of evaluation, in the organization.

Introduction

In Pakistan, management practices are often restricted to the mere paper shuffling of administrative routine. Relatively large native or public companies are dominated by bureaucracy in their performance appraisal management systems (Danish et al., 2014). In such a setting, conventional practices are closely guarded, and performance appraisal strategies tend to be inexperienced or perfunctory. Few studies on this subject examine the effectiveness of the appraisal system in the Pakistani context (Anjum et al., 2018). Firstly, this research tries to analyse the relationship between the perception of the PA dimensions and employee satisfaction. Secondly, it examines the mediating role of trust and politics between the employee and his supervisor in this relationship. An overview of the political issues that exist in the performance appraisal system has been given by Spence and Keeping (2011). It has been observed that many employees who perform well are at times side-lined by those employees who have political affiliations or direct contacts with the senior management team. Employee satisfaction is influenced by many factors (e.g. Levy et al., 2017; Rajnoha and Lorincová, 2015). One of these factors is perceived organizational politics (Rahman et al., 2011); a second is trust in supervisor, particularly within a fair performance appraisal system.
Both perceived organizational politics and trust in supervisor can influence the level of employee satisfaction, and both should be managed and considered by the management of commercial banks in order to sustain overall organizational effectiveness. A study conducted by Saleem (2015) supports this: the influences of trust in supervisor and perceived organizational politics are prevalent in organizational systems, as employees are often not rewarded based on their performance but rather on their contacts, affiliation with the supervisor, and political connections. Over the last decades, researchers such as DeNisi and Murphy (2017) have asserted that appraisal reactions play a crucial role in the development of favorable job and organizational outcomes and enhance motivation to increase performance (Palaiologos et al., 2011). Of all the appraisal reactions, satisfaction has been the most frequently studied (Keeping and Levy, 2000). According to Lai Wan (2007), satisfaction is an important goal for organizations to reach, as it has been shown that profitability, productivity, employee retention, and customer satisfaction are linked to employees' satisfaction. Higher satisfaction is created through motivated employees, and it in turn positively influences organizational performance. For this reason, satisfaction with aspects of the appraisal process has been regarded as one of the most important reactions to PA (Jawahar and Liu, 2016). Besides, the appraisal system's effectiveness depends not only on its technical characteristics but also on the general organizational and administrative framework, as the PA system is not just a distributive activity but correlates with all the organization's other activities (Ye et al., 2012). Many theoretical models (Kurtessis et al., 2017) have examined perceived organizational politics in relation to outcomes or intentions to quit the job. This research looks towards a new paradigm regarding perceived organizational politics and employee satisfaction in a fair appraisal system. The investigation of employee satisfaction, perceived organizational politics, and trust in supervisors has become a contemporary topic in the context of management practices and organizational development, such as employees' performance evaluation (Jawahar and Liu, 2016). In other words, the field of management is highly concerned with the performance and satisfaction of employees, while little has been explored about the impact of fair appraisal practice, supervisor trust, and organizational politics. In this regard, it is evident that these improve institutional stability and sustainability. Enterprise sustainability in emerging economies has been enhanced by micro training and social capital (Haque et al., 2019). Hence, it is important to investigate fair performance appraisal, as it is also closely linked with enterprise sustainability. Pakistan has a unique banking history in the Asian region, having started from scratch at the time of Independence. After its emergence it faced many problems, but it currently has a broad banking sector operation. The number of scheduled banks operating in Pakistan was 33 by the end of June 2018, with a branch network of 13,692 branches according to State Bank of Pakistan records for 2018.
This research is based on 9 commercial banks (HBL, Bank Alfalah, Faysal Bank, Meezan Bank, Askari Bank, Muslim Commercial Bank, Al Habib Bank, Standard Chartered Bank and United Bank Limited (UBL)) operating in Karachi, Pakistan. Different appraisal techniques or methods are used to evaluate the performance of employees in these banks, such as the forced distribution (bell curve) method, the graphic rating scale method, traditional or modern approaches (goal setting), management by objectives, and periodical reviews. A fair performance appraisal process has become vital, as employees intend to be treated well and to be perceived fairly by their employers (Javed and Tariq, 2015). Such employees tend to be more committed, supported, effective, and efficient in performing their tasks (Javed and Tariq, 2015), which subsequently enhances their satisfaction. Nevertheless, there is no conclusive comparative evidence with which to investigate the research problem. Additionally, the 9 commercial banks considered in this study share two common features, namely perceived organizational politics and trust in supervisor, which have been evident in the banking sector irrespective of the method or technique used in the appraisal system. Hence, the present study aims to investigate the fair performance appraisal system dimensions in relation to the employee satisfaction dimensions, and the mediating role of trust in supervisor in parallel with perceived organizational politics, in the banking sector.

Literature Review

According to Asiabar et al. (2013), methods of rating employee performance to evaluate how well employees are working are called performance appraisal methods. Depending on the organization, they can be used to evaluate the performance of employees based on their skills, productivity level, effectiveness, quality, etc. For example, in the education system, students are evaluated based on their marks in quizzes, assignments, activities, projects, and examinations, and on this basis they are rewarded with a grade. Similarly, an organization can evaluate its employees through different appraisal methods and then reward them based on their ratings. This is applicable not only to large-scale organizations but also to factories, trading companies, and small firms. Employee satisfaction with PA plays a vital role in long-term efficiency. A hostile reaction toward the PA can affect the entire PA system even if it is built neutrally and impartially (Buchner, 2007; Corbie-Smith et al., 2011). Aureli and Salvatori (2012) hold that the neutrality of a PA system depends not only on the validity and reliability of the performance appraisal scale but also on the employees' reaction. However, employees' reactions to the performance appraisal have been given very little attention (Luthans et al., 2008). In line with other research, one reaction that is receiving attention among researchers is the employees' assessment of the PA (Boachie-Mensah and Dogbe, 2011). There are three divisions of satisfaction with PA. The first is satisfaction with ratings: higher ratings elicit positive reactions toward the appraisal and are associated with satisfaction with the appraisal method (Jordan and Jordan, 1993).
The second is satisfaction with the rater, reflecting the determinative role that supervisors have in ensuring positive outcomes, as they are primarily the employees' appraisers and supply feedback on their performance (Milkovich and Boudreau, 1997). The third is satisfaction with appraisal feedback; feedback is crucial owing to its potential influence on people's responses to ratings (Kluger and DeNisi, 1996). Reviewers argue that performance feedback increases job satisfaction and motivation, and many decision-making and career development models embody a feedback loop in which people learn on the basis of feedback on their performance. It has been assessed that there is a significant relationship between employee satisfaction and trust in the supervisor (Miao et al., 2013). In this way, it becomes apparent that there is a strong association between employees (team members) and supervisors, whose support is usually obtained from the senior management team of the organization. This also hints at organizational commitment, a psychological bond between workers, their employment, and the workplace (Haque and Aston, 2016; Haque et al., 2018). Nevertheless, it is also crucial to examine integrative effects on employee satisfaction. From an intra-organizational perspective, it is not currently clear whether the social relationship between employees and the supervisor affects the growth of the organization and the performance appraisal system. Moreover, the social setting of the organization is highly significant in this context, and thus the social support and social atmosphere created by team members have a critical influence on the performance of individual employees. Numerous studies have analyzed the four different types of justice without considering trust in supervisor as a mediator (Camerman et al., 2007). Distributive and procedural justice have been empirically shown to be connected to trust in the organization. In addition, interactional justice has been directly related to the notion of trust in supervisors. Studies have also shown that perceived supervisory support partially mediates between interactional justice and trust. However, researchers have neither separated interactional justice into its two components (interpersonal justice and informational justice) nor jointly examined distributive and procedural justice (Nassar and Zaitouni, 2015). According to the above discussion, hypotheses H1-H3 are generated, in which trust in supervisor (TIS) is proposed to mediate the relationships between the FPAS dimensions and the three employee satisfaction dimensions. Spence and Keeping (2011) have given an overview of the political issues that exist in the performance appraisal system. It has been observed that many employees who perform well are at times side-lined by those employees who have political affiliations or direct contacts with the senior management team. A study conducted by Saleem (2015) has also supported this statement: the influence of politics is common in organizational systems, as employees are often not given rewards based on their performance but rather on their contacts and political influence.
Although there is no direct evidence that perceptions of performance appraisal politics negatively affect job attitudes, many studies have found organizational politics to be a predictor of job satisfaction (Cropanzano et al., 1997; Ferris et al., 1992). Thus, the proposed model incorporates and extends a longitudinal mediational chain. The fair performance appraisal system dimensions (rater confidence, setting performance expectations, providing feedback, clarifying expectations, and treatment by supervisor) are expected to be negatively related to perceived organizational politics; politics, in turn, is predicted to have a negative effect on employee satisfaction through its adverse influence on satisfaction, for which there is subsidiary empirical evidence. There is substantial evidence that more negative work attitudes are related to lower satisfaction. Thus, the proposed model incorporates positive relationships of employee satisfaction with a performance appraisal dimension that captures both in-role and extra-role performance, negative relationships of the fair performance appraisal dimensions with the employee satisfaction dimensions (mediated through perceived organizational politics), and positive relationships of trust in supervisor with employee satisfaction via a mediational chain through politics and trust. According to the above statement, the following hypotheses are generated:

H4: Perceived organizational politics mediates the relationship between FPAS and reactions towards the last PMS rating.
H5: Perceived organizational politics mediates the relationship between FPAS and reactions towards supervisors.
H6: Perceived organizational politics mediates the relationship between the FPAS dimensions and reactions to the PMS.

Research Framework

The research framework is based on organizational justice theory. The dependent variable used in the study is the set of employee satisfaction dimensions, and the independent variable is the set of fair performance appraisal system dimensions, with two mediating variables, i.e. perceived organizational politics and trust in supervisor. Subsequently, organizational justice theory was reviewed along with the development of the theoretical and conceptual frameworks of the study. The literature reveals that fair performance appraisal-based satisfaction is, in effect, the satisfaction of employees with the performance appraisal system. In this regard, Giles and Mossholder (1990) and Levy and Williams (2004) have further stated that performance appraisal satisfaction has a significant impact on the productivity and efficiency of employees or human resources. Murphy and Cleveland (1991) have noted intensive analysis of the critical factors that contribute to performance appraisal satisfaction. However, there is still a lack of empirical studies identifying a significant and positive relationship between the performance appraisal system dimensions (setting performance expectations, rater confidence, clarifying expectations, providing feedback, and treatment by supervisor) and the employee satisfaction dimensions (reaction towards the last PM rating (RL), reaction towards the supervisor (RS), and reaction towards the PMS (RTP)).
For this reason, the present research investigates how trust in supervisor and perceived organizational politics mediate between the employee satisfaction dimensions and the perceived fairness of the performance appraisal system dimensions (setting performance expectations, rater confidence, clarifying expectations, providing feedback, and treatment by supervisor). The conceptualized model is presented in Figure 1.

In order to test the mediating roles of trust in supervisor and perceived organizational politics with employee satisfaction, data were collected from the banking sector of Karachi. A convenience sampling technique was used, and data were collected only from employees who had been working for a particular organization for at least a year. This is because employees need some time to understand the HR policies and the work, and they usually form their perceptions after some time; similarly, employee satisfaction with the appraisal system also develops over time. Structural equation modelling was used as the statistical technique. Since there were three dependent variables and two mediators in the model, six different structural models were employed to test the mediating effect of one mediator on one dependent variable at a time in their relationship with the multiple independent variables.

Results, Data Analysis and Discussion

Results of the analysis show that trust in supervisor completely mediates the relationship between rater confidence and the reaction towards the last performance rating. It partially mediates the relationships of setting performance expectations, treatment by supervisor, and providing feedback with the reaction towards the last performance rating. Perceived organizational politics completely mediates the relationship between clarifying expectations and the reaction towards the last performance rating, and partially mediates the relationships of rater confidence, setting performance expectations, treatment by supervisor, and providing feedback with the reaction towards the last performance rating. The six separate mediating models are examined because it is not possible to simultaneously test the individual mediating hypotheses by examining the overall model presented in Figure 1 (Kline, 2005). See Table 2 for the overall mediating results of the six models; each hypothesis's results are discussed there. For the relationships between the FPAS dimensions and the RS, TIS partially mediates for rater confidence, setting performance expectations, providing feedback, and treatment by supervisor, whereas TIS does not mediate the relationship between clarifying expectations and the RS. Hypothesis 1 collectively explores the mediation relationships between the FPAS dimensions and the RTP. From the above results, it is found that TIS partially mediates the relationships of three out of five dimensions (rater confidence, setting performance expectations, and providing feedback) with the RTP, whereas TIS does not mediate the relationship between clarifying expectations and the RTP. Full mediation is observed between treatment by supervisor and the RTP.
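The partial versus full mediation logic behind these models can be illustrated with a simplified regression-based check (the classic Baron and Kenny steps plus a Sobel test). This is only a sketch on simulated data with hypothetical variable names, not the structural equation models actually estimated in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 406  # sample size matching the study
fpas = rng.normal(size=n)                                # an FPAS dimension score
trust = 0.5 * fpas + rng.normal(size=n)                  # mediator: trust in supervisor
satisf = 0.3 * fpas + 0.4 * trust + rng.normal(size=n)   # a satisfaction dimension
df = pd.DataFrame({"fpas": fpas, "trust": trust, "satisf": satisf})

total = smf.ols("satisf ~ fpas", df).fit()           # path c: total effect
a_path = smf.ols("trust ~ fpas", df).fit()           # path a: IV -> mediator
full = smf.ols("satisf ~ fpas + trust", df).fit()    # paths c' and b

a, b = a_path.params["fpas"], full.params["trust"]
sa, sb = a_path.bse["fpas"], full.bse["trust"]
sobel_z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)  # Sobel test statistic
p = 2 * stats.norm.sf(abs(sobel_z))

print(f"total c = {total.params['fpas']:.2f}, direct c' = {full.params['fpas']:.2f}")
print(f"indirect a*b = {a * b:.2f}, Sobel z = {sobel_z:.2f}, p = {p:.4f}")
# Partial mediation: c' stays significant but smaller than c;
# full mediation: c' becomes non-significant while a*b remains significant.
```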
Hypothesis 4 collectively reveals the mediating relationships between the FPAS dimensions and the RL. From the results, it is found that perceived organizational politics (POP) partially mediates the relationships of four out of five dimensions (rater confidence, setting performance expectations, providing feedback, and treatment by supervisor) with the RL, whereas POP fully mediates the relationship between clarifying expectations and the RL. Hypothesis 5 collectively tests the mediating relationships between the FPAS dimensions and the RS. It is found that POP partially mediates the relationships of four out of five dimensions (rater confidence, setting performance expectations, providing feedback, and treatment by supervisor) with the RS, whereas POP fully mediates the relationship between clarifying expectations and the RS. Hypothesis 6 collectively tests the mediating relationships between the FPAS dimensions and the RTP. It is found that POP partially mediates the relationships of two out of five dimensions (rater confidence and providing feedback) with the RTP, whereas POP fully mediates the relationships of setting performance expectations, clarifying expectations, and treatment by supervisor with the RTP.

Conclusion and Managerial Implications

This research targeted 9 banks, and it is concluded that the satisfaction of employees with the fair performance appraisal system, or with any other issue, varies from branch to branch. It has been observed that if employees in any one branch are dissatisfied with the fair performance appraisal process, all other branches show a similar response. Satisfaction of employees is necessary, but a fair performance appraisal system also plays a vital role in the success of the organization and its employees. If the performance appraisal evaluation of employees is not accurate, or is biased, it will demotivate them. Performance appraisal should not be confidential: employees should be informed about their mistakes and their positive points, so that they can correct the former and be encouraged by knowing the latter. It is concluded that employees' response towards the appraisal of performance is positive. This validates the proposed framework's potential for use as a management instrument. The conclusion is grounded in the results for the five dimensions measuring the fairness of the appraisal evaluation and the three reaction dimensions used to check the level of satisfaction with the appraisal system. This study has made a significant contribution to the emerging body of research on fair performance appraisal systems. Firstly, it confirms the effect that the FPAS has on employee satisfaction, namely the reactions towards the last PMS performance rating, the reactions towards the supervisor, and the reactions to the PMS. It has also taken into consideration new variables, perceived organizational politics and trust in supervisor, as mediators. The objective is to develop an in-depth understanding of the fair performance appraisal system for employees and managers in organizations through the creation of a new model adapted from Greenberg (2011). Secondly, this research has determined that the antecedents in the model are relevant in the context of Pakistani organizations for understanding and developing a fair appraisal system for employees and managers.
Thirdly, the measures developed on the basis of the new model can be used to change the perception of employees and employers towards the performance appraisal system. The results establish that a fair performance appraisal system is associated with employee satisfaction not only directly but also indirectly, through perceived organizational politics and trust in supervisor. These outcomes are consistent with the prediction of organizational justice theory, which holds that fair performance appraisal systems encourage positive attitudes that, in turn, ultimately affect employee satisfaction through trust in supervisor and perceived organizational politics. The outcome of this research depends upon the dimensions of employee satisfaction with the different dimensions of the fair appraisal system. The scale of employee satisfaction has been considered an important input for evaluating a fair appraisal system, and measuring a fair appraisal system has been proposed to involve various components. Keeping and Levy (2000) suggest that employee satisfaction with fair appraisal has been considered the most important factor by many researchers, as acceptance depends on this variable, and organizations will also apply a fair system in accordance with the validity of the appraisal itself (Jawahar and Liu, 2015). The study of Ibeogu and Ozturen (2015) indicates that feelings of unfairness and dissatisfaction in the appraisal process, and perceptions of inequity in evaluation, may doom the appraisal system to failure. DeNisi and Murphy (2017) also emphasize that employee satisfaction responses are always important, and an adverse response may defeat even the most carefully built fair appraisal system. In developing new appraisal processes and procedures for organizations, employee feedback and satisfaction play a vital role in future amendments (Kim and Holzer, 2016). Accordingly, this study strongly recommends that employee feedback and the level of satisfaction with the fair appraisal system be used by different organizations for assessment purposes. While employee satisfaction has been the most useful scale for appraisal feedback, the study of Imran et al. (2018) states that it has been defined in different ways that are frequently confounded and made inconsistent by the many additional theories involving the variable (Keeping and Levy, 2000). In Pakistan, the management system is restricted to older versions of HR practices, and every employee is rewarded equally regardless of performance. Most organizations, whether public or private, are not focusing on employee productivity and skill level; they are concerned only with the work that has to be done. This reflects the bureaucratic nature of management there (Danish et al., 2014). In Pakistan's context, very limited research has taken place on the effectiveness of the appraisal system, and as a result fair performance systems are confined to very few organizations in Pakistan. Most organizations have no idea about the importance of the appraisal system; thus productivity in these organizations is minimal as well (Ducharme et al., 2005). Future studies, however, should replicate these findings in different settings in order to compare results from different job contexts.
Regarding the impact of fair performance appraisal on employee satisfaction, the findings of this study reveal that the five dimensions (rater confidence, setting performance expectations, treatment by supervisor, providing feedback, and clarifying expectations) are directly or indirectly associated with the reactions towards the supervisor and the PMS. In accordance with Spence and Keeping (2011), the results show that rater confidence, providing feedback, and setting performance expectations are significantly related to the reaction towards the last performance rating; finally, clarifying expectations and treatment by the supervisor are related to the reaction towards the PMS. Furthermore, in line with Dello et al. (2017), it is found that fair performance appraisal system dimensions such as rater confidence, setting performance expectations, and providing feedback are significant determinants of employee satisfaction. Similarly to what happened with perceived organizational politics, employee satisfaction mediates the relationship between the fair performance appraisal dimensions and the reaction towards the last PM rating. Thus, it is likely that individuals scoring high in both rater confidence and setting performance expectations easily develop a feeling of satisfaction with the work itself and, consequently, develop an affective satisfaction with the organization. The result regarding rater confidence is consistent with previous findings indicating a positive reaction related to the appraisal system and employee satisfaction (Jundt et al., 2015). As Lau et al. (2017) suggest, organizational politics may affect satisfaction, but culture is also a factor affecting the fair performance appraisal process. The results of the present study suggest that managers of organizational departments should consider the fact that different aspects of the supervisor's role may play a key part in the development of employee satisfaction during the early careers of first-line officers. Thus, concerning human resource practices, managers should design a clear channel of communication to make employees aware of the performance appraisal process, which might prevent employees at every level from developing a sense of politics in the organization. Banking organizations, as well as other public organizations, could handle the performance appraisal process by increasing the availability of interaction with supervisors and improving their ability to change the organizational culture. This would mean introducing human resource practices that ensure, for example, that employees have realistic expectations (i.e. the information they receive before appraisal is accurate and comprehensive), that those expectations are met, and that junior employees feel satisfied and supported by the firm. The present research has various limitations, such as testing only the mediating effect of overall trust in supervisor and perceived organizational politics between the perceived fair appraisal system and employee satisfaction. In future, the nine items of the perceived organizational politics dimensions should be measured to view the individual effects of mediation. Various other factors, including employee performance, employee involvement, employee satisfaction, and motivation, can be examined through the perceived fair appraisal system. Moreover, the present research focuses on the banking sector.
In future, it can be replicated in other industries, such as the education, industrial, and telecom sectors.
High-Throughput LC-ESI-MS/MS Metabolomics Approach Reveals Regulation of Metabolites Related to Diverse Functions in Mature Fruit of Grafted Watermelon

Grafting has been reported as a factor regulating the metabolome of a plant. Therefore, comprehensive metabolic profiling and a comparative analysis of metabolites were conducted on fully mature fruit of pumpkin-grafted watermelon (PGW) and self-rooted watermelon (SRW). A widely targeted LC-ESI-MS/MS metabolomics approach facilitated the simultaneous identification and quantification of 339 metabolites across PGW and SRW. Regardless of grafting, delta-aminolevulinic acid hydrochloride, sucrose, mannose-6-phosphate (carbohydrates); homocystine, 2-phenylglycine, s-adenosyl-L-homocysteine (amino acids and derivatives); malic acid, azelaic acid, H-butanoic acid ethyl ester-hexoside isomer 1 (organic acids); MAG (18:3) isomer 1, LysoPC 16:0, LysoPC 18:2 2n isomer (lipids); and p-coumaric acid, piperidine, and salicylic acid-o-glycoside (secondary metabolites) were among the dominant metabolites. Dulcitol and mono- and disaccharide sugars were higher in PGW, while polysaccharides showed complex behavior. In PGW, most aromatic and nitrogen-rich amino acids accumulated to more than 1.5- and 1-fold, respectively. Intermediates of the tricarboxylic acid (TCA) cycle, stress-related metabolites, vitamin B5, and several flavonoids were significantly more abundant in PGW. Most lipids were also significantly higher in grafted watermelon. This is the first report providing a comprehensive picture of the watermelon metabolic profile and the changes induced by grafting. Hence, the untargeted high-throughput LC-ESI-MS/MS metabolomics approach is suitable for revealing significant differences in metabolite contents between grafted and ungrafted plants.

Introduction

Watermelon (Citrullus lanatus L.) is one of the most economically important horticultural crops, with 83.7% of its production in Asia, and it accounts for 9.5% of total vegetable production globally. According to 2017 data, China is the leading producer of watermelon, with more than half of the world's production, and around 20% of its crop comes from grafted plants (http://faostat.fao.org accessed on 15 December 2019). Watermelon is a rich source of metabolites such as vitamins, minerals, fiber, antioxidants, organic acids, sugars, and amino acids [1][2][3]. Plant primary and secondary metabolites play an integral role in regulating the plant's growth and development, the pigmentation of flowers and fruits, flavor, defense mechanisms against diseases, and tolerance to unfavorable environmental conditions [4,5]. Secondary metabolites are distributed widely among plants and are involved in multiple biological functions, including plant protection against UV light, pathogen attack, and male sterility [6]. In plants, amino acids, nucleotides, fatty acids, and carbohydrates are among the primary metabolites whose levels at maturity affect primary metabolism [40]. In contrast, grafting in grapes increased the amino acid contents of valine, glutamine, arginine, isoleucine, serine, threonine, and leucine [41]. Cucurbita rootstocks in watermelon induced transcriptional reprogramming of citrulline genes, thus influencing watermelon's citrulline contents [42]. Previous studies mainly focused on the effects of grafting on major sugars, a few metabolites, pH, titratable acidity, volatile compounds, survival rates, yields, and disease resistance.
To date, no study has reported comprehensive metabolite profiling and a comparative analysis of metabolites from fully mature fruit of grafted and ungrafted watermelon. In the present study, the watermelon scion Zhongyu No. 1 was grafted onto the pumpkin rootstock Xi Jia Qiang Sheng (Cucurbita sp.), and ungrafted (self-rooted) watermelon was used as the control.

Materials and Methods

The plants were grown in a plastic greenhouse from March to July 2018 in Xinxiang city, China. Watermelon (C. lanatus (Thunb.) Matsum. and Nakai var. lanatus, cultivar Zhongyu No. 1), a diploid mini watermelon characterized by green skin with dark green stripes and yellow flesh, was grafted onto the pumpkin Xi Jia Qiang Sheng (Cucurbita sp.); watermelon that was not grafted served as the control. All plant materials were obtained from the Laboratory of Polyploidy Watermelon Breeding, Zhengzhou Fruit Research Institute, Chinese Academy of Agricultural Sciences. Seeds were sown in plastic trays (32-cell trays) containing peat moss, and top-insertion grafting was carried out with 15-day-old scions according to a previously reported method [20]. To obtain stems of matching diameter, scion seeds were sown five days before rootstock seeds. Plants were moved to the plastic greenhouse on the appearance of the third true leaf. Rows were 150 cm apart, and plant-to-plant distance was maintained at 50 cm. A randomized complete block design with three replications was used. Each plot consisted of 40 plants in a single row. Plants were trained to a single stem by clipping off side branches and were supported with rope. Only one fruit was allowed to develop on each plant. Standard field management procedures, such as pest and disease control, weeding, fertilizer application, and irrigation, were implemented during the growing season. At the onset of flowering, the female flowers were manually pollinated on the same day, and tagging was performed to record the number of days after pollination (DAP). Three healthy watermelon fruits, uniform in shape and size, from three independent plants at the mature stage (34 days after pollination) in each treatment were harvested (Figure S1). Watermelons were harvested early in the morning before sunrise and transported to the laboratory on ice. In the laboratory, watermelons were cut longitudinally into two pieces using a sterilized knife. In total, six fruit flesh samples were collected from the heart area (center) of the watermelons, promptly frozen in liquid nitrogen, and stored at −80 °C for the extraction of metabolites.

Chemicals

All solvents, such as methanol and acetonitrile (Merck, Darmstadt, Germany), were HPLC-grade. Double-deionized water obtained with a Milli-Q ULTRA purification system (Millipore, Vimodrone, Italy) was used for solutions. All original standards were acquired from Sigma-Aldrich, St. Louis, MO, USA (www.sigmaaldrich.com/united-states.html accessed on 13 August 2018). Methanol and deionized water were used as the respective solvents for preparing stock solutions, which were kept at −20 °C for downstream analysis.

Analytical Conditions of LC-MS/MS

Analyses of the sample extracts were carried out using a liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS) system (HPLC, Shim-pack UFLC SHI-). The ESI source operating conditions were set as follows: ion source, turbo spray; source temperature, 500 °C; ion-spray voltage (IS), 5500 V; ion source gas I (GSI), gas II (GSII), and curtain gas were set at 65, 60, and 25.0 psi, respectively; the collision gas was set to high.
Instrument tuning and mass calibration were performed with 10 and 100 µmol/L polypropylene glycol solutions in QQQ and LIT modes, respectively. QQQ scans were acquired as MRM experiments with the collision gas (nitrogen) set to 5 psi. The declustering potential (DP) and collision energy (CE) for individual MRM transitions were determined with further DP and CE optimization. During the elution of metabolites within a specific period, the MRM transitions corresponding to each metabolite were monitored within that period [50]. A quality control check was executed as an effective measure to ensure data reliability. For this purpose, a QC sample was prepared from a mixture of sample extracts and inserted after every two samples to monitor the repeatability of the analyses. Qualitative and quantitative analyses of metabolites were undertaken using the methods of [51]. The qualitative analysis of primary and secondary mass spectrometry data was performed based on the self-built database MWDB (Metware Biotechnology Co., Ltd., Wuhan, China) and publicly available metabolite databases. Interference from isotope signals, repeated signals of K+, Na+, and NH4+ adduct ions, and fragment ions derived from other, larger molecules was eliminated during identification. Metabolite structures were analyzed by referencing existing mass spectrometry databases such as MassBank (http://www.massbank.jp accessed on 7 January 2019), KNApSAcK (http://kanaya.naist.jp/KNApSAcK accessed on 8 January 2019), and METLIN (http://metlin.scripps.edu/index.php accessed on 8 January 2019).

Statistical Analysis

Important metabolic pathways were drawn using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database as a reference (https://www.genome.jp/kegg/kegg1.html accessed on 4 November 2019). Pathway illustrations and bar graphs were created using Excel 2007. Analysis of variance was performed and tested for statistical significance using Statistix 8.1 (Analytical Software, Tallahassee, FL, USA). Tukey's honest significant difference test was used for comparisons among treatment means at p < 0.05. Fold change represents the ratio of the means of each metabolite in the two treatments and was calculated as PGW/SRW.

Carbon Metabolism

Carbohydrates are metabolized during glycolysis and the tricarboxylic acid (TCA) cycle to provide a carbon skeleton for other metabolites. Glucose is converted into pyruvate through a series of enzymatic reactions in the glycolysis pathway; pyruvate further produces either acetyl-CoA or OAA through oxidation or carboxylation, respectively. The TCA cycle starts with the condensation of acetyl-CoA and OAA to yield citrate, followed by conversion of citrate into iso-citrate by the aconitase enzyme. In the next step, iso-citrate is dehydrogenated into α-ketoglutarate, which is further converted into succinyl-CoA by the α-ketoglutarate dehydrogenase enzyme. Oxaloacetate is produced from succinyl-CoA via the synthesis of succinate, fumarate, and malate [52]. Clear differences among the metabolites of the gluconeogenesis pathway, sugar metabolism, and the tricarboxylic acid (TCA) cycle were observed on comparing mature fruit of PGW and SRW (Figure 2). Glucose-6-phosphate, an intermediate of the gluconeogenesis pathway, was significantly (p < 0.05) greater, by 1.75-fold, in PGW. The relative contents of trehalose and sucrose were 1.03- and 1.82-fold higher in PGW, respectively, but the differences were not statistically significant.
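To make the fold-change convention of the Statistical Analysis section concrete (fold change = PGW/SRW ratio of treatment means, tested across three biological replicates), here is a minimal sketch; the replicate intensities are illustrative placeholders rather than measured peak areas, and with only two treatments the one-way ANOVA reduces to a two-sample comparison.

```python
import numpy as np
from scipy.stats import f_oneway

# Illustrative replicate intensities (three biological replicates each);
# real values would be the normalized LC-ESI-MS/MS peak areas.
pgw = np.array([1.82e6, 1.95e6, 1.76e6])  # pumpkin-grafted watermelon
srw = np.array([1.01e6, 0.95e6, 1.08e6])  # self-rooted watermelon

fold_change = pgw.mean() / srw.mean()     # PGW/SRW, as defined in the text
f_stat, p_value = f_oneway(pgw, srw)      # one-way ANOVA across treatments

print(f"fold change = {fold_change:.2f}, p = {p_value:.4f}")
```

For designs with more than two treatment groups, Tukey's HSD (e.g., statsmodels' pairwise_tukeyhsd) would follow the ANOVA, as the Methods describe.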
Glucose, an essential sugar of watermelon, was almost identical in PGW and SRW fruits. Intermediates of the tricarboxylic acid cycle such as citrate, cis-aconitate, α-ketoglutaric acid, succinate, malic acid, and fumaric acid changed in response to grafting. Malic and fumaric acid showed 1.40-fold (p < 0.05) and 1.32-fold (p > 0.05) increases in PGW, while the relative abundances of citrate and succinate were higher in SRW (fold changes of 0.75 and 0.88). Notably, α-ketoglutaric acid displayed a 2.51-fold increase (p < 0.05) in PGW, while the relative content of cis-aconitate was 1.87-fold (p > 0.05) greater in PGW.

Figure 2. The Y-axis shows changes in relative contents of metabolites associated with carbon metabolism between mature fruit of pumpkin-grafted watermelon (PGW) and self-rooted watermelon (SRW). Red or green color shows the higher content of metabolites in PGW or SRW, respectively. Vertical bars represent standard error among three independent replicates. Values are the mean ± SE of three replicates. Different lowercase letters indicate significant differences at p < 0.05. M6P, mannose-6-phosphate; G6P, glucose-6-phosphate.

Carbohydrate Metabolism

Among the carbohydrates, threose and delta-aminolevulinic acid hydrochloride exhibited 2.63- and 1.21-fold higher contents (p < 0.05) in mature fruit of PGW. In contrast, galacturonic acid (gala), galacturonic acid, and glucopyranuronate showed significant (p < 0.05) declines as a consequence of grafting, with fold changes of 0.17, 0.15, and 0.16 in PGW, respectively. Polyols are a distinct group of carbohydrates with roles in fruit quality and texture. Fruits from the grafted plants contained a significantly (p < 0.05) higher amount of dulcitol, but the relative levels of inositol and sorbitol were higher (p > 0.05) in ungrafted watermelon (Figure 3).

Arginine Metabolism

Nitrogen is assimilated during the arginine pathway through a series of enzymatic reactions that convert glutamate into citrulline. In the last step, citrulline changes into arginine, which is required for development and under stress conditions [53]. Nitrogen is recycled and distributed in the plant via arginine metabolism. Organic nitrogen is stored and transported in plants in the form of arginine and citrulline because of their high nitrogen-to-carbon ratio [54]. The biosynthesis of citrulline starts with the acetylation of glutamate [55], followed by phosphorylation, reduction, and transamination, yielding ornithine in a cyclic pathway involving several enzymes. Subsequently, citrulline and arginine are synthesized from ornithine in a linear pathway by several coordinated enzymatic reactions [55][56][57]. Several nitrogen-rich amino acids of the arginine pathway, such as ornithine, citrulline, and asparagine, increased by 1.09-, 1.09-, and 1.03-fold in PGW, respectively, while glutamate, glutamine, arginine, and N-acetyl glutamate decreased, with fold changes of 0.66, 0.65, 0.94, and 0.76, respectively (Figure 4). Interestingly, the grafting effect was only significant for glutamate.

Figure 4. The Y-axis shows the differences in relative contents of amino acids associated with the citrulline metabolic pathway between mature fruit of pumpkin-grafted watermelon (PGW) and self-rooted watermelon (SRW). Red or green color shows the higher content of amino acids in PGW or SRW, respectively. Vertical bars represent standard error among three independent replicates. Values are the mean ± SE of three replicates. Different lowercase letters indicate significant differences at p < 0.05.
Amino Acid Metabolism

In total, 51 amino acids and amino acid derivatives were detected, of which 21 amino acids and derivatives showed significantly higher levels in either PGW or SRW. Grafting significantly elevated the contents of 18 amino acids and their derivatives (Supplementary Materials). Aromatic amino acids and their derivatives, including phenylalanine, tryptophan, tyrosine, N-hydroxy-L-tryptophan, N-acetyl-L-tyrosine, and hexose-phenylalanine, displayed significantly (p < 0.05) greater abundance in PGW. Alanine and gamma-aminobutyric acid, which are involved in the GABA shunt pathway, were more abundant in SRW (fold change of 0.80) (Supplementary Materials). Several other amino acids, namely homoglutamic acid, cysteine, leucine, histidine, tryptophan, and valine, exhibited significantly greater abundance in PGW. Methionine, a sulfur-containing amino acid, increased significantly (p < 0.05) by 1.64-fold, whereas the content of threonine decreased significantly (p < 0.05), with a fold change of 0.5, in PGW (Figure 5).

Figure 5. The Y-axis shows the differences in relative contents of amino acids between mature fruit of pumpkin-grafted watermelon (PGW) and self-rooted watermelon (SRW). Vertical bars represent standard error among three independent replicates. Data are the mean ± SE of three replicates. Different lowercase letters indicate significant differences at p < 0.05.

Glutathione Metabolism

GSH is synthesized from glutamate, cysteine, and glycine by the action of two enzymes, γ-glutamylcysteine synthetase (γ-ECS) and glutathione synthase (GSHS). The γ-carboxyl group of Glu and the α-amino group of Cys are combined in the first step to form γ-glutamylcysteine by the γ-ECS enzyme. In the next step, the GSHS enzyme creates an amide bond between the α-carboxyl group of the cysteine moiety in γ-glutamylcysteine and the α-amino group of glycine to yield GSH. This reduced GSH is then converted into the oxidized form (GSSG) under particular circumstances, such as stress or other stimuli. GSH can be consumed directly, or converted into other forms, in further chemical reactions. In the following step, cys-glyc is released, and GSH combines with a free amino acid, forming a γ-glutamyl amino acid. With the removal of the amino acid from the γ-glutamyl amino acid, 5-oxoproline is produced, and the cycle continues [58,59]. Several intermediates were detected in the metabolic pathway of glutathione and showed obvious differences between PGW and SRW. All the intermediates of this pathway accumulated in greater abundance in PGW than in SRW, including oxidized glutathione (GSSG), reduced glutathione (GSH), 5-oxoproline, cysteine, and γ-glutamyl-AA. Specifically, oxidized glutathione (GSSG) and cysteine were 1.37- and 1.26-fold (p < 0.05) greater in PGW than in SRW. One intermediate, glutamate, was found to be significantly higher in SRW, while cys-glyc was stable in PGW and SRW (Figure 6).

Figure 6. The Y-axis shows the differences among the relative contents of intermediates of the glutathione metabolic pathway between mature fruit of pumpkin-grafted watermelon (PGW) and self-rooted watermelon (SRW). Red or green color shows the higher content of metabolites in mature fruit of PGW or SRW, respectively. Vertical bars represent standard error among three independent replicates. Data are the mean ± SE of three replicates. Different lowercase letters indicate significant differences at p < 0.05.
Phenylpropanoid Metabolism

Phenylpropanoids constitute a large group of secondary metabolites that play a central role against stresses, provide structural support to plants, and are implicated in plant survival. A range of metabolites such as flavonoids, coumarins, hydroxycinnamic acid conjugates, and lignans are produced in the metabolic pathway of phenylpropanoids. The pathway begins with phenylalanine, which undergoes a series of reactions to produce lignans, hydroxycinnamics, ferulic and sinapic acids and their corresponding esters, benzoic acid derivatives, and pigments such as chalcones, flavonoids, and anthocyanins [7,60]. Some secondary metabolites produced in the phenylpropanoid pathway, such as phenols, flavonoids, flavanones, and flavonols, showed only slight differences between PGW and SRW, with the exception of p-coumaric acid. Two flavanones (naringenin chalcone, neohesperidin), one amino acid (phenylalanine), two hydroxycinnamoyl derivatives (ferulic acid, coniferyl alcohol), two flavones (luteolin 7-O-glucoside and cosmosin), one quinate derivative (chlorogenic acid), and one flavonol (nicotiflorin) were present in higher abundance in PGW compared to SRW (Figure 7). In addition, some phenylpropanoid compounds, such as cinnamic acid, p-coumaric acid, p-coumaraldehyde, coniferylaldehyde, sinapyl alcohol, flavone-C-glycosides, vitexin, isovitexin, flavones, and chrysoeriol, showed substantially higher abundances in SRW than in PGW.

Figure 7. The Y-axis shows the differences among the relative contents of intermediates of the phenylpropanoid pathway between mature fruit of pumpkin-grafted watermelon (PGW) and self-rooted watermelon (SRW). Red or green color shows the higher content of metabolites in PGW or SRW, respectively. Vertical bars represent standard error among three independent replicates. Data are the mean ± SE of three replicates. Different lowercase letters indicate significant differences at p < 0.05.

However, a few hydroxycinnamoyl derivatives showed significant variation in PGW, of which one derivative was higher and three had lower contents in PGW. Two flavone-C-glucosides and benzoid derivatives were more abundant (p < 0.05) in PGW compared to SRW. Grafting caused a significant decline (p < 0.05) in the content of one flavone and two polyphenols (Supplementary Materials).

Linolenic Acid Metabolism and Fatty Acids

Linolenic acid is produced from fatty acid metabolism and is further catabolized into diverse classes of compounds, including ketones, alcohols, volatiles, esters, aroma compounds, aldehydes, alkenes, acids, and other compounds [61,62]. The first step is the oxidation of linolenic acid, followed by dehydration of 13-hydroperoxy-octadecatrienoic acid, carried out by lipoxygenases (LOXs) and allene oxide synthase, respectively [63]. The unstable epoxide generated in the above step is then cyclized stereospecifically and converted into 12-oxo-phytodienoic acid (OPDA) by allene oxide cyclase (AOC), followed by reduction and removal of the side carbon chain in a series of reactions that lead to the formation of jasmonic acid [64,65]. Lipids measured in this study belong to the fatty acid, lipid, glycerolipid, and glycerophospholipid classes; most of the lipids accumulated in higher amounts in mature fruit of PGW than SRW. Most of the linolenic pathway intermediates showed greater abundance in PGW than SRW, with the exception of 13-HpOTrE(r) (Figure 8).
Figure 8. The Y-axis shows the differences among the relative contents of metabolites linked with the linolenic acid pathway in mature fruit of pumpkin-grafted watermelon (PGW) and self-rooted watermelon (SRW). Red or green color shows the higher content of metabolites in mature fruit of PGW or SRW, respectively. Vertical bars represent standard error among three independent replicates. Data are the mean ± SE of three replicates. Different lowercase letters indicate significant differences at p < 0.05.

Figure 9. The Y-axis shows the differences in relative contents of lipids between pumpkin-grafted watermelon (PGW) and self-rooted watermelon (SRW). Vertical bars represent standard error among three independent replicates. Data are the mean ± SE of three replicates. Different lowercase letters indicate significant differences at p < 0.05.

Secondary Metabolites

Alkaloid profiles showed obvious differences between the mature fruit of PGW and SRW. A total of seven alkaloids were detected, of which the alkaloids betaine and 4beta-hydroxy-11-O-(2′-pyrrolylcarboxy)epilupinine showed significant increases of 2.32- and 2.01-fold, while theobromine exhibited a significant decline (fold change of 0.78) in PGW. Most of the hormones detected showed greater abundances in PGW. Among the hormones, trans-zeatin-O-glucoside was significantly more abundant (1.27-fold) in PGW, while trans-zeatin riboside and ABA were significantly more abundant in SRW (fold changes of 0.82 and 0.72, respectively). Similarly, all the vitamins were relatively higher in PGW, although only one vitamin, D-pantothenic acid (vitamin B5), was significantly higher, by 2.35-fold, in PGW. Metabolites classified as "others" showed striking differences between PGW and SRW: seven metabolites accumulated in significantly higher amounts, whereas five declined in response to grafting (Supplementary Materials).

Discussion

In this study, we compared the metabolomic profiles of fully mature fruit of PGW and SRW. This study provides comprehensive metabolite profiles and graft-induced changes in the mature fruit of PGW and SRW, and it is the first report on large-scale metabolic profiling in grafted and self-rooted watermelon. LC-MS, as a robust tool coupled with the MIM-EPI system, is capable of identifying a large number of metabolites from one sample [66]. We identified 339 metabolites from the fully mature watermelon fruit of PGW and SRW. Regardless of grafting, delta-aminolevulinic acid hydrochloride, sucrose, and mannose-6-phosphate were the dominant carbohydrates. Previously, glucose, fructose, and sucrose were regarded as the major sugars; the disparity in results might be due to the limited number of carbohydrates covered in the previous studies [9]. Carbohydrates play a key role in regulating fruit quality and taste, plant growth, energy metabolism, and tolerance against biotic and abiotic stresses [67]. In grafted watermelon, carbohydrates including threose, delta-aminolevulinic acid hydrochloride, glucose-6-phosphate, and dulcitol significantly increased, while the contents of galacturonic acid (gala), galacturonic acid, glucopyranuronate, inositol, and sorbitol significantly declined. Sugars such as sucrose, glucose, and trehalose showed non-significant changes. Previous studies have documented the insignificant effect of similar rootstocks on glucose and sucrose [29,55,56]. In watermelon, citric and malic acids are considered the primary organic acids [11]. Our results suggest that malic acid, azelaic acid, and H-butanoic acid ethyl ester-hexoside isomer 1 are the dominant acids.
However, differences in the results may be attributed to different varieties, developmental stages, analytical techniques, and environmental factors. Most importantly, earlier studies were limited in their scope and in the number of metabolites measured. The TCA cycle generates energy as ATP, and the biosynthesis of several metabolites is directly or indirectly driven by the TCA cycle. Intermediates of the TCA cycle such as α-ketoglutaric acid, malic acid, and cis-aconitate displayed higher contents in grafted watermelon fruit, whereas citrate and succinate contents were decreased. The higher malic acid and lower citric acid contents in grafted watermelon were in agreement with previous studies [43]. Amino acids are involved in a multitude of plant physiological events besides their role in protein synthesis. They play essential roles in plant growth and development, providing redox power, resistance against stresses, and regulation of intracellular pH [68][69][70][71][72][73][74]. Several amino acids also serve as precursors for some secondary metabolites, e.g., glucosinolates [75]. Most of the amino acids accumulated at elevated levels in PGW, while a few decreased. Specifically, amino acids of the arginine pathway such as ornithine and citrulline were found in greater amounts (p < 0.05). In contrast, arginine was lower in abundance, indicating reduced degradation of citrulline into arginine in grafted watermelon and thus an increased abundance of citrulline [76]. Nitrogen is a crucial constituent of chlorophyll, amino acids, proteins, nucleic acids, and hormones [77]. In our study, the increased biosynthesis of amino acids could be attributed to increased nitrogen uptake and nitrogen use efficiency in pumpkin-grafted watermelon, as previously reported [78]. Amino acids are developmentally regulated, and the reduction in a few amino acids could be explained by a higher rate of protein synthesis, which is linked with increased consumption of amino acids, leading to a deficiency of free amino acids in grafted plants [79]. Lipids act as signaling molecules and as barriers between the cell and the outside environment, and excess energy is stored in the form of lipids in plants [80]. Lipids have also been suggested to accumulate under cold stress conditions and to be involved in plant growth [81]. With the exception of 13-HpOTrE(r), all the intermediates of the linolenic acid pathway increased substantially in grafted fruit. Additionally, free lipids such as LysoPC and its derivatives and monoacylglycerol (MAG) and its derivatives increased severalfold in response to grafting. A substantial increase in the content of total fatty acids, total lipids, and total unsaponifiable lipids in the roots and leaves of watermelon and tomato after grafting has been recorded [82]. The major non-enzymatic antioxidant associated with the glutathione metabolic pathway is glutathione (GSH, γ-glutamyl-cysteinyl-glycine). Apart from its role in the storage and transport of reduced sulfur, GSH takes part in the detoxification of reactive oxygen species (ROS), directly or indirectly [83,84]. Glutathione enhances plant tolerance to different abiotic stresses, including salinity, drought, high and low temperature, and toxic metal stress [85][86][87][88]. Apart from glutamate, all the intermediates of this pathway (oxidized glutathione (GSSG), reduced glutathione (GSH), 5-oxoproline, cysteine, and γ-glutamyl-AA) accumulated in higher amounts in grafted watermelon, with cysteine and 5-oxoproline prominent among them.
The above results indicate that pumpkin rootstock might be useful against biotic and abiotic stress. Secondary metabolites such as flavonoids, flavonols, phenolic acids, polyphenols, lignins, and tannins have been implicated in plant defense mechanisms against pathogens, various stresses, and UV radiation, as well as in fruit quality and graft union success [89][90][91][92][93][94][95][96][97]. Grafting showed non-significant differences in phenylpropanoid and flavonoid contents, with only a few compounds up-regulated, while most were recorded at lower levels in grafted plants. One phenylpropanoid, namely kaempferol, was more abundant in the leaves of grafted watermelon scions [98]. Plant vitamins are essential for human health, play a significant role in plant metabolism and redox reactions, and act as cofactors [99]. Previously, only ascorbic acid had been analyzed, and it was observed to increase by 40% in grafted watermelon [37]. In our study, the fruit of grafted watermelon showed higher amounts of all vitamins, including ascorbic acid. According to [45], modification of hormone status and of water and nutrient uptake in grafted vegetables by specific rootstocks may lead to changes in cellular morphology, cell turgor, and cell-wall characteristics which, in turn, affect fruit firmness. In our study, IBA showed a significant increase in the fruit of grafted watermelon, which may have influenced fruit firmness and development. Alkaloids such as nicotine were reduced to one percent when tobacco was grafted onto tomato rootstock [100]. Alkaloids are regarded as bioactive metabolites that are involved in plant-environment interactions, in combating drought, and in the control of human diseases [101]. Betaine and 4beta-hydroxy-11-O-(2′-pyrrolylcarboxy)epilupinine were present at relatively higher levels in grafted watermelon. The rootstock effect on scion characteristics might be due to the transfer of mobile substances, such as hormones or various RNA species, across the graft union, or to the differential absorption ability of rootstocks for nutrients [102]. The grafting-induced changes may be due to the process of grafting itself or to the impact of heterografting onto a different rootstock. However, future studies are needed to elucidate the regulatory mechanisms responsible for the changes in metabolite contents.

Conclusions

The comprehensive metabolite profiling and comparative analysis of metabolites from mature fruit of pumpkin-grafted watermelon (PGW) and self-rooted watermelon (SRW) provide novel information on the metabolomic profile and the changes induced by grafting. The large-scale metabolomics analysis identified 339 metabolites in PGW and SRW, and most of the metabolites were in greater abundance in PGW. These metabolites are involved in major metabolic pathways and multiple biological functions, including plant growth, phytochemicals, and stress-related metabolites. Among the secondary metabolites, hormones, vitamin B5, and alkaloids also increased in response to grafting in PGW. Linolenic acid and its intermediates, which are involved in the synthesis of taste-, aroma-, and health-related compounds, were present in greater amounts in PGW. Overall, the higher accumulation of metabolites in the main classes of metabolic pathways reflects improved plant growth and fruit quality attributes in PGW.
These results confirm the value of a high-throughput untargeted metabolomics approach for obtaining comprehensive information regarding the up- and down-regulation of metabolites in different metabolic pathways as affected by grafting. Furthermore, additional research is needed to understand the functioning and interaction of these pathways, gene expression, and the genetic and epigenetic control of the changes induced in grafted and ungrafted plants.
On the weighted average number of subgroups of ${\mathbb {Z}}_{m}\times {\mathbb {Z}}_{n}$ with $mn\leq x$

Let $\mathbb{Z}_{m}$ be the additive group of residue classes modulo $m$. For any positive integers $m$ and $n$, let $s(m,n)$ and $c(m,n)$ denote the total number of subgroups and cyclic subgroups of the group ${\mathbb{Z}}_{m}\times {\mathbb{Z}}_{n}$, respectively. Define
$$ \widetilde{D}_{s}(x) = \sum_{mn\leq x}s(m,n)\log\frac{x}{mn}, \qquad \widetilde{D}_{c}(x) = \sum_{mn\leq x}c(m,n)\log\frac{x}{mn}. $$
In this paper, we study the asymptotic behaviour of the functions $\widetilde{D}_{s}(x)$ and $\widetilde{D}_{c}(x)$.

Introduction and main result

Let $\mathbb{Z}_m$ be the additive group of residue classes modulo $m$. Let $\mu$, $\tau$ and $\phi$ be the Möbius function, the divisor function and the Euler totient function, respectively. For any positive integers $m$ and $n$, $s(m,n)$ and $c(m,n)$ denote the total number of subgroups and cyclic subgroups of $\mathbb{Z}_m\times \mathbb{Z}_n$, respectively. The properties of the subgroups of the group $\mathbb{Z}_m\times \mathbb{Z}_n$ were studied by Hampejs, Holighaus, Tóth and Wiesmeyr in [1]. We recall that $\gcd(m,n)$ is the greatest common divisor of $m$ and $n$. The authors deduced formulas for $s(m,n)$ and $c(m,n)$ using a simple elementary method. They showed that
$$ s(m,n)=\sum_{d\mid \gcd(m,n)}\phi(d)\,\tau\Big(\frac{m}{d}\Big)\tau\Big(\frac{n}{d}\Big), \qquad c(m,n)=\sum_{d\mid \gcd(m,n)}(\phi*\mu)(d)\,\tau\Big(\frac{m}{d}\Big)\tau\Big(\frac{n}{d}\Big). $$
Here, as usual, the symbol $*$ denotes the Dirichlet convolution of two arithmetical functions $f$ and $g$, defined by $(f*g)(n) = \sum_{d\mid n} f(d)g(n/d)$ for every positive integer $n$.

Suppose $x > 0$ is a real number, and define the counting functions $S^{(2)}(x)$ and $S^{(4)}(x)$, which represent the total number of subgroups and of cyclic subgroups, respectively, having rank two, of the groups $\mathbb{Z}_m \times \mathbb{Z}_n$ with $m, n \leq x$. W. G. Nowak and L. Tóth [5] studied the above functions and proved asymptotic formulas for them in which the $A_{j,r}$ ($1 \leq j \leq 4$, $0 \leq r \leq 3$) are computable constants. Moreover, they showed that the double Dirichlet series of the functions $s(m,n)$ and $c(m,n)$ can be represented by the Riemann zeta function. Later, the above error term was improved by Tóth and Zhai [8]. In subsequent work, Sui and Liu obtained two asymptotic formulas for
$$ D_{s}(x)=\sum_{mn\leq x}s(m,n), \qquad D_{c}(x)=\sum_{mn\leq x}c(m,n) $$
by using the method of exponential sums. They proved that
$$ D_{s}(x)=xP_{4}(\log x)+\Delta_{s}(x), \qquad D_{c}(x)=xR_{4}(\log x)+\Delta_{c}(x), $$
where $P_{4}(u)$ and $R_{4}(u)$ are polynomials in $u$ of degree 4 with leading coefficients $1/(8\pi^{2})$ and $3/(4\pi^{4})$, respectively, and $\Delta_{s}(x)$, $\Delta_{c}(x)$ denote the corresponding error terms. Sui and Liu also studied the upper bound of the mean-square estimate of $\Delta_{s}(x)$ and $\Delta_{c}(x)$ and guessed that $\Delta_{s}(x), \Delta_{c}(x)\ll x^{41/72+\varepsilon}$ hold on average. Moreover, they conjectured that $\Delta_{s}(x), \Delta_{c}(x)\ll x^{1/2+\varepsilon}$.

In this paper, we study the weighted average of $s(m,n)$ and $c(m,n)$ with a logarithmic weight. Let
$$ \widetilde{D}_{s}(x) = \sum_{mn\leq x}s(m,n)\log\frac{x}{mn}, \qquad \widetilde{D}_{c}(x) = \sum_{mn\leq x}c(m,n)\log\frac{x}{mn}; $$
then we have the following results.

Theorem 1. Let the notation be as above. For any positive real number $x > 2$, asymptotic formulas hold for $\widetilde{D}_{s}(x)$ and $\widetilde{D}_{c}(x)$ in which $P_{4}(u)$ and $R_{4}(u)$ are polynomials in $u$ of degree 4 with computable coefficients.

Auxiliary results

In order to prove our main result, we first show some necessary lemmas.

Lemma 1. For $\Re u > 1$ and $\Re v > 1$,
$$ \sum_{m,n=1}^{\infty}\frac{s(m,n)}{m^{u}n^{v}}=\frac{\zeta^{2}(u)\zeta^{2}(v)\zeta(u+v-1)}{\zeta(u+v)} \qquad \text{and} \qquad \sum_{m,n=1}^{\infty}\frac{c(m,n)}{m^{u}n^{v}}=\frac{\zeta^{2}(u)\zeta^{2}(v)\zeta(u+v-1)}{\zeta^{2}(u+v)}. $$

Proof. The proof can be found in [5, Theorem 1].

Lemma 2. We have three standard estimates for the Riemann zeta function.

Proof. The first estimate follows immediately from [7, Theorem II.3.8]. The second and third estimates can be found in [6].

Lemma 3. We have the following estimate.

Proof. The proof of this result can be deduced from [3, Proposition 2] with $k = 1$.

Next we estimate $I_{3}(x,T)$ using Lemmas 2 and 3. Combining the above results with (6), we get the desired conclusion.
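The two divisor-sum formulas quoted above can be checked numerically. The following is a minimal sketch (not from the paper) comparing them against a brute-force enumeration; it assumes the reconstruction of the formulas given here and the standard fact that every subgroup of $\mathbb{Z}_m\times\mathbb{Z}_n$ is generated by at most two elements.

```python
from math import gcd

def phi(n):
    # Euler's totient function via trial division
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tau(n):
    # number of divisors of n
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def mu(n):
    # Moebius function (naive factorization)
    m, p, k = n, 2, 0
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            k += 1
        p += 1
    if m > 1:
        k += 1
    return -1 if k % 2 else 1

def phi_star_mu(d):
    # Dirichlet convolution (phi * mu)(d)
    return sum(phi(e) * mu(d // e) for e in range(1, d + 1) if d % e == 0)

def by_formula(m, n):
    g = gcd(m, n)
    ds = [d for d in range(1, g + 1) if g % d == 0]
    s = sum(phi(d) * tau(m // d) * tau(n // d) for d in ds)
    c = sum(phi_star_mu(d) * tau(m // d) * tau(n // d) for d in ds)
    return s, c

def by_brute_force(m, n):
    # every subgroup of Z_m x Z_n is generated by at most two elements
    G = [(i, j) for i in range(m) for j in range(n)]
    def closure(gens):
        S, stack = {(0, 0)}, [(0, 0)]
        while stack:
            x = stack.pop()
            for g in gens:
                y = ((x[0] + g[0]) % m, (x[1] + g[1]) % n)
                if y not in S:
                    S.add(y)
                    stack.append(y)
        return frozenset(S)
    subgroups = {closure((a, b)) for a in G for b in G}
    cyclic = {closure((a,)) for a in G}
    return len(subgroups), len(cyclic)

for m, n in [(2, 2), (2, 4), (6, 4), (9, 6)]:
    assert by_formula(m, n) == by_brute_force(m, n)
print("divisor-sum formulas agree with brute force")
```

For example, $\mathbb{Z}_2\times\mathbb{Z}_2$ has 5 subgroups, 4 of them cyclic, matching $s(2,2)=5$ and $c(2,2)=4$.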
The Potential Role of Glial Fibrillary Acidic Protein in Evaluation of Organophosphorus-Induced Neurotoxicity: A Prospective Clinical Study

Abstract

Background: Organophosphorus compound (OPC) poisoning leads to several neurotoxic disorders in humans. Glial fibrillary acidic protein (GFAP) is released in response to neuronal cell injury and has been used as a sensitive and specific indicator of several neurotoxic conditions; however, there are no human studies focused on the diagnostic and prognostic value of GFAP in OPC toxicity. Thus, there is a need to study its role in OPC poisoning.

Objectives: This study aimed to assess the usefulness of GFAP as an early predictor of OPC-related neurotoxic disorders in both acute poisoning and chronic exposure, and to correlate levels of GFAP with the severity of acute OPC poisoning.

Methods: This is a prospective clinical study that was conducted at the Poison Control Center, Ain Shams University Hospitals. The study included four groups: a control group (23 healthy volunteers), group II, acute moderate OPC patients (19 patients), group III, acute severe OPC patients (25 patients), and group IV, a chronic group (41 farmers). All participants were subjected to measurement of GFAP, serum acetylcholine (ACh), serum pseudocholinesterase (PChE), serum glucose, potassium, serum lactate, lactate dehydrogenase (LDH), and serum creatine phosphokinase (CPK).

Results: Serum GFAP and ACh were significantly high in all patient groups compared to the control group, but no significant difference was found between the acute moderate and acute severe groups. Serum PChE also showed no significant difference between patients of the acute moderate and severe groups. Serum glucose, lactate, LDH and CPK were significantly higher in the acute severe group when compared to the acute moderate group.

Conclusion: Glial fibrillary acidic protein, a biomarker of neurotoxicity, can be used in patients with acute and chronic OPC poisoning as an early predictor of OP-induced brain cell injury. Serum glucose, lactate, LDH and CPK could be used as simple tools in the prediction of severity in acute OP poisoning.

Introduction

Organophosphorus compounds represent a large and important class of environmental chemicals. They exert their toxicity through interfering with ACh neurotransmission, resulting in accumulation of acetylcholine in cholinergic synapses with a subsequent wide range of neurotoxic disorders (Nicolopoulou-Stamati et al., 2016; Jokanović, 2017).

According to the World Health Organization, 3 million cases of pesticide exposure are estimated annually; the majority of these exposures are caused by OPC and result in more than 250,000 fatalities (Tripathi, 2014). In 2019, at the Poison Control Center, Ain Shams University Hospitals (PCC-ASUH), deaths due to OPC poisoning represented 23.8% of overall deaths that year (Abdelhamid, 2021).

Organophosphate-induced brain damage is progressive damage to the brain due to excitotoxicity of cholinergic neurons resulting from OP-induced irreversible inhibition of acetylcholinesterase (AChE). This secondary neuronal damage occurs in cholinergic regions of the brain that contain dense accumulations of cholinergic neurons (Chen, 2012).
Biomarker levels can predict the degree of neurotoxicity caused by OPC and identify patients who are most likely to develop long-term sequelae. These patients may benefit by being targeted for rehabilitation therapy. Elevated biomarker levels may also identify patients who are at higher risk for secondary deterioration and who would benefit from repeated imaging, monitoring, and increased surveillance (Kochanek et al., 2008).

Glial fibrillary acidic protein (GFAP) has been used as a sensitive and specific indicator of several neurotoxic conditions, such as stroke and traumatic brain injury (TBI), and due to its commercial availability, it has attracted growing attention in clinical research (Lei et al., 2015; Bernal and Arranz, 2018).

Subjects: This is a prospective clinical study that was done between the start of January 2019 and the end of February 2020 at the Poison Control Center (PCC), Ain Shams University Hospitals, on 108 subjects. Forty-four adult patients of both sexes with acute OP poisoning were included. Patients with co-ingestion, those with head trauma, stroke or any neurological disease, and patients who refused to be enrolled in the study were excluded. The diagnosis of OPC poisoning was established through a history of exposure to an OP agent and clinical examination, and confirmed by a low pseudocholinesterase level. Patients were graded using the modified Dreisbach's classification of severity (Table 1).

The studied subjects were divided into the following groups: Group I (control group): 23 healthy adult volunteers not exposed to OPC. Group II (acute moderate OPC patient group): 19 patients moderately intoxicated by OP insecticide. Group III (acute severe OPC patient group): 25 patients severely intoxicated by OP insecticide. Group IV (chronic OPC exposure group): 41 farmers chronically exposed to OPC during their work in the fields for not less than 5 years; they were selected from the outpatient laboratory of the PCC. The classification of the studied subjects is summarized in Figure (1).

Methods

A detailed history was taken, and then clinical examination was done to confirm the diagnosis and to grade the severity of OP poisoning in acutely intoxicated patients. Samples of venous blood were collected from each subject on admission for analysis of GFAP, serum acetylcholine (ACh), serum pseudocholinesterase (PChE), serum glucose, potassium, serum lactate, lactate dehydrogenase (LDH), and serum creatine phosphokinase (CPK).

An observation sheet was designed for acute OP-intoxicated patients; it included demographic data (age, sex, and residence), intoxication data (route, mode of poisoning and delay time), and clinical data, in addition to investigational data and outcome. Patients were observed until recovery or death.

Ethical considerations: A written informed consent was taken from subjects or their guardians for participation in this study. Permission was obtained from the institutional ethical committee and the director of PCC-ASU hospitals. Confidentiality of records was maintained through coding numbers.

Statistical analysis: In the present study, all data were statistically analyzed by SPSS software version 21.0. Results were expressed as mean ± standard deviation (SD). Statistical analysis was performed using one-way ANOVA, Spearman correlation, and the chi-square test. Values were considered significant at P < 0.05 and highly significant at P < 0.01.
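As a sketch of the statistical pipeline just described (one-way ANOVA across the four groups, plus Spearman correlation between two markers), the following uses synthetic placeholder values rather than the study's actual measurements, which live in Tables 5-7.

```python
import numpy as np
from scipy.stats import f_oneway, spearmanr

rng = np.random.default_rng(0)
# Placeholder serum GFAP values for the four groups (group sizes as reported);
# means and spreads here are invented for illustration only.
control  = rng.normal(1.0, 0.2, 23)
moderate = rng.normal(2.5, 0.5, 19)
severe   = rng.normal(3.0, 0.6, 25)
chronic  = rng.normal(2.0, 0.4, 41)

f_stat, p = f_oneway(control, moderate, severe, chronic)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.3g}")  # significant if p < 0.05

# Spearman correlation between two markers measured on the same patients,
# e.g., GFAP vs. CPK (cf. the correlation figures at the end of the paper)
gfap = np.concatenate([moderate, severe])
cpk = 150 + 80 * gfap + rng.normal(0, 20, gfap.size)    # synthetic association
rho, p_rho = spearmanr(gfap, cpk)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3g}")
```

Spearman's rank correlation is the appropriate choice here because biomarker levels are typically non-normally distributed and the relationship need only be monotonic.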
Results

In this study, the mean age of patients with acute OP poisoning was 31.68 ± 14.40 years. Among the 44 patients, 23 were males and 21 were females. The majority of patients (50%) were from Cairo, followed by Qalioubya (22.7%) (Table 2).

The oral route was the commonest (97.7%), and only one patient inhaled OPC; the mode of poisoning was mainly suicidal (95.5%). The mean delay time was 5.28 ± 5.15 hours. There was no significant relation between route or mode and severity of OP poisoning, but delay time had a significant relation with severity (Table 3).

In the chronic exposure group, the 41 farmers were 28-40 years of age, and all were males. Most of them came from the Dakahlia Agricultural Development Company, and exposure was mainly through inhalation.

The most frequent clinical manifestations among acute OPP patients were small pupils (in 79.5%), vomiting (70.5%), and chest crepitations (54.5%); the least frequent manifestation was abdominal pain (36.4%). Coma had a highly significant association with severity of acute OPP, and the presence of fasciculations and chest crepitations also showed significant differences between the acute moderate and acute severe OPP groups (Table 4).

Serum GFAP and ACh exhibited a significant increase in all patient groups compared to the control group. On the other hand, serum pseudocholinesterase (PChE) levels were significantly lower among patients of the acute severe and acute moderate groups compared to both the control and chronic groups. However, no significant difference was detected between the acute moderate and acute severe groups as regards serum GFAP, ACh and pseudocholinesterase levels, as shown in Table 5.

Table 6 shows statistically significantly lower K+ levels among patients of the acute moderate and acute severe groups compared to both the control and chronic groups, but no significant difference was noted between the acute moderate and acute severe groups.

Furthermore, among the markers of systemic functions, serum glucose, lactate, LDH and CPK exhibited significant increases in the acute OP groups when compared to both the control and chronic groups. Highly significant differences were also found between the acute moderate and acute severe groups, as shown in Table 6.

Regarding mortality, the rate was high: out of 44 cases in the acute poisoning groups, 18 patients died (40.9%), as shown in Table 7.

Discussion

Organophosphorus compounds, the most widely used insecticides, may cause serious poisoning and even death, especially in developing countries. Their effects are primarily neurotoxic: AChE inhibition is the main mechanism of toxicity, resulting in accumulation of ACh, which causes overstimulation of muscarinic and nicotinic receptors with subsequent disruption of nerve impulse transmission in both the central and peripheral nervous systems (Joshi et al., 2005).

In the present study, the mean age of the patients was 31.68 ± 14.40 years. A similar finding was reported by Ahmed et al. (2014) and Banday et al. (2015). In this study, males were affected more than females, in agreement with many other studies (Jayawardane et al., 2012; Ahmed et al., 2014; Sumathi et al., 2014).

Half of the patients in the current study were from Cairo, followed by Qalioubya (22.7%). This could be attributed to proximity to PCC-ASU and the presence of toxicologists in other governorates.
In this study, the majority of patients (97.7%) were exposed to OPC through ingestion, while only one patient was exposed through inhalation. This was similar to the results of Bilal et al. (2014), Çolak et al. (2014), and Priyadarsini et al. (2015), and could be due to the ease of ingestion of the poison, especially in the liquid form of OPC (Coskun et al., 2015).

The majority of patients in the current study (95.5%) were intoxicated intentionally. These results are similar to those of Ali et al. (2012), Hassan and Madboly (2013), and Banday et al. (2015), and could be due to the easy availability, low cost, and lack of rules regarding the usage and sale of these compounds.

In the current study, there was no significant relation between route or mode of intoxication and the severity of cases. In contrast, Amir et al. (2020) stated that mortality in OP toxicity depends upon the route of poisoning. This could be attributed to the fact that almost all acute patients in this study had the same route and mode of poisoning.

In this study, there was a significant relation between the delay time and severity of OP poisoning. Parate et al. (2016) also found a significant relation between the delay time and the outcome.

In the present study, the most frequent clinical manifestation was small pupils in 79.5% (pinpoint pupils in 56.8% and constricted pupils in 22.7%), followed by vomiting in 70.5% and chest crepitations in 54.5%.

This was in agreement with Banday et al. (2015), who found that miosis was the most common presenting manifestation. Also, in studies by Banerjee et al. (2012) and Rehiman et al. (2008), vomiting was the most common symptom and miosis was the most common sign in acute organophosphorus-intoxicated patients.

There was a significant relation between severity and the clinical features of coma, fasciculations and chest crepitations. This was in accordance with Amir et al. (2020), who reported that development of fasciculations or impaired consciousness in organophosphorus-poisoned patients carried a poor prognosis.

Previous studies highlight the neurotoxic consequences of acute pesticide exposure, which is associated with a wide range of symptoms as well as abnormalities in nerve function and deficits in neurobehavioral performance, and may be accompanied by an increased risk of neurodegenerative diseases (Kamel and Hoppin, 2004).

The present study showed that mean serum GFAP was significantly higher in all patient groups compared to the control group. Similarly, Lim et al. (2011) found a significant increase in GFAP expression in the hippocampus of albino rats exposed to chronic chlorpyrifos administration, even without inhibition of serum cholinesterase.

Various clinical investigations in other studies established the GFAP level as an informative biomarker of brain injury (Borg et al., 2012; Akdemir et al., 2014; Di Battista et al., 2015).

GFAP is a cytoplasmic filament protein released in response to neuronal cell injury; it is expressed by several cell types in the central nervous system (CNS), such as astrocytes and ependymal cells (Akdemir et al., 2014). In the CNS, astrocytes become reactive due to trauma, infection, and chemical insults. Since GFAP is a marker protein of astrocytes, it is upregulated in many neurological diseases, such as ischemic stroke, neuroinflammation, TBI, neurodegeneration and other diseases of the CNS (Li et al., 2020). Reactive astrogliosis, a process by which astrocytes respond to all CNS insults, has emerged as a pathological hallmark of CNS lesions (Sofroniew, 2009). This coincides with the findings of Liu et al.
(2012), who discovered that astroglial activation, characterized by elevated GFAP protein within 24 hours and sustained for up to 7 days, preceded OP-induced brain injury.

An experimental study by Badawy et al. (2017) demonstrated the histopathological effects of OPC on the brains of albino rats. They found longer and more numerous astrocytic processes in the brain tissue of OP-treated albino rats, with high levels of GFAP compared to controls. Fodale et al. (2006) reported that the accumulation of ACh in acute OP poisoning overstimulates muscarinic and nicotinic receptors, leading to increased oxidative stress and free radical generation. This process exhausts nerve cells and may even cause their degeneration or increase cell membrane permeability, damaging the blood-brain barrier (BBB). Leakage of GFAP from astrocytes into the interstitial fluid and then into the blood may be helpful in detecting the degree of degeneration in the CNS.

The present study showed that serum acetylcholine was significantly elevated in all patient groups compared to controls. Once acetylcholinesterase (AChE) has been inactivated by OPC, ACh accumulates throughout the nervous system, resulting in overstimulation of muscarinic and nicotinic receptors (Hundekari et al., 2012; Prabodh et al., 2012; Cupic Miladinovic et al., 2018).

The present study showed significantly lower serum pseudocholinesterase (PChE) levels among patients of the acute OPP groups compared to the control and chronic groups, but no significant difference was detected between the acute moderate and acute severe groups as regards pseudocholinesterase level.

Abd Alkareem et al. (2019) agreed with the current study as regards the lack of a significant correlation with the grade of severity. Similarly, Singh (2004) and Cherian et al. (2005) found that the pseudocholinesterase level is a marker of OP exposure but not an indicator of the severity of OP toxicity. In contrast, Muley et al. (2014) and Tripathi (2014) concluded that a low PChE level was associated with both higher mortality and a higher degree of severity.

The present study showed that serum glucose was significantly higher in the acute OP groups compared to controls, and significantly higher in the acute severe group compared to the acute moderate group. Panda et al. (2014) agreed with the current study regarding the correlation between the severity of OPC poisoning and high glucose levels; at the time of admission, a patient's glycemic status may help determine the severity of OPC poisoning. Also, Rao and Raju (2016) found that glucose levels greater than 200 mg/dl were reliable parameters for predicting mortality and the need for ventilator support, and that hyperglycemia on admission correlated with the severity of the cases.

Hyperglycemia induced by insecticide poisoning could be explained by elevation of counter-regulatory hormones (catecholamines and cortisol), which reduces sensitivity to insulin and results in elevated blood sugar, further exacerbated by excessive adrenergic stimulation of glycogenolysis (Amanvermez et al., 2010).

In the current study, patients in the acute severe and acute moderate groups had significantly lower K+ levels than those in the control and chronic groups; however, no significant difference was found between the acute moderate and acute severe groups.

Hypokalemia and paralysis are potentially reversible medical emergencies. In addition, hypokalemia may aggravate the muscular weakness caused by inhibition of AChE in OP poisoning (Tripathy et al., 2018).
These results were in good agreement with Salameh et al. (2008), who found a decrease in serum potassium level in acute OPP. Hypokalemia could be attributed to sympathetic overactivity in OPP or could occur as a result of pancreatitis. Also, severe vomiting and diarrhea can lead to hypokalemia.

In the current study, serum lactate was significantly higher in both the acute moderate and acute severe groups when compared to the control group as well as the chronic group. In addition, a highly significant difference was noted between the acute moderate and acute severe groups.

This was in agreement with Maignan et al. (2014), who stated that some indicators, such as toxicological history and serum lactate level, proved useful to distinguish between low- and high-acuity patients with deliberate poisoning, in order to avoid excessive morbidity.

Similarly, blood lactate levels were considered a high-risk factor affecting the prognosis of acute OPP (Tang et al., 2016; Wu, Xie, Cheng, & Guan, 2016). Arafa et al. (2017) also correlated lactate levels with the severity of acute OPC poisoning, finding a strong positive correlation between serum lactate and severity of poisoning.

In the present study, mean serum LDH was significantly higher in the acute moderate group as well as the acute severe group when compared to the chronic group and controls. There was also a highly significant difference between the acute moderate and acute severe groups.

This was similar to Panda et al. (2014), Coskun et al. (2015) and Gündüz et al. (2015), who found that LDH levels positively correlated with the severity of poisoning and can be used as a predictor of severity and mortality.

The LDH elevation in cases of cholinesterase inhibitor poisoning may be attributed to muscle injury or to insecticide-induced oxidative tissue injury that leads to functional impairment in cardiac and skeletal muscle, liver, kidney and red blood cells (Panda et al., 2014; Coskun et al., 2015).

Regarding serum CPK in the different groups, a significant difference was observed in the chronic group when compared to the control group, and highly significant differences were observed in the acute moderate and acute severe groups when compared to the control group. The acute moderate and acute severe groups also displayed highly significant differences when compared with the chronic group, and mean serum CPK in the acute severe group was significantly higher than in the acute moderate group.

These results are in good agreement with Arafa et al. (2017), who discovered a positive correlation between initial CPK levels and the severity of OP poisoning, with CPK levels rising in tandem with severity. Similarly, a study by Bhattacharyya et al. (2011) on acute OPP patients found that serum CPK increased significantly with the severity grade of OPP compared to a control group. On the other hand, Khan et al. (2016) and Gündüz et al. (2015) found no significant correlation between CPK levels and patients' severity or mortality.

Regarding outcome, the mortality rate in this study was 40.9%. This high rate could be because patients with severe OP poisoning constituted the largest of the acute intoxication groups. Mortality estimates in OP toxicity range from 5% to 35%, but severe OP toxicity has a mortality rate of about 50%, even in young cases (Amir et al., 2020).
Conclusion

The study clarified that glial fibrillary acidic protein (GFAP), a biomarker of neurotoxicity, was significantly elevated in both acute OP poisoning and chronic OP exposure, and can serve as a predictor of OPC-related neurotoxic disorders. Glucose, lactate, LDH and CPK are easily available, simple tools that can be used as markers of severity in acute OP-intoxicated patients.

Received in original form: 14 September 2022. Accepted in final form: 31 January 2023.

Figure (2): Strong positive correlation between serum GFAP and acetylcholine in all patient groups.
Figure (3): Strong positive correlation between serum GFAP and CPK in all patient groups.
Figure (5): Strong negative correlation between serum PChE and GFAP in all patient groups.
Figure (7): Strong negative correlation between serum PChE and LDH in all patient groups.
Figure (8): Strong negative correlation between serum PChE and CPK in all patient groups.
Force-stabilizing synergies can be retained by coordinating sensory-blocked and sensory-intact digits

The present study examined the effects of selective digital deafferentation on multi-finger synergies as a function of total force requirement and the number of digits involved in isometric pressing. 12 healthy adults participated in maximal and sub-maximal isometric pressing tasks with or without digital anesthesia applied to selected digits of the right hand. Our results indicate that the selective anesthesia paradigm induces changes in the performance of both anesthetized (local) and non-anesthetized (non-local) digits, including: (1) decreased maximal force abilities in both local and non-local digits; (2) reduced force share during multi-finger tasks from non-local but not local digits; (3) decreased force error-making; and (4) marginally increased motor synergies. These results reinforce the contribution of somatosensory feedback to the production of maximal voluntary contraction force and to motor performance, and indicate that somatosensation may play a role in optimizing secondary goals during isometric force production rather than ensuring task performance.

Introduction

The human hand is a redundant motor system [1] because it has more degrees of freedom than necessary to perform most manual actions [2][3][4][5]. For example, while drinking water from a glass, the lifting force distribution among individual digits is undefined, since there is an infinite set of lifting forces which could be applied by individual digits to equal the weight of the glass. Coordinating the individual fingers of the hand could pose a control problem in terms of choosing some specific combination of finger forces to satisfy task constraints (like lifting the glass); one proposed solution is the notion of motor synergies [6][7]. In this parlance, motor synergies are defined as neural organizations representing co-variation of elements which can be individually controlled ("elemental variables") in order to stabilize the goal behavior ("performance variable") or some other behavior(s) considered to be important by the central nervous system in a specific motor task. In the multi-finger force production scenario, for example, the elemental variables are the individual forces produced by each digit, while the performance variable is the resultant force produced by all digits. Multi-finger synergies have been investigated within the framework of the uncontrolled manifold (UCM) hypothesis [8][9] in a number of studies involving a variety of manual tasks under both healthy and pathological conditions [10]. In these studies, synergies are typically quantified based on the structure of variability in the space of elemental variables, which is decomposed into two subspaces: (1) the UCM and (2) its complementary (ORT) subspace. The proportion of variability within the UCM subspace has been called 'good' since this variance does not affect the performance of the task. Correspondingly, the component of variability in the ORT subspace is sometimes called 'bad' because variance within this subspace introduces changes in the performance variable [11][12]. Synergies can thus be assessed through a relative comparison between the two variance components, 'good' versus 'bad', in the space of elemental variables [11][13][14]. Despite extensive study of synergies within the UCM framework, much less is known about the extent to which sensory feedback contributes to this implementation of multi-element coordination.

Multi-finger synergies can be changed by limiting access to different sensory modalities. For example, coordination across digits is influenced by varied visual feedback conditions [3,15] and becomes weaker when the palmar area of the hand is vibrated (presumably due to changes in proprioceptive acuity as a result of this stimulation) [16]. In recent studies, a nerve block procedure at the digit or wrist level has been used to investigate synergies in a deafferented hand model [14,17]. However, no consensus has been reached regarding the extent to which somatosensory information affects multi-finger synergies during accurate hand motor control. Motor coordination in a redundant system likely results from both feed-forward [18] and feedback processes [19][20]. As such, determining how somatosensory information contributes to multi-element motor coordination requires further study.

A potential confound in using a deafferented hand model to study how somatosensory information affects inter-finger coordination lies in the notion of signal-dependent motor noise [21][22][23][24][25][26]. A number of studies have shown increased variability in force production as the magnitude of force production increases; crucially, however, anesthesia decreases force production ability. Deafferentation induced by local anesthesia at the digit [27][28][29] or wrist level [30][31] has been reported to result in altered force sharing patterns [14,32], weakened digital force covariation [14,17], and disturbed digital force synchronization [33]. However, because digital anesthesia reduces maximal force ability [14,30,32,[34][35][36], it is important to determine whether the aforementioned changes in the structure of motor variability result directly from the absence of somatosensory information, or simply from decreased force production during digital anesthesia. In the current study, we address these issues by using a previously developed deafferented hand model [14,32] (applying digital anesthesia to selected digits of the test hand) and combining this with tasks evaluating force production at varied force levels. We asked subjects to perform a series of isometric force production tasks and investigated (1) the effect of deafferentation of selective digits on force-stabilizing multi-finger synergies, and (2) the effect of force magnitude on synergy strength under digital anesthesia and with intact sensation. We hypothesized that selective digital deafferentation would lower force-stabilizing synergies, and that this difference would decrease with lower forces and fewer fingers explicitly involved in the task.

Subjects

Twelve healthy adults participated in the current study. All subjects were right-handed, with an Edinburgh Handedness Inventory score of 100. No subject reported any history of neurological, musculoskeletal, vascular, or metabolic disorders and/or upper limb impairments, and none reported allergies to the anesthetic agents or preparation materials. Subjects were unaware of the expected results of the research and gave written informed consent in accordance with the Declaration of Helsinki. The current research protocol was approved by the Institutional Review Board at the City University of New York and Northwell Health.

Apparatus

A customized isometric-force testing system was used in the current study. All subjects performed isometric pressing force production with four fingers of the right hand during the experiment.
Multi-finger synergies can be changed by limiting access to different sensory modalities. For example, coordination across digits is influenced by varied visual feedback conditions [3,15] or becomes weaker when the palmar area of the hand is vibrated (presumably due to changes in proprioceptive acuity as a result of this stimulation) [16]. In recent studies, a nerve block procedure at the digit or wrist levels has been used to investigate synergies underlying a deafferented hand model [14,17]. However, no consensus has been found regarding the extent to which the somatosensory information affects multi-finger synergies during accurate hand motor control. Motor coordination in a redundant system likely results from both feed-forward [18] and feedback processes [19][20]. As such, determining how somatosensory information contributes to the multi-element motor coordination requires further study. A potential confound in using a deafferented hand model to study how somatosensory information affects inter-finger coordination lies in the notion of signal-dependent motor noise [21][22][23][24][25][26]. A number of studies have shown increased variability in force production as the magnitude of force production increases; crucially, however, anesthesia decreases force production ability. Deafferentation induced by local anesthesia at digit [27][28][29] or wrist levels [30][31] has been reported to result in altered force sharing patterns [14,32], weakened digital force covariation [14,17], and disturbed digital force synchronization [33]. However, because digital anesthesia reduces maximal force ability [14,30,32,[34][35][36], it is important to determine whether the aforementioned changes in structure of motor variability is resulting directly from the absence of somatosensory information, or simply from decreased force production during digital anesthesia. In the current study, we address these issues by using a previously developed deafferented hand model [14,32]-using digital anesthesia on selective digits of the test hand-and combining this with tasks evaluating force production at varied force levels. We asked subjects to perform a series of isometric force production tasks and investigated (1) the effect of deafferentation of selective digits on the force-stabilizing multi-finger synergies, and (2) the effect of force magnitude levels on synergy strength under digital anesthesia and with intact sensation. We hypothesized that selective digital deafferentation would lower force-stabilizing synergies, and that this difference will decrease with lower forces and fewer fingers explicitly involved in the task. current study. All subjects were right-handed and given an Edinburgh Handedness Inventory score of 100. No subject reported any history of neurological, musculoskeletal, vascular, metabolic disorders, and/or upper limb impairments, and none reported allergies to the anesthetic agents or preparation materials. Subjects were unaware of the research-expected results and gave written informed consent in accordance with the Declaration of Helsinki. The current research protocol was approved by the Institutional Review Board at the City University of New York and Northwell Health. Apparatus A customized isometric-force testing system was used in the current study. All subjects performed pressing isometric force production by four fingers of the right hand during the experiment. 
We used four Nano-17 force/torque (F/T) transducers (ATI Industrial Automation Inc, Apex, NC) to measure the individual force produced by each finger: (1) index (I); (2) middle (M); (3) ring (R); and (4) little (L). Sensors were covered with 100-grit sandpaper to prevent finger slippage. An acrylic glass plate with four slots (2.5 cm center to center) was fixed on the table to provide a mounting base for the sensors; each sensor fit into one slot. Before the experiment, all four sensors were moved distally/proximally within the slots to accommodate each subject's individual hand shape and finger lengths. Force data were sampled at 1000 Hz and digitized using a 16-bit analog-to-digital board (PCI-6225; National Instruments, Austin, TX). A customized program written in the National Instruments LabVIEW computing environment logged data for offline processing and displayed real-time feedback to the subjects.

Experimental procedures

Subjects were seated in front of the customized experimental setup described above, facing a 24" computer screen (Fig 1). Each subject rested his/her right forearm horizontally, palm down, in a U-shaped polyethylene tube padded with sponge for comfort. During the experiment, the right forearm was immobilized by two hook-and-loop straps inside the tube to maintain 45° of elbow and shoulder flexion. Before each experimental trial, subjects were instructed to rest their finger pads on the corresponding F/T sensors and their palms on a shape-customized clay block in order to maintain 30° of flexion at the metacarpophalangeal joints and less than 20° of flexion at the interphalangeal joints. Each subject performed three isometric force production tasks: two ancillary tasks, a maximal voluntary contraction (MVC) task and an enslaving task, and one primary task, the synergy task (Fig 1). The MVC task was used to evaluate the fingers' maximal force abilities via maximal voluntary contraction by each of the four individual fingers, i.e., index only (I), middle only (M), ring only (R), and little only (L), as well as by all four fingers together (IMRL). During each MVC trial, subjects were encouraged to press as hard as possible with the designated finger or finger combination on the corresponding sensor(s) within a six-second time window after a verbal 'go' signal. The subject's total force production was displayed online as a time-course yellow-cursor trace on the computer screen over each trial. Two trials were performed for each digit condition, and the trial with the larger maximal force was analyzed. The maximal forces measured in the individual-finger MVC trials were used to set the target forces in the enslaving task, while the four-finger MVC forces were used to set each subject's target forces in the synergy task. Experimental conditions were presented in a pseudo-randomized order across subjects. Both the enslaving and synergy tasks involved following a target force-time template displayed on a computer monitor. The enslaving task was used to quantify enslaving, a phenomenon in which non-instructed fingers of the same hand produce unintended force during force production by instructed fingers. The enslaving matrix [5] was constructed from the enslaving task and used in a later analysis to quantify motor synergy (described below) from the synergy task.
For the enslaving task we adopted frequently used templates involving controlled and relatively low forces for individual-finger actions. During this task, subjects were instructed to press with one finger (I, M, R, or L) following a time-force template line displayed on the computer monitor. The template line had three straight segments, scaled to the individual finger maximal force measured in the MVC task: a 1-s horizontal segment at 0% MVC, followed by a 4-s oblique ramp from 0 to 10% MVC, ending with a 1-s horizontal segment at 10% MVC. Each instructed digit performed one test trial after three practice trials. All non-instructed fingers were required to maintain contact with the corresponding sensors during the task, although subjects were told not to pay attention to any force they might exert. The ongoing total force produced by all fingers was also displayed as a cursor on the screen to provide instant feedback.

The primary task, the synergy task, was used to investigate multi-finger motor synergy within the uncontrolled manifold (UCM) framework [6]. Similar to the enslaving task, the template line in the synergy task was composed of five horizontal segments scaled to the subject's four-finger maximal force measured in the MVC task, starting with 0% MVC for 1 s followed by 3 s at each of 2.5%, 5%, 7.5%, and 10% MVC. (Fig 1: experimental setup and tasks; the task time templates and the subject's instantaneous force production were displayed on an LED screen during each trial. There were three isometric force production tasks, two ancillary tasks, the MVC and enslaving tasks, and one primary task, the synergy task; subjects used either an individual finger or a finger combination to perform them.) Based on the fingers involved, two conditions were presented in a pseudo-random order across subjects: one condition with all four fingers tracing the target line together (IMRL), and the other adding I, M, R, and L with each force-level increase in sequence (I+M+R+L), i.e., 2.5% MVC by I, 5% MVC by I and M, 7.5% MVC by I, M, and R, and 10% MVC by all four fingers. Both conditions were designed as typical isometric pressing force production tasks for evaluating multi-digit synergies [13]. The IMRL condition was used to determine force-stabilizing synergies among all sensory-blocked and sensory-intact digits as a function of target force, whereas the I+M+R+L condition further investigated changes in multi-finger synergies attributable to digital involvement. Subjects performed 25 trials after five practice runs. To prevent fatigue, rest intervals of at least 10 s were given between trials and of 5 min between conditions and tasks. In the I+M+R+L condition, all non-instructed fingers were required to maintain contact with the corresponding sensors; however, if a subject failed to follow the instruction and produced identifiable force (> 0.5 N) with any non-instructed finger, that trial was discarded and redone immediately. The forces produced by all four fingers, including non-instructed fingers, were reflected in the visual feedback on the computer monitor.
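To make the two template shapes concrete, the following sketch generates them from a subject's MVC values as piecewise segments sampled at the 1000 Hz acquisition rate. This is a minimal illustration based only on the segment timings and percentages described above; the function and variable names are ours, not those of the study's LabVIEW software.

```python
import numpy as np

def enslaving_template(mvc, fs=1000):
    """Ramp template for the enslaving task: 1 s at 0% MVC,
    a 4 s ramp from 0 to 10% MVC, then 1 s at 10% MVC."""
    hold0 = np.zeros(1 * fs)
    ramp = np.linspace(0.0, 0.10 * mvc, 4 * fs)
    hold1 = np.full(1 * fs, 0.10 * mvc)
    return np.concatenate([hold0, ramp, hold1])

def synergy_template(mvc4, fs=1000):
    """Staircase template for the synergy task: 1 s at 0% MVC,
    then 3 s at each of 2.5%, 5%, 7.5% and 10% of four-finger MVC."""
    steps = [np.zeros(1 * fs)]
    steps += [np.full(3 * fs, lvl * mvc4) for lvl in (0.025, 0.05, 0.075, 0.10)]
    return np.concatenate(steps)

# Hypothetical example: a 20 N index-finger MVC and a 100 N four-finger MVC.
target_ensl = enslaving_template(20.0)   # 6 s of samples at 1000 Hz
target_syn = synergy_template(100.0)     # 13 s of samples at 1000 Hz
```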
Digital anesthesia

To evaluate the effect of the absence of digital sensory feedback on multi-finger motor performance and motor synergy, subjects performed the experimental procedure described above in two sessions: (1) Control and (2) Anesthesia. These sessions were presented in a pseudo-random order across subjects with at least a 2-week interval between sessions. In the anesthesia session, subjects received digital anesthesia of the right-hand index and middle fingers (Staten Island University Hospital, Staten Island, NY) (see details in [14]). The locally injected anesthetic was a 50:50 mixture of 1% lidocaine and 0.5% bupivacaine, administered at the digital nerves in the web space. Up to three sets of injections could be performed per finger until the subject reported complete numbness in that digit. A low dosage was used for the initial injection and incrementally increased in later applications as needed (not exceeding 10 ml total per subject). This was done to block sensory but not motor nerve fibers in the injected finger [14,37]. A set of Von Frey hairs (Stoelting Co., Wood Dale, IL) was used to verify that tactile sensation was successfully blocked in the injected fingers (size 6.65, 300 g) and remained intact in the non-injected fingers and the rest of the hand (size 2.83, 0.07 g). Subjects who did not reach complete numbness in a digit after three sets of injections were excluded from data collection.

Data analysis

Experimental variables were analyzed offline using MATLAB (MathWorks), Excel (Microsoft), SPSS (IBM), and Origin (OriginLab). As described earlier, variables quantified in the MVC and enslaving tasks were used intermediately, either to set up further experimental tasks or in the UCM analysis; experimental variables from these two ancillary tasks are therefore described but not emphasized in our data presentation and report. In the MVC task, the maximal pressing force (F_MAX) for I, M, R, L, or IMRL was expressed in newtons. In the enslaving task, force production by the instructed finger (also called the master finger, i.e., I, M, R, or L) and the non-instructed fingers (also called enslaved fingers) was used to compute the n×n enslaving matrix (E) for the right hand, where n equals the total number of fingers involved in the task. For example, when the master finger was I, the enslaved fingers were M, R, and L. Entries in E represent the relative amount of force change in an individual finger versus the total force during single-digit force production (see details in [13,38]). In the synergy task, in order to evaluate the force contribution of each finger toward the overall force production required by the task, the individual force (N) at each finger was averaged over the intermediate second of each 3-s force level per trial. In addition, to evaluate each subject's task performance, we quantified the accuracy of the overall force production relative to the task-required template force by calculating the root mean square error (RMSE) for all force levels. As above, values averaged over the intermediate second of each 3-s force level per trial are reported in our results. Furthermore, to evaluate motor coordination among multiple fingers, i.e., whether the individual fingers were coordinated to stabilize the total force (F_TOT) produced by all of them, we quantified motor synergy within the framework of the UCM hypothesis. Within the UCM framework, individual finger force data (F) were converted into hypothetical commands to fingers, or modes (m), as m = [E]^-1 F, where E denotes the 4×4 enslaving matrix of the right hand computed from the enslaving task.
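As a rough sketch of how E and the finger modes could be obtained from the recorded data, one common implementation estimates each entry of E as the regression slope of an individual finger's force against the total force over the ramp trial. The cited methods papers give the exact construction used in the study, so the slope-based version below is an assumption on our part, and all names are ours.

```python
import numpy as np

FINGERS = ("I", "M", "R", "L")

def enslaving_matrix(ramp_forces):
    """Build the 4x4 enslaving matrix E.

    ramp_forces[j] is a (samples x 4) array of individual finger forces
    recorded while finger j was the master finger.  Entry E[i, j] is the
    least-squares slope of finger i's force against the total force over
    the ramp, i.e. the share of total-force change carried by finger i
    when finger j is instructed.
    """
    E = np.zeros((4, 4))
    for j, F in enumerate(ramp_forces):
        total = F.sum(axis=1)
        for i in range(4):
            E[i, j] = np.polyfit(total, F[:, i], 1)[0]   # slope only
    return E

def forces_to_modes(E, f):
    """Convert finger forces (4-vector or samples x 4 array) into
    hypothetical finger modes, m = E^{-1} f."""
    return np.linalg.solve(E, np.atleast_2d(f).T).T
```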
Thereafter, in the mode space, the total cross-trial variance (V_TOT) was calculated for each time sample based on the 25 trials performed by each subject. V_TOT consists of two variance components: (1) one lying within the UCM subspace (V_UCM) and (2) the other lying orthogonal to the UCM subspace (V_ORT). The former reflects mode variance that does not affect the performed value of F_TOT, while the latter reflects the amount of mode variance in the data set that leads to changes in F_TOT. An index ΔV was therefore used to quantify multi-finger synergy, calculated as the difference between the two variance components (V_UCM and V_ORT) normalized by the total amount of variance for each time sample:

ΔV = (V_UCM/(n-1) - V_ORT/1) / (V_TOT/n)    (1)

In the above equation, the total variance and its components are computed per dimension of the finger mode space: the dimension of the total variance space is n, while those of the UCM and ORT subspaces are n-1 and one, respectively. For example, the finger mode space is two-dimensional when I and M are involved in producing 5% MVC in the I+M+R+L condition, and the n in Eq 1 accordingly equals two. When ΔV > 0, more V_UCM (per dimension) than V_ORT is observed, reflecting a multi-finger synergy stabilizing F_TOT. In contrast, ΔV ≤ 0 can be interpreted as an anti-synergy, where individual finger forces co-vary to change F_TOT rather than stabilize it. Note that motor synergy is quantified in a redundant system, i.e., with more elemental variables (finger modes) than performance variables (total force). Theoretically this means that ΔV cannot be calculated for the first force level of the I+M+R+L condition, at 2.5% MVC, which is performed by only one finger (I). However, because feedback was provided on total force throughout the procedure, we analyzed all tasks in the redundant 4-dimensional space reflecting that feedback. In statistical analyses, ΔV indices were averaged over the intermediate second of each 3-s force level per subject.
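To make the decomposition concrete, the sketch below computes V_UCM, V_ORT, and ΔV for one time sample from a trials-by-modes array. It assumes that total force depends on the modes through the enslaving matrix, so the Jacobian of F_TOT with respect to the modes is taken as J = 1^T E; the study's own pipeline may differ in such details, and all names are ours.

```python
import numpy as np

def ucm_decomposition(modes, E):
    """Across-trials UCM analysis for total-force stabilization.

    modes : (trials x n) finger modes at one time sample
    E     : (n x n) enslaving matrix, so F_TOT = 1^T E m and the
            Jacobian of F_TOT with respect to the modes is J = 1^T E
    Returns per-dimension V_UCM and V_ORT, and the synergy index dV.
    """
    trials, n = modes.shape
    J = E.sum(axis=0)                    # 1^T E, a length-n row vector
    e_ort = J / np.linalg.norm(J)        # unit vector spanning the ORT subspace
    dev = modes - modes.mean(axis=0)     # per-trial deviations from the mean mode
    ss_tot = (dev ** 2).sum()
    ss_ort = ((dev @ e_ort) ** 2).sum()  # squared projections onto ORT
    v_ort = ss_ort / trials / 1          # ORT is one-dimensional
    v_ucm = (ss_tot - ss_ort) / trials / (n - 1)
    v_tot = ss_tot / trials / n
    return v_ucm, v_ort, (v_ucm - v_ort) / v_tot
```

Because ΔV computed this way is bounded (for n = 4 it cannot exceed 4/3), such indices are commonly Fisher z-transformed before parametric statistics; the ΔV_Z reported in the Results presumably refers to this standard transformation.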
Statistical analysis

We performed multiple repeated-measures analyses of variance (ANOVAs); all factors described below were within-subject factors. To evaluate the effect of selective digital anesthesia on the maximal force production of individual digits and of all digits together, a two-way ANOVA was performed on F_MAX with the factors Session (Control versus Anesthesia) and Cond_MVC (I, M, R, L, and IMRL). To determine whether the enslaving effect was altered after selective digital anesthesia, we performed a two-way ANOVA on the E entries for each master finger (I, M, R, or L) with the factors Session and Digit (I, M, R, and L). To identify the effect of digital anesthesia on the distribution of total force among the digits in the synergy task, we performed a three-way ANOVA on the individual finger forces (in newtons) at the highest force level (10% MVC), since subjects were asked to use all four fingers at this level in both the IMRL and I+M+R+L synergy tasks; this 3-way ANOVA included the factors Session, Cond_Synergy (IMRL versus I+M+R+L), and Digit. To examine subjects' task accuracy before and after partial removal of somatosensory feedback from the hand, a three-way ANOVA was performed on RMSE in the synergy task with the factors Session, Force-Level (four levels: 2.5%, 5%, 7.5%, and 10%), and Cond_Synergy. To investigate the effect of the absence of digital sensory feedback on multi-finger motor synergy, we performed similar repeated-measures ANOVAs on the index ΔV for the IMRL and I+M+R+L synergy tasks separately, with the factors Session and Force-Level (four levels for IMRL: 2.5%, 5%, 7.5%, and 10%; three levels for I+M+R+L: 5%, 7.5%, and 10%). The same two-way ANOVAs were also performed on the variance components V_UCM and V_ORT. When the assumption of sphericity was violated, the Greenhouse-Geisser correction of degrees of freedom was used. Post hoc tests for pairwise comparisons were performed with Bonferroni adjustments when appropriate. The level of significance was set at p < 0.05.
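For readers who want to reproduce this style of analysis outside SPSS, a minimal sketch of the Session × Force-Level repeated-measures ANOVA on the synergy index using statsmodels is shown below. The input file and column names are hypothetical, and AnovaRM requires a balanced design with exactly one observation per subject and cell (as here, after per-subject averaging).

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one z-transformed synergy index per
# subject x session x force level for the IMRL task.
df = pd.read_csv("dvz_imrl_long.csv")    # columns: subject, session, force, dvz

res = AnovaRM(data=df, depvar="dvz", subject="subject",
              within=["session", "force"]).fit()
print(res)  # F and p values for session, force, and their interaction
```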
Results

All subjects successfully completed the two ancillary tasks and the primary task following instructions in both the anesthesia and control sessions.

MVC task

The averaged F_MAX (mean ± standard error) across subjects, for individual fingers and for all fingers together, in both control and anesthesia sessions is plotted in Fig 2. While performing the voluntary maximal contractions, subjects produced lower F_MAX after the selective digital anesthesia procedure (main effect of Session: F[1,11] = 5.735, p < 0.001). In particular, subjects significantly reduced their maximal force production when using the anesthetized fingers (I and M) as well as the non-anesthetized little finger. When using all four fingers, however, the total F_MAX observed during the anesthesia and control sessions did not differ significantly (interaction effect of Session × Cond_MVC: F[4,44] = 2.813; p < 0.01; post hoc comparisons showed significant differences between sessions for the I, M, and L conditions) (S1 Table).

Enslaving task

The enslaving matrix entries (E, calculated for each subject for the anesthesia and control sessions separately) averaged across all subjects are presented in Table 1. As the master finger, the index finger showed the least enslaving force from other fingers, while the ring and little fingers were the most enslaved fingers (see bold values in Table 1) (main effect of Master Finger: F[3,33] = 376.461; p < 0.001). We did not observe any alteration of the enslaving matrix after digital anesthesia (no main or interaction effect of Session) (S2 Table).

Synergy task

In Fig 3, we plotted the averaged individual finger forces from all subjects in both sessions for the IMRL and I+M+R+L tasks separately. Subjects were asked to trace the force template target line using all four fingers together throughout the IMRL task; in the I+M+R+L task, subjects started with only I producing 2.5% MVC and added the next finger at each subsequent force level. The digit involvement shown in Fig 3 confirmed that subjects performed the tasks as instructed: the non-instructed fingers (MRL, RL, and L at force levels of 2.5%, 5%, and 7.5% MVC, respectively) barely produced force during the I+M+R+L task, while all four fingers contributed significantly to the total force in the IMRL task. At the highest force level (10% MVC), both tasks required the involvement of all four fingers. In this scenario, L was the least loaded of the four fingers (main effect of Digit: F[3,33] = 13.538; p < 0.001; post hoc comparisons showed that the force produced by L was significantly lower than that of I, M, and R, all p < 0.05). However, there was a discrepancy in the distribution of total force among the digits between the I+M+R+L and IMRL tasks: the four fingers shared the total force evenly in the I+M+R+L task but not in IMRL. For example, R and L together contributed almost as much as I and M (Anesthesia: 49%; Control: 52%) to the total force in the I+M+R+L task but contributed much less (Anesthesia: 34%; Control: 40%) in the IMRL task (interaction effect of Cond_Synergy × Digit: F[3,33] = 7.481, p < 0.005; post hoc comparisons showed that in IMRL the force produced by L was significantly lower than that of I, M, and R, and in I+M+R+L the force produced by M was significantly higher than that of I and L, all p < 0.05). Subjects reduced the contribution of the L finger from the control to the anesthesia session (I+M+R+L: from 23% to 19%; IMRL: from 12% to 8%) while retaining the force contributions of the other fingers in both sessions (interaction effect of Session × Digit: F[3,33] = 3.38, p < 0.05; post hoc comparisons showed that L produced a significantly lower force in the anesthesia than in the control session, with no significant difference between sessions for the other digits).

To quantify subjects' actual force performance relative to the task-required force, we plotted the average RMSE across subjects at the different force levels in both sessions for the IMRL and I+M+R+L tasks separately in Fig 4. In general, subjects made larger errors when using more digits than when using only a few. Higher RMSE values were observed in the IMRL than in the I+M+R+L task (main effect of Cond_Synergy: F[1,11] = 5.663; p < 0.05), and subjects showed higher error values when using three or four digits (7.5% and 10% MVC) than one or two digits (2.5% and 5% MVC) in the I+M+R+L task (interaction effect of Force-Level × Cond_Synergy: F[3,33] = 11.805, p < 0.001; post hoc comparisons showed that in the I+M+R+L task, RMSE values at the two lower force levels were significantly lower than those at the two higher force levels, whereas in the IMRL task, RMSE values at the 2.5% and 10% MVC force levels were higher than at the other two, all p < 0.05). This performance discrepancy between the IMRL and I+M+R+L tasks was present only during the anesthesia session; in the control session subjects made similar errors in both synergy tasks (interaction effect of Session × Cond_Synergy: F[1,11] = 4.658; p < 0.05).

In Fig 5, we plotted the average time profiles of the ΔV_Z indices across subjects during the anesthesia and control sessions for the IMRL and I+M+R+L synergy tasks. These indices were relatively high in both synergy tasks and transiently decreased when subjects moved from one force level to the next. Values of ΔV_Z increased as the instructed force production increased, corresponding to more involved fingers in the I+M+R+L task but not in the IMRL task, as indicated by a robust main effect of Force-Level (F[3,33] = 201.04; p < 0.001). There was also a marginal Force-Level × Cond_Synergy interaction (F[3,33] = 2.873; p = 0.051), arising from the differential effects of adding fingers versus adding force: in the I+M+R+L task, ΔV_Z increased relatively linearly as fingers were added, whereas ΔV_Z plateaued as the force level increased above 5% MVC in the IMRL task. ΔV_Z was also higher in the IMRL than the I+M+R+L task (main effect of Cond_Synergy: F[1,11] = 57.18; p < 0.001) across force levels. We observed no main effect of anesthesia (F[1,11] = 3.17; p > 0.1).
ΔV_Z summarizes the relative amounts of V_UCM (across-trials variance that does not affect task performance) and V_ORT (across-trials variance that does affect task performance). ΔV_Z increased as force production increased because V_UCM increased (main effect of Force-Level: F[3,33] = 28.11; p < 0.001, with V_UCM at each successive force level larger than at the previous one) while V_ORT did not increase as much (the main effect of Force-Level was significant, F[3,33] = 12.33; p < 0.001, but post hoc tests showed only that V_ORT at 10% was significantly larger than at the other force levels). Similarly, V_UCM was higher in IMRL than in I+M+R+L (F[1,11] = 14.36; p = 0.003) across force levels, while V_ORT was lower in IMRL than in I+M+R+L at the 2.5% force level (where only one finger was instructed to press) but similar between tasks at the other force levels (Cond_Synergy × Force-Level interaction: F[3,33] = 5.74; p = 0.003). While V_UCM was not significantly modulated by anesthesia, V_ORT was generally lower during anesthesia than control sessions (main effect of Session: F[1,11] = 6.357; p = 0.028), although it was significantly lower only at the 2.5% and 10% force levels (Session × Force-Level interaction: F[3,33] = 3.05; p = 0.042) (S3 Table).

Discussion

In the present study, we examined the effects of selective digital deafferentation on multi-finger synergies during isometric pressing as a function of total force requirement and the explicit involvement of different numbers of digits. Our results quantified these effects in three respects: (1) maximal force ability; (2) force-tracing performance; and (3) multi-finger synergies. In the introduction, we formulated two hypotheses regarding the effect of selective digital anesthesia on multi-finger synergies: first, that anesthesia would result in decreased indices of synergy, and second, that this decrease would be more evident at higher levels of force production. Neither hypothesis was supported by our results: synergies did not decrease under selective anesthesia, and differences between the anesthesia and control sessions did not depend on force production level. Below we discuss the roles that sensory information played in these results and interpret our findings in the context of the relevant literature.

Somatosensory contributions to maximal force abilities

The magnitude of voluntary force development relies on multiple factors, including motor unit recruitment and motor unit discharge rates [39][40]. We found decreased maximal force capacity after anesthesia, in agreement with previous findings [14,32,35] (Fig 2). These results point to a direct relation between maximal force ability and sensory-based contributions. Maximal voluntary force tasks require fast contractions (such as ramp contractions [41]) yet present no explicit force goal. For this reason, MVC tasks are often assumed to be feed-forward, and it is thus not obvious why reduced sensory feedback would lead to decreased force production capacity. One possible role for peripheral sensory signals (feedback processes) in MVC tasks is protective: with sensation lost, the central nervous system (CNS) may decrease force output as a cautionary measure so as not to injure the periphery. Others have suggested that deafferentation results in higher levels of co-contraction, resulting in lower net forces [42].
Local sensory deficits and non-local motor effects

Deafferentation-induced motor deficiency was not limited to the anesthetized (local) digits. Instead, we observed 'non-local' effects similar to our findings in earlier experiments using the same selective deafferentation model [14,32]. The non-local effects observed in the present study include a significant decrease in MVC of the little finger after sensory removal from other digits (I and M) (Fig 2). Additionally, the little finger significantly decreased its share of the force when working together with the other fingers during the synergy tasks (Fig 3). This could seem counterintuitive, since one might assume that a digit with intact sensation would compensate for those with reduced sensation by producing more of the force. We have previously interpreted similar findings based on the idea that integrating information from anesthetized and intact digits presents a larger challenge for the CNS [14,43]. We think another line of evidence for this interpretation is visible in the RMSE results. We quantified subjects' sub-maximal force performance by how much force production deviated from the task-required force target; our findings show that performance is not necessarily dependent on the amount of force produced, but rather on digital involvement. That is, when all four fingers were involved (IMRL task), RMSE was similar across force levels. In contrast, when fewer fingers were instructed to press (I or IM), lower values of RMSE were observed (Fig 4). Further, the reduction in RMSE occurred during the anesthesia but not the control session, as indicated by the interaction effect between sessions and tasks. This is consistent with the idea that task performance can be retained when only anesthetized digits are utilized, but that coordinating digits with different sensory abilities presents a particular challenge. This hypothesis could be further tested by having participants perform the task in the opposite direction (beginning pressing with L, then R, etc.) to disentangle the role of the index and middle fingers (which are stronger and less enslaved) from that of anesthesia in this effect. Nonetheless, the observation of anesthesia-related effects on both local and non-local motor outputs is consistent with the possibility that sensory information from individual digits may be shared among the others [32].

The role of somatosensory feedback in organizing multi-digit synergies

A major goal of the present study was to investigate the contribution of somatosensory information to the structure of inter-trial variance. In isometric pressing tasks, a synergic structure of variance appears to be closely related to the availability of visual feedback on the task variable, but in some cases other sensory modalities can play a role. For example, one study [16] altered subjects' proprioception by applying vibration to the palm or wrist surfaces and found that synergy strength decreased, although synergies were still present. Similarly, results from a recent prehension study [14] revealed a reduction in the synergy index in grasping tasks after digital anesthesia removed selective digital sensory feedback. However, many studies have reported loss of the synergic structure of variance in isometric pressing when visual feedback was removed [44][45], and Koh and colleagues [17] reported no change in the cross-trial structure of variance following removal of somatosensory feedback from all task-involved digits.
We did not find that digital anesthesia weakened synergies in the present isometric pressing task. In fact, we found strong synergies in the absence of somatosensory feedback, similar to Koh's report [17]. Note that both our paradigm and Koh's provided explicit visual feedback that allowed prompt and precise error correction during the task. This result suggests, in agreement with previous studies, that a synergic structure of variance can be readily organized with visual feedback alone. Another piece of evidence for this interpretation is our finding of synergies even at the 2.5% force level of the I+M+R+L task: an instance where theoretically no synergy should be observed. Our results suggest that merely showing visual feedback from all fingers, even when participants are explicitly instructed to press with only one finger, is enough to induce a synergic structure of across-trials variance. These findings corroborate previous studies of the effect of adding digits to isometric pressing [44][45], which also reported a minimally altered structure of variance as additional digits were added to an isometric pressing task, and a synergic structure of variance when subjects were instructed to press with the index finger only. Even if somatosensory feedback is not necessary for organizing multi-digit synergies during isometric pressing tasks, it appears to play a role in motor variance. In particular, we observed decreases in V_ORT under anesthesia, indicating that somatosensory feedback actually drives small fluctuations in task performance (consistent with the lower RMSE observed under anesthesia). High indices of synergy are often interpreted as healthy (especially because some neurological populations display lower indices of synergy); therefore, our observation of increased ΔV_Z during the anesthesia session may seem a surprising outcome. In fact, increased indices of synergy can also be observed in highly decoupled systems that are joined only at a high feedback level. A good example of this phenomenon is the very large index of synergy observed in isometric tasks performed by two people with shared visual feedback [46], which occurs because, across trials, the variability of each person's motor output is very high compared with when one person produces all of the output. In that case, visual feedback ensures that the task is performed at an acceptable level across trials (V_ORT is kept relatively low), leading to very high ΔV_Z values. Similarly, if the CNS has little access to the forces produced by individual fingers, it may simply find a solution that works on a given trial instead of further refining individual force levels based on other criteria not explicitly involved in task completion, like comfort. Dovetailing with the theme of somatosensory information moderating "communication" between digits, in our I+M+R+L task ΔV_Z increased approximately linearly in the anesthesia session. In contrast, the increase in ΔV_Z associated with adding digits saturated in the control session. These results could occur if the addition of digits is relatively independent under anesthesia (e.g., because the CNS does not have access to information about what the other fingers are doing), resulting in relatively high variance across trials in the forces (modes) produced by individual fingers. In contrast, finger forces may be re-organized by the CNS as additional fingers are added when somatosensory function is intact, resulting in more stereotypical values of finger forces across trials.
These more stereotypical values could represent individual preferences for sharing force production between fingers. A synergic structure of inter-trial variance could be a product of feed-forward control [15,18], where finger forces for a given trial are "selected" by the CNS from some distribution and implemented with motor noise, or of feedback control processes [8,19], where output variability that does not affect task performance is disregarded. While feedback in general is very important for organizing synergies, our results suggest that somatosensory information in particular might be used to optimize secondary, implicit objectives of the task, like comfort. This can be seen as a stage in synergy learning [12] within the framework of the uncontrolled manifold hypothesis [8], where task performance is first ensured before the CNS settles on specific (preferred) levels of elemental output. Similarly, it could be explained in the parlance of optimal feedback control [18]: reduced availability of somatosensory information may alter the ability of the CNS to optimize a cost function that includes terms for individual finger forces (evaluated in terms of somatosensory information from cutaneous receptors), or change the cost function being evaluated to disregard such terms if they are known to be corrupt; however, given that the task is performed in terms of visual feedback, the CNS still preferentially stabilizes outputs that are consistent with task success.

Conclusions

Temporary somatosensory deprivation via digital anesthesia decreases maximal voluntary contraction force, but it does not detrimentally affect target-tracing force performance or the organization of multi-finger motor synergies. Our study indicates that the CNS is capable of retaining force-stabilizing synergies in a redundant system. However, there may be costs associated with coordinating both sensory-impaired and sensory-intact motor elements. These results may be explained in the optimal feedback control context by assuming that reduced somatosensory input does not directly interfere with the CNS' ability to execute an isometric task with redundant elements, but the altered structure of variance may indicate that it interferes with the CNS' ability to optimize secondary motor goals related to comfort or the distribution of force to specific digits.

Supporting information S1
Simulation Analysis of Voltage Distribution of 500kV Degraded Magnetic Insulator String

Insulators are important power grid equipment and key components for ensuring electrical insulation performance. Porcelain insulators with degraded insulation performance generally have low or zero insulation resistance and break down under lightning overvoltage or even normal operating voltage. The thermal effect generated by the strong power-frequency fault current flowing through a degraded porcelain insulator often causes the iron cap to explode, leading to serious accidents such as string drop and conductor grounding. If there are several consecutive zero-value insulators in an insulator string, the voltage across the insulators adjacent to the zero-value units increases compared with normal conditions, while normal insulators farther from the zero-value units are less affected. If zero-value insulators occur at several different positions in the string, and their number is large enough and their positions sufficiently dispersed, the voltage across the entire string is elevated overall.

Introduction

Insulators are important power grid equipment and key components for ensuring electrical insulation performance. At present, porcelain insulators, glass insulators, and composite insulators are all widely used due to their respective characteristics; among them, porcelain insulators have the longest history of application. Porcelain insulators may suffer degraded insulation performance due to long-term exposure to the electrical and mechanical loads of the line and to harsh natural conditions such as wind, rain, and lightning, or due to manufacturing quality defects. Porcelain insulators with degraded insulation performance generally have low or zero insulation resistance and break down under lightning overvoltage or even normal operating voltage. The thermal effect generated by the strong power-frequency fault current flowing through such an insulator often causes the iron cap to explode, leading to serious accidents such as string drop and conductor grounding [1][2][3]. In recent years, the zero-value problem of porcelain insulators newly commissioned into grid operation has become increasingly prominent, and frequent insulator explosions in service have seriously threatened the safe and stable operation of the power grid. Take Hubei Company as an example: since 2016, there has been at least one porcelain insulator bursting accident per year on its transmission lines of 110 kV and above. For example, the 220 kV Saitao line was struck by lightning in 2016; because the faulted string contained multiple (low) zero-value insulators, lightning flashover occurred and the fault current passed through the (low) zero-value insulator steel caps, causing the cement inside the caps to rapidly heat up, expand, and explode, finally causing the string to burst at the (low) zero-value insulators. In 2017, an entire string of insulators on the 220 kV Xiaomeng No. 2 circuit burst, and the deterioration rate of insulators on the faulty tower was found to be as high as 39.73%.
In 2018, there were three consecutive failures on the 110 kV E-Xia first and second lines and the Lihong line, and a string drop on the E-Xia second line inside the substation caused the ground turf to catch fire. Insulation resistance tests of replaced insulators of the same manufacturer and batch found a deterioration rate as high as 74.2%. The fundamental cause lies in the presence of low- and zero-value insulators in porcelain insulator strings; at the same time, these failures expose inadequacies in the daily inspection work for low- and zero-value porcelain insulators [4][5][6][7].

Fig.1 Cross section of insulator

The insulator used in this simulation is composed of four main parts: the iron cap, the porcelain disc, the cement binder, and the steel foot. If the part of the iron cap used to install the locking pin is ignored, the porcelain suspension insulator can be regarded as an axisymmetric structure. Therefore, in order to reduce the amount of calculation, the simulation adopts a two-dimensional axisymmetric model, and only half of the insulator is modeled. The advantage of this approach is that it greatly reduces the computation required for the simulation and greatly shortens the calculation time. The cross section of the insulator is shown in Fig.1. For the simulation analysis of multiple insulators, geometric modeling is used to construct a cross-sectional model of the multi-unit insulator string so that the boundary conditions can be set. During the simulation, the program sweeps and rotates the two-dimensional cross section to obtain a complete three-dimensional model of the insulator string, as shown in Fig.2.
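The results below come from the finite-element simulation of this 3D model. As an independent sanity check on the qualitative behavior, the classic lumped-capacitance ladder model of a suspension string can be solved in a few lines, as in the sketch below. This is only illustrative: the capacitance values are typical textbook orders of magnitude rather than parameters of the simulated 500 kV string, and a zero-value disc is crudely approximated as a near-short by inflating its capacitance.

```python
import numpy as np

def string_voltage_distribution(n_discs=28, C_disc=50e-12, C_ground=5e-12,
                                C_cond=0.5e-12, U=1.0, zero_value=()):
    """Per-disc voltage drops (p.u.) for a suspension insulator string.

    Node 0 is the conductor (potential U); node n_discs is the grounded
    tower (potential 0).  Disc i couples node i-1 to node i through its
    capacitance; every internal node also has stray capacitance to
    ground (C_ground) and to the conductor (C_cond).  A zero-value disc
    is modelled as a near-short by inflating its capacitance.
    """
    C = np.full(n_discs, C_disc)
    for i in zero_value:                 # 1-based disc indices, disc 1 at conductor
        C[i - 1] *= 1e4
    n = n_discs - 1                      # internal nodes 1..n
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(1, n + 1):            # charge balance at internal node k
        A[k - 1, k - 1] = C[k - 1] + C[k] + C_ground + C_cond
        if k > 1:
            A[k - 1, k - 2] = -C[k - 1]
        if k < n:
            A[k - 1, k] = -C[k]
        b[k - 1] = C_cond * U            # coupling to the conductor at U
        if k == 1:
            b[k - 1] += C[0] * U         # disc 1 connects directly to the conductor
    V = np.concatenate(([U], np.linalg.solve(A, b), [0.0]))
    return -np.diff(V)                   # voltage across disc 1, 2, ..., n_discs

healthy = string_voltage_distribution()
faulted = string_voltage_distribution(zero_value=(2, 3, 4))
for d in (1, 3, 5, 14):
    print(f"disc {d}: {healthy[d-1]:.4f} -> {faulted[d-1]:.4f} p.u.")
```

Running this with discs 2-4 shorted reproduces the pattern reported below: the shorted discs drop almost no voltage, their immediate neighbors take on a noticeably larger share, and discs far from the fault are barely affected.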
The influence of multiple zero-value insulators at different positions on the voltage distribution of the insulator string

In this simulation, three adjacent zero-value insulators were placed on the conductor side, in the middle section, and on the grounding side of the insulator string, and the cases were compared.

The 2nd, 3rd, and 4th units as zero-value insulators

To make the results easier to analyze, the voltage distribution of the string containing zero-value insulators is compared with that of a normal string, as shown in Fig.3.

Fig.3 Comparison of voltage distribution

It can be clearly seen from Figure 3 that the voltage across the insulators adjacent to the zero-value units increases compared with normal conditions, while normal insulators farther from the zero-value units are less affected.

The 13th, 14th, and 15th units as zero-value insulators

To facilitate analysis of the results, the voltage distribution of the string containing zero-value insulators is compared with that of a normal string, as shown in Fig.4.

Fig.4 Comparison of voltage distribution

It can be clearly seen from Figure 4 that the voltage across the insulators adjacent to the zero-value units increases compared with normal conditions, while normal insulators farther from the zero-value units are less affected.

The 25th, 26th, and 27th units as zero-value insulators

To facilitate analysis of the results, the voltage distribution of the string containing zero-value insulators is compared with that of a normal string, as shown in Fig.5.

Fig.5 Comparison of voltage distribution

It can be clearly seen from Figure 5 that the voltage across the insulators adjacent to the zero-value units increases compared with normal conditions, while normal insulators farther from the zero-value units are less affected.

The influence of multiple dispersed zero-value insulators on the voltage distribution of the insulator string

To facilitate analysis of the results, the voltage distribution of the string containing zero-value insulators is compared with that of a normal string, as shown in Fig.6.

Fig.6 Comparison of voltage distribution

It can be seen that when the number of zero-value insulators in the string is large enough and their positions are relatively dispersed, the voltage across the entire string is elevated overall.

5. Conclusion

If there are several consecutive zero-value insulators in an insulator string, the voltage across the insulators adjacent to the zero-value units increases compared with normal conditions, while normal insulators farther from the zero-value units are less affected. If zero-value insulators occur at several different positions in the string, and their number is large enough and their positions sufficiently dispersed, the voltage across the entire string is elevated overall.
Feasibility of Computer-Aided Design in Limb Lengthening Surgery: Surgical Simulation and Guide Plates

Objective: To evaluate the feasibility and utility of computer-aided design (CAD) in the surgical treatment of leg length discrepancy (LLD) using monorail external fixators.

Methods: In the present case series, we retrospectively analyzed seven patients diagnosed with LLD who were surgically treated using a monorail external fixator between June 2018 and August 2020. A personalized surgical emulation of each patient was designed using CAD based on preoperative CT scans to measure limb parameters. Through reverse engineering, a surgical guide plate was then designed to assist with correcting the limb deformity. Patient general information and clinical history, leg length, mechanical lateral distal femoral angle (mLDFA), anatomical anterior distal tibial angle (aADTA), and surgical parameters were recorded during the perioperative period. Three months after external fixator removal, distraction-consolidation time (DCT), healing index (HI), and lower extremity function score (LEFS) were calculated and statistically analyzed by paired t-test.

Results: The mean limb lengthening achieved was 6.41 ± 2.54 (range, 3.30-10.54) cm with either varus or valgus correction. The mean operative duration was 151 ± 41.87 (range, 84-217) minutes and mean blood loss was 53.58 ± 22.51 (range, 25-87) ml. The mean distraction-consolidation time was 3.67 ± 1.13 (range, 2.5-6.0) months and the mean external fixator duration was 11 ± 2.45 (range, 8-14) months. The mean healing index (HI) was 18.11 ± 3.58 (range, 12.8-22.7) days/cm. Mean LEFS scores improved postoperatively from 32.17 ± 8.57 (range, 24-45) to 61.17 ± 6.68 (range, 50-67), a significant difference (t = -14.26, P < 0.001).

Conclusions: Simultaneous length and angular correction can be achieved by incorporating CAD into the surgical treatment of patients with LLD, without compromising postoperative lower limb function. CAD demonstrates utility in the surgical treatment of LLD by improving the functionality of monorail external fixators.

Introduction

Leg length discrepancy (LLD) is one of the more common limb deformities: a 2005 review reported that greater than 90% of the general population has at least a 1 mm discrepancy in leg length [1]. Numerous factors contribute to LLD, including congenital dysplasia and traumatic injuries [2,3]. Mechanical imbalance of the lower limbs leads to pelvic tilt and compensatory scoliosis, followed by impaired mobility and hip pain. LLD may also cause imbalance in load distribution through the lumbar discs and facet joints, contributing to degenerative changes in the intervertebral joints [4][5][6]. LLD is frequently accompanied by abnormal bone angles, adding to the physical and mental burden placed on patients. Accordingly, early diagnosis and treatment of LLD can substantially improve patient satisfaction and quality of life. LLD is currently diagnosed based on measurement of the lower limbs from two-dimensional standing radiographs [7,8]; however, three-dimensional parameters provide improved reliability over two-dimensional images, and a more comprehensive understanding of preoperative parameters informs treatment options. LLD can be treated conservatively or surgically depending on the complexity and severity of each case. An LLD of less than 2 cm can be corrected using insoles or external orthoses, while an LLD greater than 2 cm is commonly considered an indication for surgical intervention [9].
A variety of lengthening methods have been used clinically, including the Ilizarov apparatus, the monorail external fixator, and intramedullary lengthening nails, each with its specific strengths and weaknesses [10,11]. The Ilizarov apparatus, commonly known as the circular external fixator, is one of the earliest external fixators and represents a robust device for the surgical correction of complex deformities [12]. However, its complex installation procedure exposes patients to longer surgery, which markedly increases the risk of postoperative complications. The Taylor spatial frame is a modified version of the Ilizarov apparatus that uses computer software to calculate the angular rotations, displacement, and overall lengthening required; however, this device relies on patient compliance and is comparatively more expensive [13,14]. The monorail external fixator is frequently used in leg lengthening surgery due to its ease of use and adequate structural support; a commonly reported weakness is the strict requirement for proper alignment of the Schanz screws during installation. The magnetic lengthening nail is a novel apparatus recently introduced for limb lengthening [15]. Its major advantage is that no extracorporeal device is needed; however, it is limited by the small range of lengthening (3-4 cm) it can provide, and previous studies have reported an increased risk of implant failure beyond 15 months of fixation, which warrants careful consideration in patients requiring prolonged fixation [16]. In addition, patients with LLD may present with an angular deformity, which further increases the technical difficulty of surgical correction and the risk of surgical complications. The lengthening strategies presently in clinical use cannot provide a satisfactory solution for every LLD patient, which further emphasizes the need for improved treatment methods.

The main treatment principle of LLD is lengthening of the shortened leg in order to achieve a physiological limb length similar to that of the healthy leg. The difference in limb length must first be measured accurately, followed by selection of a suitable limb lengthening method. An LLD limb accompanied by angular deformities increases the complexity of correction, and traditional treatment methods may not suffice. Acquiring and processing the various limb parameters turns the clinical problem into a mathematical one. CAD has been widely used in the field of orthopaedics and is garnering increasing attention. A previous review of the application of computer-assisted surgery and 3D printing reported improved treatment satisfaction and aesthetics in maxillofacial orthopaedic surgery [17]. Xin et al. reported successful CAD-based preoperative surgical simulations and pedicle subtraction osteotomy guide plates in the treatment of thoracolumbar deformities, with results showing accurate intraoperative osteotomy [18]. Through reverse engineering software, preoperative patient data can be quantitatively analyzed to allow emulation of the surgical procedure and design of a personalized treatment plan for both surgeon and patient. Coupled with 3D printing, the preoperative surgical simulation is then materialized in the form of a surgical guide plate, which assists in replicating the planned procedure.
This minimizes dependence on the subjective judgment of individual surgeons, thereby reducing overall technical difficulty and surgical risk, in addition to shortening surgical duration. Based on clinical trends in LLD patients and the need for personalized treatment, novel customized approaches are required to increase treatment efficacy and patient satisfaction. In the present study, we developed computer-aided surgical emulation and 3D-printed guide plates to allow simultaneous correction of LLD and angular deformities using monorail external fixators. Accordingly, the present study aimed to: (i) measure and analyze patient limb parameters at a 3D level; (ii) evaluate the utility of preoperative computer-assisted simulation of the surgical procedure; and (iii) evaluate the feasibility of CAD-designed surgical guide plates in improving the surgical treatment of LLD patients using a monorail external fixator.

Patient Population

Clinical data of patients undergoing surgical treatment of LLD at our medical center between June 2018 and August 2020 were retrospectively collected and analyzed. The inclusion criteria for the present study were: (i) voluntary participation and signed informed consent; (ii) a diagnosis of LLD with a length discrepancy greater than 2 cm requiring surgical correction. The exclusion criteria were: (i) loss to follow-up; (ii) receipt of other lengthening treatment prior to surgery. This study was approved by the local ethics institute (K-2018-137-04) and all patients provided signed informed consent before surgery.

Preoperative Limb Parameters

All patients underwent preoperative computed tomography (CT) of the lower limbs. During preoperative CT scanning, a metal object was placed at the approximate surgical site and marked to facilitate the design and proper placement of the surgical guide plate. Raw DICOM data were imported into Mimics 21.0 (Materialise Software, Leuven, Belgium) to reconstruct a 3D model of the lower extremities. The resulting STL files were then transferred to Imageware 13.0 (UGS Corporation, Plano, Texas, USA) to compare the healthy and affected limbs (limb length and either the mechanical lateral distal femoral angle, mLDFA, or the anatomical anterior distal tibial angle, aADTA, were recorded; Figure 1). During preoperative planning, other lower limb angles, including the anatomical and mechanical femoral or tibial angles, could also be modeled; however, for the purpose of the present study, only the mLDFA and aADTA were included. After input of the STL data into Imageware, the area of interest could be selectively displayed or hidden. A direct visual understanding of the deformity in multiple views could be obtained through rotation, adjustment of the STL data, and the mirroring function. Using the point-based rendering function, we were able to accurately measure the length of the lower limb and the angle parameters to compare the healthy and affected limbs.

Surgical Simulation and Guide Plate Design

A personalized surgical plan was designed according to the complexity of each case, with an example shown in Figure 1. Our surgical guide plates were designed with the healthy limb used as reference. We first determined the site of deformity as either femoral or tibial, and the site of osteotomy was then selected according to the affected bone. The osteotomy was emulated within the software, and the abnormal limb parameters were corrected by rotation and translation of the fragments to achieve satisfactory surgical positioning, as sketched below.
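As an illustration of the kind of geometric operation this emulation involves, the snippet below rotates the vertex cloud of a distal bone fragment about a hinge axis through the osteotomy site by a target angular correction (e.g., the difference between the affected and healthy mLDFA). This is a minimal sketch of the general idea, not the actual Imageware workflow; the file name, axis, and angle are hypothetical.

```python
import numpy as np

def rotate_about_axis(points, origin, axis, angle_deg):
    """Rotate a point cloud (e.g., STL vertices of the distal fragment)
    about a hinge axis through the osteotomy site (Rodrigues' formula)."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    a = np.radians(angle_deg)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    R = np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)
    return (points - origin) @ R.T + origin

# Hypothetical example: correct a 10 degree varus deformity by rotating the
# distal fragment about an anteroposterior hinge located at the osteotomy.
verts = np.load("distal_fragment_vertices.npy")          # hypothetical file
corrected = rotate_about_axis(verts,
                              origin=np.array([0.0, 0.0, 0.0]),
                              axis=np.array([0.0, 1.0, 0.0]),
                              angle_deg=10.0)
```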
Schanz screw placement and trajectory were determined on the premise that the mechanical axis of the affected limb would be restored. After the position and trajectory of the Schanz screws had been determined, the corrected bone model was restored to its deformed state. A surgical guide plate was then constructed by reverse engineering, allowing surgical emulation of the osteotomy site location and determination of the perpendicular screw trajectory from the bone towards the skin. Convex protrusions and semi-tubular channels were added to the guide plate to function as landmarks for proper instrument positioning. The semi-tubular channels, ranging from 15 to 20 mm in width, acted as drill sleeves to ensure perpendicular insertion of the Schanz screws without the need for additional equipment.

3D Printing of Patient-Specific Guide Plates

The guide plates used in the present study were resin-based and certified for use in medical devices, providing adequate stability and rigidity. All guide plates were printed at the National Organization Function Reconstruction Technology Research and Engineering Center of South China University of Technology using photocuring integrated molding technology and sterilized before use (Figure 2).

Surgical Method

Patients were placed in the supine position after anesthesia and the surgical site was disinfected. The guide plate was placed upon the exposed skin according to the preoperative simulation. An incision of 1 cm was made below each screw placement channel and the deep tissues were separated using forceps. A Kirschner wire was drilled into the proximal and distal tracks of the drilling channel to stabilize the guide plate and mark the position of screw insertion; these proximal and distal wires were kept in place to prevent movement of the guide plate. A Kirschner wire was then drilled into each channel to allow proper positioning of the Schanz screws; after its removal, the resulting aperture in the bone was used to facilitate screw insertion. An incision 1-2 cm in length was made below the osteotomy site marked on the guide plate and the bony surface was exposed. The osteotomy was completed using miniature drills and an osteotome. In cases requiring tibial lengthening, fibular osteotomy was performed at the mid-upper section of the lower leg to prevent lateral malleolus instability. Drilling tracks were made according to the axial position after surgical correction rather than the anatomical presentation during surgery, thereby allowing simultaneous lengthening and angular correction. Schanz screws were then drilled into each channel, and the external fixator was fixed upon the screws following osteotomy. A typical surgical procedure is shown in Figure 2.

Postoperative Rehabilitation and Management

Patients were administered intravenous antibiotics to prevent postoperative infection of the surgical site. Lower limb movement, sensation, and blood flow were monitored for postoperative neurovascular traction injury. Patients were encouraged to begin bedside rehabilitation (flexion of the knee and hip) from the second postoperative day. After postoperative radiography, the external fixator was lengthened at a rate below 1 mm/day; the rate differed for each patient according to the adjacent soft tissue and pain intensity. Patients were followed up for 6 months after surgery (at 1, 2, 3, and 6 months).
A radiographic assessment was performed at each follow-up visit to evaluate the degree of extension and bone consolidation. The external fixator was removed only after satisfactory extension and bone-bearing capacity were confirmed. Three months after removal of the external fixator, patients were followed up to evaluate bone consolidation.

Patient Parameters
Operative evaluation was performed by measuring intraoperative blood loss and operative duration. Postoperative evaluation was performed by measuring the following parameters.

Distraction-Consolidation Time (DCT)
Distraction-consolidation time (DCT) is defined as the time from surgery to radiographic evidence of bone consolidation in three of four cortices, reflecting the formation of bone callus. Evaluation of DCT assists in determining suitable postoperative weight-bearing and the timing of removal of the external fixator.

Healing Index (HI)
The healing index (HI) is defined as the ratio of the number of postoperative days until consolidation to the length of bone lengthened (days/cm). The index can be used to evaluate the rate of bone growth and tailor personalized postoperative functional exercise plans.

Statistical Analyses
Data were recorded as mean ± standard deviation and analyzed in SPSS version 22 (IBM Corp., Armonk, NY, USA). The paired t-test was used to compare lower extremity function scores. P-values less than 0.05 were regarded as statistically significant.

General Results
A total of seven patients diagnosed with LLD who underwent surgical correction at our hospital between June 2018 and August 2020 were included in the present study. The mean patient age was 11 ± 5.91 (range, 1-21) years. The study population comprised two male and five female patients. Two patients had congenital LLD; the remaining five had acquired disease (two cases of post-traumatic epiphyseal arrest, two cases of fibrous dysplasia, and one case of tibial pseudarthrosis). Three patients had LLD with knee varus, two with knee valgus, and two with tibial procurvatum. The basic parameters of each patient are shown in Table 1. A typical case from our study is represented in Figure 3 (a separate typical case is shown in Figures 4-6).

Intraoperative Results
All surgical procedures were performed according to the preoperative surgical simulation and surgical design. Closed osteotomy and Schanz screw placement were performed with the aid of a surgical guide plate without the need for intraoperative fluoroscopy. The mean number of Schanz screws used was 9.57 ± 2.29 (range, 6-13). The mean operative duration was 151 ± 41.87 (range, 84-217) minutes and the mean intraoperative blood loss was 53.58 ± 22.51 (range, 25-87) ml.

Functional Evaluation
All patients underwent standard postoperative care, functional rehabilitation, and regular wound dressing changes. Blood chemistry was monitored at regular intervals to evaluate postoperative condition, and patients were discharged upon normalization of inflammatory indices. Radiographs were taken at each follow-up appointment. Patient parameters were as follows: the mean length of extension in the operated limbs was 6.41 ± 2.54 (range, 3.30-10.54) cm, the mean DCT was 3.67 ± 1.13 (range, 2.5-6.0) months, and the mean retention time of the external fixator was 11 ± 2.45 (range, 8-14) months. The mean final HI was 18.11 ± 3.58 (range, 12.8-22.7) days/cm. Lower extremity function score (LEFS) criteria were used for preoperative and postoperative evaluation.
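As a concrete illustration of the HI computation and the style of paired comparison used here, the snippet below evaluates HI for one hypothetical patient and runs a paired t-test on made-up pre/post LEFS scores; none of these numbers are the study data.

```python
from scipy import stats

def healing_index(days_to_consolidation, lengthening_cm):
    """HI (days/cm): postoperative days until consolidation per cm gained."""
    return days_to_consolidation / lengthening_cm

# Hypothetical patient: consolidation at 110 days after 6.0 cm of lengthening.
print(f"HI = {healing_index(110, 6.0):.1f} days/cm")  # -> HI = 18.3 days/cm

# Paired pre/post comparison on made-up LEFS scores (n = 6).
lefs_pre  = [24, 28, 30, 33, 38, 45]
lefs_post = [50, 58, 60, 63, 66, 67]
t, p = stats.ttest_rel(lefs_pre, lefs_post)
print(f"T = {t:.2f}, P = {p:.5f}")
```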
One patient in our study was too young for lower limb function scoring. The mean LEFS score increased from 32.17 ± 8.57 (range, 24-45) preoperatively to 61.17 ± 6.68 (range, 50-67) after surgery (T = -14.26, P < 0.001); the data are shown in Table 2.

Complications
During the follow-up period, two patients presented at 1 week and 1 month after discharge, respectively, with erythema and edema at the pin sites. Both patients were readmitted to the hospital and treated symptomatically with regular dressing changes and intravenous antibiotics. Infection was diagnosed upon confirmation of increased inflammatory indicators (C-reactive protein and procalcitonin). Bacterial cultures were conducted on exudates and suitable antibiotics were selected for each patient. Patients were discharged after resolution of the inflammatory indices and symptoms.

CAD Increases the Accuracy of the Analysis of LLD Parameters
The presence of LLD disrupts limb biomechanics, leading to pelvic tilt and compensatory scoliosis, commonly manifesting as improper pressure distribution on the lumbar vertebrae and articular processes 4,6. This leads to progressive immobility, pain, and progressive degeneration of the lower limbs and joints. Symptomatic LLD greater than 2 cm is commonly accepted as an indication for surgical correction 13. This further emphasizes the importance of accurately obtaining limb parameters to determine surgical indications in individual patients. In the present study, we measured limb parameters (length, curvature, and angular measurements) from 3D reconstructions obtained from limb CT scanning. This allowed us to obtain multiple measurements from several angles, an advantage over 2D imaging, which may be unable to sufficiently display lesions or defects due to its planar view 19.

Preoperative CAD-Based Simulation Improves the Surgical Procedure
Previous studies have reported the advantages of preoperative surgical simulation, particularly for complex orthopaedic surgeries 20,21. We utilized reverse engineering using patient preoperative parameters, the healthy limb, patient age, epiphyseal growth plates, and parental height as references. Personalized treatment and surgical plans were then designed for individual patients by selecting a suitable degree of lengthening, site of osteotomy, trajectory of screw insertion, and number of screws required. Multiple comparisons of different surgical plans were performed, allowing selection of the most suitable strategy. This method allows comprehensive testing and prediction of operative outcomes. Surgical treatment was performed according to the predetermined surgical strategy, effectively reducing dependence on personal subjectivity. Farsetti et al. and Tjernström et al. reported intraoperative blood loss ranging from 150 to 600 ml using traditional treatment procedures 22,23. In the present study, the mean intraoperative blood loss was significantly lower at 53.58 ± 22.51 (range, 25-87) ml. This may be due to the small incisions used in our surgeries minimizing soft tissue trauma, indicating a further benefit of CAD-based surgical emulation. In addition, we did not encounter intraoperative deviations from the surgical plan in our case series, further demonstrating the utility of CAD as a surgical guide.

CAD Guide Plate-Assisted Monorail External Fixation Provides a Suitable Surgical Method for the Treatment of LLD
The monorail external fixator allows axial extension and is relatively easy to install compared with the other apparatus currently used clinically.
Decreased technical difficulty shortens operative duration, and the improved mobility increases patient satisfaction. However, this approach has the major limitation of being unable, by itself, to correct angular deformities of the limb 24. In the present study, we used a computerized simulation of the surgical method and personalized guide plates during installation of the monorail external fixator. We were able to accurately insert the fixator screws into positions determined before surgery, relative to the corrected alignment, without the need for multiple intraoperative radiographs. Surgical guide plates have been widely used in trauma and spine surgeries and have been reported to improve the overall surgical process 25,26. Consistent with these reports, the use of guide plates in our study greatly improved the precision of Schanz screw positioning. Traditionally, determination of screw trajectory is performed using multiple intraoperative fluoroscopies to avoid fixation failure or injury to surrounding vasculature and nerves 27,28. Computerized design based on preoperative parameters allows more personalized guide plate design and surgical planning through preoperative emulation. Increasing the precision of screw positioning during fixation greatly reduces the risk of articular rupture and fixation failure. Computer-assisted planning of the osteotomy also shortens overall operative duration, further reducing intraoperative bleeding, and effectively eliminates the risk of radiation exposure during surgery. Using CAD-based preoperative surgical emulation and 3D-printed surgical guide plates to guide the procedures, we found that follow-up parameters (DCT, HI, LEFS) were comparable to those of other studies of leg-lengthening procedures. Zak et al. 29 used magnetically actuated intramedullary nails to treat LLD in a case series of 19 patients and reported an average DCT of 8.4 months and an average HI of 72.8 days/cm. In a similar study, Cosic et al. 15 reported an average DCT of 8.9 months and an average HI of 83 days/cm using PRECICE nails in the treatment of 21 patients with LLD. However, the average age of the participants in these studies was 43 and 36.4 years, respectively, which may explain the relatively high DCT and HI values, as adults have poorer bone-healing capacity than adolescents. In a study by Szymczuk et al. 30 comparing treatment results following the use of monorail external fixators (mean patient age 9.4 years) and intramedullary nails (mean patient age 15.4 years), the average HI was 29.3 days/cm and 34.77 days/cm, respectively, a difference that was not statistically significant. The mean DCT and HI in our study were similar to the results reported in the studies mentioned above, indicating promising LLD treatment results. CAD-based surgical emulation and 3D-printed guide plates demonstrate utility in aiding the surgical treatment of LLD using monorail external fixators. Accurate osteotomy and screw placement allow simultaneous correction of length and angular deformity, thereby improving postoperative quality of life.

Limitations
The primary limitation of the present study was the small retrospective case series, due in part to the long duration of external fixation leading to loss of patients from follow-up. Proper positioning of the surgical guide plate on the skin of the operated limb may also represent a technical limitation.
Skin and muscle have a degree of elasticity, leading to difficulties in placing the surgical guide plate exactly according to the preoperative design and potentially causing a shift in the predetermined positions of the screw trajectories and osteotomy. The surgical guide plates used in the present study maintained a fixed position of the proximal and distal Schanz screws relative to the osteotomy line, so a small displacement of the osteotomy does not affect the treatment result (either mLDFA or aADTA). In cases where the angular correction of the LLD limb deviates by more than 5° from the preoperative plan, computer-assisted design can be used to manufacture a personalized external fixator using the original screws.

Conclusion
In the present study, the use of computer-aided design of guide plates and personalized surgical planning through emulation allowed surgical correction based on 3D modeling. Incorporation of CAD into the diagnostic procedure provides a more comprehensive understanding of each LLD limb, and combining surgical simulation with surgical guide plates allows more personalized LLD surgery. Future studies of different types of external fixators in larger study populations, combining augmented reality or infrared marking, are required to further strengthen the benefits and overcome the limitations of our reported method.

Declarations
Ethics Approval and Consent to Participate: This study was approved by the Ethics Institute of Guangzhou First People's Hospital (K-2018-137-04). All of the patients signed informed consent before surgery.
Consent for Publication: All the authors in this study agreed to be involved and have agreed upon the submission and publication of this manuscript.
Availability of Data and Materials: All the data and materials in this study are available upon request.
Txnip deletions and missense alleles prolong the survival of cones in a retinitis pigmentosa mouse model

Retinitis pigmentosa (RP) is an inherited retinal disease in which there is a loss of cone-mediated daylight vision. As there are >100 disease genes, our goal is to preserve cone vision in a disease-gene-agnostic manner. Previously we showed that overexpressing TXNIP, an α-arrestin protein, prolonged cone vision in RP mouse models, using an AAV to express it only in cones. Here, we expressed different alleles of Txnip in the retinal pigmented epithelium (RPE), a support layer for cones. Our goal was to learn more of TXNIP's structure-function relationships for cone survival, as well as to determine the optimal cell-type expression pattern for cone survival. The C-terminal half of TXNIP was found to be sufficient to remove GLUT1 from the cell surface, and improved RP cone survival when expressed in the RPE, but not in cones. Knock-down of HSP90AB1, a TXNIP interactor which regulates metabolism, improved the survival of cones on its own and was additive for cone survival when combined with TXNIP. From these and other results, it is likely that TXNIP interacts with several proteins in the RPE to indirectly support cone survival, with some of these interactions differing from those that lead to cone survival when TXNIP is expressed only in cones.

Introduction
Retinitis pigmentosa (RP) is an inherited retinal degenerative disease that affects one in ~4000 people worldwide (Hartong et al., 2006). The disease first manifests as poor night vision, likely due to the fact that many RP disease genes are expressed in rod photoreceptors, which initiate night vision. Cone photoreceptors, which are required for daylight, color, and high-acuity vision, also are affected, as are the retinal pigmented epithelial (RPE) cells (Chrenek et al., 2012; Napoli et al., 2021; Napoli and Strettoi, 2023; Wu et al., 2021), which support both rod and cone photoreceptors. However, cones and RPE cells typically do not express RP disease genes. Nonetheless, RP cones lose function and die after most of the rods in their immediate neighborhood die. While it is not entirely clear what causes cone death, there are data suggesting problems with metabolism, oxidative stress, lack of trophic factors, oversupply of chromophore, and inflammation (Komeima et al., 2006; Mohand-Said et al., 1998; Punzo et al., 2009; Xue et al., 2023; Zhao et al., 2015). We have been pursuing gene therapy to address some of these problems. Our hope is to create therapies that are disease-gene agnostic by targeting common problems for cones across disease-gene families. One of our strategies is aimed at cone metabolism. Several lines of evidence suggest that RP cones do not have enough glucose, their main fuel source (reviewed in Xue and Cepko, 2023). We found that overexpression of TXNIP, an α-arrestin protein with multiple functions, including in glucose metabolism, prolonged the survival of cones and cone-mediated vision in three RP mouse strains (Xue et al., 2021). Regarding the mechanism of rescue, we found that it relied upon the utilization of lactate by cones. In addition, cones treated with Txnip showed improved mitochondrial morphology and function. As TXNIP is known to bind directly to thioredoxin, we tested a Txnip allele with a single amino acid (aa) change, C247S, which abolishes the interaction with thioredoxin (Patwari et al., 2006). This allele provided better rescue than the wild-type (wt) Txnip allele, ruling out its interaction with thioredoxin as required for cone rescue.
These findings inspired us to further modify Txnip in various ways to look for better rescue, as well as to explore potential mechanisms for Txnip's action. To this end, we also tested a related α-arrestin protein, as well as an interacting partner, for rescue effects.

Arrdc4 reduces rd1 cone survival
As TXNIP is a member of the α-arrestin protein family, we explored whether another family member might prolong RP cone survival. There are six known α-arrestins in mammals (Puca and Brou, 2014). Among them, arrestin domain-containing protein 4 (ARRDC4) is the closest to TXNIP in amino acid sequence, sharing ~60% similar amino acids with TXNIP (Figure 1A). ARRDC4 is thought to have functions that are similar to those of TXNIP in regulating glucose metabolism in vitro (Patwari et al., 2009). Like TXNIP and other α-arrestins, ARRDC4 is composed of three domains: an N-terminal arrestin (Arrestin N-) domain, a C-terminal arrestin (Arrestin C-) domain, and an intrinsically disordered region (IDR) at the C-terminus. Because an IDR lacks a stable 3D structure under physiological conditions, previous studies using crystallography did not reveal the full structure of the TXNIP protein (Hwang et al., 2014). None of the other α-arrestins have been characterized structurally. To begin to examine potential similarities in structure among some of these family members, we utilized an artificial intelligence (AI) algorithm, AlphaFold-2, to visualize the predicted full 3D structure of ARRDC4 (Jumper et al., 2021). Similar to TXNIP, ARRDC4 is predicted to have a 'W'-shaped arrestin structure, which is composed of the Arrestin N- and C-domains, plus a long, tail-like IDR (Figure 1B).

Arrdc4 was tested for its ability to prolong cone survival in rd1 mice using AAV-mediated gene delivery, as was done for Txnip previously (Xue et al., 2021). Expression of Arrdc4 was driven by a cone-specific promoter, RO1.7, derived from human red opsin (Krol et al., 2010; Wang et al., 1992; Ye et al., 2016). The vector was packaged into the AAV8 serotype capsid. AAV-Arrdc4 was injected sub-retinally into P0 rd1 mouse eyes along with AAV-H2BGFP, which is used to trace the infection and to label the cone nuclei for counting. At P50, the treated retinas were harvested and flat-mounted for quantification of cones within the central retina, the area that degenerates first. Unlike with Txnip, cone counts were much lower in Arrdc4-treated retinas than in the AAV-H2BGFP control (Figure 1C and D).
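Automated counting of H2BGFP-labeled cone nuclei in flat-mount images can be done with standard image-processing steps. The following is a generic sketch using scikit-image; the file name, threshold choice, and size filter are illustrative assumptions, since the authors' actual counting pipeline is not described here.

```python
import numpy as np
from skimage import io, filters, measure, morphology

# Load a single-channel image of H2BGFP-labeled cone nuclei (hypothetical file).
img = io.imread("flatmount_center_H2BGFP.tif").astype(float)

# Global Otsu threshold, then discard specks smaller than a nucleus.
mask = img > filters.threshold_otsu(img)
mask = morphology.remove_small_objects(mask, min_size=20)  # px; tune to pixel size

# Connected-component labeling; the label count approximates the nucleus count
# (touching nuclei would additionally need watershed splitting).
labels = measure.label(mask)
print(f"cone nuclei counted: {labels.max()}")
```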
Evaluation of cone survival using Txnip deletion alleles expressed in the RPE
We previously showed (Xue et al., 2021) that overexpressing the wt Txnip allele in the RPE using an RPE-specific promoter, derived from the human BEST1 gene (Esumi et al., 2009), did not improve RP cone survival. The wt allele removes the glucose transporter from the plasma membrane, thus preventing the RPE from taking up glucose for its own metabolism, and preventing it from serving as a conduit for glucose to flow from the blood to the cones. However, a triple mutant, Txnip.C247S.LL351 and 352AA, improved cone survival when expressed only in the RPE (Xue et al., 2021). The C247S mutation eliminates the interaction with thioredoxin and enhances the Txnip rescue when expressed in cones (Xue et al., 2021). The LL351 and 352AA mutations eliminate a clathrin-binding site, which is required for TXNIP's interaction with clathrin-coated pits for removal of GLUT1 from the cell surface (Wu et al., 2013). We previously proposed a model in which Txnip.C247S.LL351 and 352AA promotes the use of lactate by the RPE (Xue et al., 2021), as we found was the case when Txnip was expressed in cones. Although the RPE normally uses lactate in wt animals, in RP it is hypothesized to retain the glucose that it normally would deliver to cones (reviewed in Hurley, 2021). The retention of glucose by the RPE is thought to be due to a reduction in lactate supply, as rods normally provide lactate for the RPE, and with rod loss that source would be greatly diminished. If the RPE can utilize lactate in RP, perhaps lactate supplied by the blood, and if the LL351 and 352AA mutations impair the ability of TXNIP to remove the glucose transporter from the plasma membrane, this allele of Txnip may allow glucose to flow from the blood to the cones via the GLUT1 transporter. Expression of the Txnip.C247S.LL351 and 352AA allele thus has the potential to address the proposed glucose shortage of RP cones. However, we noted two caveats. One is that the survival of cones was not as robust as when Txnip was expressed directly in cones. In addition, the rd1 retina in the FVB strain used here, even without any treatment, shows holes in the cone layer, which appear as 'craters.' An RP rat model presents a similar pattern (Ji et al., 2014; Ji et al., 2012; Zhu et al., 2013). When Txnip.C247S.LL351 and 352AA is expressed in the RPE, there are more craters in the photoreceptor layer. We note that these craters are common only in the rd1 allele on the FVB background, i.e., not as common in other inbred mouse strains that also harbor the rd1 allele, so the meaning of this observation is unclear.
Arrestins are well known for their protein-protein interactions via different domains. Different regions of TXNIP are known to directly associate with different protein partners to affect several different functions. For example, the N-terminus is sufficient to interact with KPNA2 for TXNIP's localization to the nucleus (Nishinaka et al., 2004), while the C-terminus of TXNIP is critical for interactions with COPS5, to inhibit cancer cell proliferation (Jeon et al., 2005). The C-terminus of TXNIP is also necessary for inhibition of glycolysis, at least in vitro, through an unclear mechanism (Patwari et al., 2009). Based on these studies, we made several deletion alleles of Txnip and expressed them in the RPE using the Best1 promoter. We assayed their ability to clear GLUT1 from the RPE surface (Figure 2A), as well as to promote cone survival (Figure 2B-G). To enable automated cone counting and trace the infection, we co-injected an AAV (AAV8-RedO-H2BGFP-WPRE-bGHpA) encoding GFP fused to histone 2B (H2BGFP), which localizes to the nucleus. As the red opsin promoter was used to express this gene, H2BGFP was seen in cone nuclei, but not in the RPE, if AAV8-RedO-H2BGFP-WPRE-bGHpA was injected alone. However, when an AAV that expressed in the RPE, i.e., AAV8-Best1-Sv40intron-(Gene)-WPRE-bGHpA, was co-injected with AAV8-RedO-H2BGFP-WPRE-bGHpA, H2BGFP was expressed in the RPE, along with expression in cones (Figure 2A). We speculate that this is due to concatenation or recombination of the two genomes, such that H2BGFP comes under the control of the RPE promoter. This may be due to the high copy number of AAV in the RPE, as it did not happen in the reverse combination, i.e., an AAV with an RPE promoter driving GFP and a cone promoter driving another gene. It was previously observed that the AAV genome copy number was ~10-fold lower in cones than in the RPE (Wang et al., 2020).

To assay GLUT1, we focused on the basal surface of the RPE, as it is easier to score than the apical surface, where its processes are intertwined with those of the retina, where GLUT1 is also expressed. The 149-397aa portion of Txnip.C247S (C.Txnip.C247S) had the highest activity for GLUT1 removal from the RPE basal surface in vivo, while the 1-228aa portion (N.Txnip) failed to remove GLUT1 (Figure 2A and Figure 2-figure supplement 1). As predicted by ColabFold, an AI algorithm based on AlphaFold-2 (Mirdita et al., 2022), the Arrestin C-domain, which is part of C.Txnip.C247S but is not present in the N-domain of TXNIP, interacts with the intracellular C-terminal IDR of GLUT1 (Figure 2-figure supplement 2). These results are consistent with the predictions, in that the C-terminal portion of TXNIP is sufficient to bind and clear GLUT1 from the cell surface, while the N-domain is not.
Cone survival was assayed in vivo following infection of rd1 mice with these missense and deletion alleles at P0 and sacrifice at P50 (Figure 2B-G). Similar to Best1-wt Txnip (Xue et al., 2021), Best1-Txnip.C247S did not show significant improvement of cone survival, ruling out the C247S mutation alone as the source of the cone survival conferred by Best1-Txnip.C247S.LL351 and 352AA. In addition, Best1-N.Txnip (1-228aa) and Best1-sC.Txnip (255-397aa; sC: short C-) failed to improve cone survival. However, Best1-C.Txnip.C247S (149-397aa), Best1-C.Txnip.C247S.LL351 and 352AA (149-397aa), and Best1-nt.Txnip.C247S (1-320aa; nt: no-tail) promoted significant cone survival compared to the corresponding control retinas. Best1-N.Txnip- and Best1-sC.Txnip-treated rd1 retinas did not have increased numbers of craters, while all other vectors increased the number of craters. These results suggest that the C-terminal portion of TXNIP expressed in the RPE is required for RP cone survival, for a function(s) that is unrelated to the removal of GLUT1, or to the mechanism that leads to an increase in craters.

Evaluation of Txnip deletion alleles for autonomous cone survival
Our previous study used the human red opsin promoter, 'RedO,' in AAV to drive the expression of Txnip in rd1 cones, with a low level of expression in some rods. This same strategy was used to express the deletion alleles in cones (Figure 3). In comparison, the full-length Txnip.C247S promoted an increase of 97% in cones in our previous study (Xue et al., 2021). These results show that the full-length Txnip provides the most benefit in terms of RP cone survival. To determine if expression of this allele might give increased survival when expressed in both the RPE and in cones, we used a CMV promoter to drive expression, as CMV expresses highly in both cell types (Xiong et al., 2015). CMV-Txnip.C247S provided a 38% rescue (Figure 3A and C), which is lower than RedO-Txnip.C247S (97%) alone. These and previous results are summarized in Figure 4.

Inhibiting Hsp90ab1 prolongs rd1 cone survival
To further investigate the potential mechanism(s) of cone survival induced by Txnip, we considered the list of protein interactors that were identified in HEK293 cells using a biotinylated-protein interaction pull-down assay plus mass spectrometry (Forred et al., 2016). Forred et al. identified a subset of proteins that interact with TXNIP.C247S, the mutant that provides better cone rescue than the wt Txnip allele (Xue et al., 2021). As we found that Txnip promotes the use of lactate in cones and improves mitochondrial morphology and function, we looked for TXNIP interactors that are relevant to mitochondria. We identified two candidates, PARP1 and HSP90AB1. Loss of PARP1 has been shown to protect mitochondria under stress (Hocsak et al., 2017; Szczesny et al., 2014). Accordingly, in our previous study, we crossed PARP1-null mice with rd1 mice to ask if mitochondrial improvements alone were sufficient to induce cone rescue, and found that they were not. In our current study, we thus prioritized HSP90AB1 inhibition, which had been shown to improve skeletal muscle mitochondrial metabolism in a diabetes mouse model (Jing et al., 2018).
Three shRNAs targeting different regions of the Hsp90ab1 mRNA (shHsp90ab1) were delivered by AAV into the retinas of wt mice. Knock-down was evaluated using an AAV encoding a FLAG-tagged HSP90AB1 that was co-injected with the AAV-shRNA. All three shRNAs reduced the HSP90AB1-FLAG signal compared to shNC, the non-targeting control shRNA (Figure 5A and B), suggesting that they are able to inhibit the expression of HSP90AB1 protein in vivo. The promotion of cone survival was then tested in rd1 mice using these shRNA constructs. The two shRNAs with the most activity in reducing the FLAG-tagged HSP90AB1 signal, shHsp90ab1(#a) and shHsp90ab1(#c), were found to increase the survival of rd1 cones at P50 (Figure 5C and D). To determine if this effect was capable of increasing the Txnip rescue, the shRNAs were co-injected with Txnip.C247S. A slight additive effect of shHsp90ab1 and Txnip.C247S was observed (Figure 5E and F). We also asked if there might be an effect of the knock-down of Hsp90ab1 on a Parp1 loss-of-function background. We did not observe any rescue effect of the shRNAs on this background (Figure 5G and H).

Discussion
In RP, the RPE cells and cones degenerate due to non-autonomous causes after the death of rods. Although the causes of cone death are not entirely clear, one model proposes that they do not have enough glucose, their main fuel source (Hurley, 2021; Punzo et al., 2009; Xue and Cepko, 2023). In a previous study, we found that Txnip promoted the use of lactate within cones and led to healthier mitochondria. The mechanisms for these effects are unclear, and we sought to determine what domains of TXNIP might contribute to them, as well as to explore alleles of Txnip that might be more potent for cone survival. We further tested the rescue effects of several alleles when expressed in the RPE, a support layer for cones, through which nutrients, such as glucose, flow to the cones from the choriocapillaris. The results suggest that Txnip acts through different mechanisms for Txnip-mediated cone survival when expressed in the RPE versus in cones.

The C-terminal portion of Txnip.C247S (149-397aa) expressed within the RPE, but not within cones, delayed the degeneration of cones (Figure 2). The full-length Txnip.C247S expressed within cones, but not within the RPE, was the most effective configuration for cone survival (Figure 3). The expression of full-length Txnip.C247S in both the RPE and cones did not provide better rescue than in cones alone. As TXNIP has several domains that presumably interact with different partners, it is possible that these different effects on cone survival are due to the interaction of different TXNIP domains with different partners in the RPE versus the cones, or to different outcomes of the interactions of the same domains and partners in the two cell types. The N-terminal half of TXNIP (1-228aa) might exert harmful effects in the RPE that negate the beneficial effects of the C-terminal half, as suggested by the observation that its removal, in the C-terminal 149-397 allele, led to better cone survival when expressed in the RPE (Figure 2). In cones, the C-terminal half, including the C-terminal IDR tail, may cooperate with the N-terminal half, or negate its negative effects, to promote RP cone survival. However, the C-terminal half is not sufficient for cone rescue when expressed in cones, as the 149-397 allele did not rescue.
The C-terminal half of TXNIP apparently affects cone survival differently when expressed within the two cell types. This notion is informed by the different rescue effects of expression of the 149-397 allele, which rescues cones when expressed in the RPE, but not when expressed in cones. This domain loses the cone rescue activity if it loses aa 149-254 when expressed in the RPE, as shown by the 255-397 allele. In cones, the rescue activity is present in the 1-301 and the 1-320 alleles, but is lost in the 149-397 allele. It is possible that effects on protein structure cause this loss, or that an interaction between N-terminal and C-terminal domains is required for cone rescue within cones.

One TXNIP function that is likely important to these effects in the two cell types is TXNIP's removal of the glucose transporter from the plasma membrane. The LLAA TXNIP mutant is unable to effectively remove the transporter, due to its loss of interaction with clathrin (Wu et al., 2013). When this mutant allele is expressed in the RPE, it leads to improved cone survival, in contrast to the wt allele. This might be due to better health of the RPE when it is able to take up glucose to fuel its own metabolism, and/or to its provision of glucose to cones. When the LLAA allele is expressed in cones, it also promotes cone survival, though not as well as the wt allele (Xue et al., 2021). The wt allele might be more beneficial in cones if it is part of the mechanism that forces cones to rely more heavily on lactate versus glucose. All of these observations of cone rescue from expression within cones suggest that cone rescue relies on activities that reside in both the N- and C-terminal portions, including the ability of TXNIP to interact with clathrin. However, it will be important to probe the structural alterations and stability of TXNIP in cones and the RPE when these various alleles are expressed, to further support these hypotheses.

ARRDC4, the α-arrestin protein most similar to TXNIP, which also has Arrestin N- and C-domains, accelerated RP cone death when transduced via AAV (Figure 1). This observation suggests that TXNIP has unique functions that protect RP cones. Recently, ARRDC4 has been proposed to be critical for liver glucagon signaling, which can be negated by insulin (Dagdeviren et al., 2023). The implication of this potential role for RP cone survival is unclear, but interestingly, activation of the insulin/mTORC1 pathway is beneficial to RP cone survival (Punzo et al., 2009; Venkatesh et al., 2015).
Regarding potential protein interactions beyond the glucose transporter, the interaction of TXNIP with thioredoxin is apparently negative for cone survival, as we found in our previous study with the C247S allele. This is most easily understood as the release of thioredoxin from TXNIP, whereupon it can play its anti-oxidation role, which would be important in the RP retina, which exhibits oxidative damage. It also would free TXNIP to interact with other partners, of which there are several, though many also depend upon C247 (Forred et al., 2016). Another partner interaction suggested by previous studies and explored here is the interaction with HSP90AB1 (Figure 5). HSP90AB1 interacts with both the wt and C247S alleles (Forred et al., 2016). Little is known about the function of HSP90AB1. Knocking down Hsp90ab1 improved mitochondrial metabolism of skeletal muscle in a diabetic mouse model (Jing et al., 2018). Knocking out HSP90AA1, a paralog of HSP90AB1 that differs in 14% of its amino acids, led to rod death and correlated with PDE6 dysregulation (Munezero et al., 2023). Inhibiting HSP90AA1 with small molecules transiently delayed cone death in human retinal organoids under low-glucose conditions (Spirig et al., 2023). However, the exact role of HSP90AA1 in photoreceptors needs to be clarified, and the implications for HSP90AB1 in RP cones are still unclear.

Here, we found that shRNA-mediated knock-down of Hsp90ab1 enhanced cone survival in rd1 mice. This rescue seems to be dependent on PARP1, another binding partner of wt TXNIP and TXNIP.C247S (Forred et al., 2016). As shown by PARP1 knock-out mice, PARP1 is deleterious to mitochondrial health under stressful conditions (Hocsak et al., 2017; Szczesny et al., 2014; Xue et al., 2021). When we examined a possible rescue effect of PARP1 loss on rd1 cone survival, we did not see a benefit, indicating that the TXNIP-mediated rescue is not due solely to its beneficial effects on mitochondria, nor does TXNIP-mediated rescue rely upon PARP1 (Xue et al., 2021). These results indicate that the Txnip rescue is more complex than inhibition of HSP90AB1, and that a PARP1-independent mechanism is involved. It is possible that HSP90AB1 directly interacts with PARP1, and that this interaction is critical for shHsp90ab1 to benefit RP cones. We looked into the predicted 3D structures of HSP90AB1 and PARP1 using AlphaFold-2 (Figure 5-figure supplement 1), but did not gain additional insight into such interactions. We also explored AlphaFold-Multimer, an algorithm predicting the interaction of multiple proteins based upon AlphaFold-2 (Evans et al., 2021), and noticed that the Arrestin C-domain of TXNIP linked PARP1 and HSP90AB1 together in one of the predicted models (Figure 5-figure supplement 2). Despite the unclear mechanism, combining Hsp90ab1 inhibition with Txnip.C247S could be a potential combination therapy to maximize the protection of RP cones.
Figure 2. Txnip deletions expressed only within retinal pigmented epithelium (RPE) cells: effects on GLUT1 removal and cone survival. (A) Glucose transporter 1 (GLUT1) expression in P20 wild-type eyes infected with control (AAV8-RedO-H2BGFP, 2.5×10⁸ vg/eye), or a Txnip allele (2.5×10⁸ vg/eye) plus RedO-H2BGFP (2.5×10⁸ vg/eye), as indicated in each panel. Txnip deletions are detailed in Figure 4. GLUT1 intensity from the basal RPE is quantified in Figure 2-figure supplement 1. Magenta: GLUT1; green: RedO-H2BGFP for infection tracing; gray: DAPI. (B, D, F) Representative P50 rd1 flat-mounted retinas after P0 infection with one of seven different Txnip alleles expressed only within the RPE, as indicated in the figure, or control eyes infected with AAV8-RedO-H2BGFP, 2.5×10⁸ vg/eye, alone. (C, E, G) Quantification of H2BGFP-positive cones within the center of P50 rd1 retinas transduced with the indicated vectors, as shown in B, D, F. The number in round brackets '()' indicates the number of retinas within each group. Error bar: standard deviation. Statistics: ANOVA and Dunnett's multiple comparison test for C and E; two-tailed unpaired Student's t-test for G. C.Txnip.CS: C-terminal portion of Txnip.C247S.

Figure 2-figure supplement 1. Txnip deletions expressed only within retinal pigmented epithelium (RPE) cells: quantification of the glucose transporter 1 (GLUT1) level within the basal surface of the RPE.

Figure 4. Summary of the various alleles of Txnip in this and the previous study (Xue et al., 2021). 'Retinal pigmented epithelium (RPE) Glucose transporter 1 (GLUT1) Removal' refers to the amount of GLUT1 immunohistochemical signal on the basal surface following expression in the RPE using the Best1 promoter. 'Cone Rescue: Expression in RPE' refers to cone rescue following expression only in the RPE using the Best1 promoter. 'Cone Rescue: Expression in Cones' is due to expression only in cone photoreceptors using the RedO promoter. Abbreviations: Y (x%): yes, with x% increase compared to the AAV-H2BGFP control; N: no; NT: not tested. N.TXNIP, N-terminal portion of TXNIP; C.TXNIP.C247S, C-terminal portion of the TXNIP.C247S mutant allele; sC.TXNIP, a shorter version of the C-terminal portion of TXNIP; nt.TXNIP.C247S, no-tail version of the TXNIP.C247S mutant allele; Arrestin N-, N-terminal arrestin domain; Arrestin C-, C-terminal arrestin domain; PPxY, a motif where P is proline, x is any amino acid, and Y is tyrosine.

Figure 5. (F) Quantification of cones transduced with Txnip.C247S or Txnip.C247S+shHsp90ab1 (same as in E). (G) Representative P50 Parp1−/− rd1 flat-mounted retinas with H2BGFP (gray)-labeled cones transduced with shNC (non-targeting shRNA control, AAV8-RedO-shRNA, 1×10⁹ vg/eye; plus AAV8-RedO-H2BGFP, 2.5×10⁸ vg/eye) or shHsp90ab1 (AAV8-RedO-shRNA #a or #c, 1×10⁹ vg/eye; plus AAV8-RedO-H2BGFP, 2.5×10⁸ vg/eye). (H) Quantification of H2BGFP-positive cones within the center of P50 Parp1−/− rd1 retinas transduced with shNC or shHsp90ab1 (same as in G). Error bar: standard deviation. Statistics: ANOVA and Dunnett's multiple comparison test for B and D; two-tailed unpaired Student's t-test for F and H. NS: not significant, p>0.05; *p<0.05; **p<0.01; ***p<0.001; ****p< or << 0.0001. The online version of this article includes the following source data and figure supplement(s) for figure 5: Source data 1. This file contains the source data of Figure 5B, D, F and H.

Figure 5-figure supplement 2. Predicted 3D protein interactions among TXNIP, HSP90AB1, and PARP1 by the AI algorithm AlphaFold-Multimer, from two angles of view.
Light-triggered Supramolecular Isomerism in a Self-catenated Zn(II)-organic Framework: Dynamic Photo-switching CO₂ Uptake and Detection of Nitroaromatics

A self-catenated Zn(II)-organic framework formulated as [Zn₂(3,3′-bpeab)(oba)₂]·DMF (1), exhibiting a six-connected 4⁴·6¹⁰·8 topology, has been successfully synthesized from the mixed ligands kinked 3,3′-bis[2-(4-pyridyl)ethenyl]azobenzene (3,3′-bpeab) and 4,4′-oxybis-benzoic acid (H₂oba) under solvothermal conditions. UV light triggers isomerization of complex 1 in a single-crystal-to-single-crystal (SCSC) manner, giving rise to a conformational supramolecular isomer, 1_UV, through the pedal motion of photoresponsive double bonds. Dynamic photo-switching in the obtained light-responsive supramolecular isomers leads to instantly reversible CO₂ uptake. Furthermore, the ligand-originated fluorescence emission of the water-resistant complex 1 is selectively sensitive to 4-nitrotoluene (4-NT), owing to a higher quenching efficiency for this perilous explosive than for other structurally similar nitroaromatics, prefiguring the potential of 1 as a fluorescence sensor towards 4-NT in aquatic media.

MOFs showing chemo-sensing properties have been widely reported to date [20-23]. Introducing hydrophobic groups near coordination sites is an effective method to improve the water stability of M(II)-carboxylate-based MOFs. In addition, catenation may improve the water resistance of MOFs, owing to the difficulty of displacing ligands locked within the framework 24. Pillared-layer MOFs with different degrees of interpenetration have been constructed from mixed ligands of rigid linear dicarboxylate linkers and diamine ligands, while the construction of self-catenated pillared-layer MOFs from mixed flexible or kinked ligands remains largely unexplored, especially with helical character 25,26. Herein, a pillar ligand, 3,3′-bis[2-(4-pyridyl)ethenyl]azobenzene (3,3′-bpeab), bearing two distinctive stimuli-responsive functional units (−C=C− and −N=N− bonds), is designed. The combination of the step-like 3,3′-bpeab and the V-shaped 4,4′-oxybis-benzoic acid (H₂oba) as ligands may favor the formation of diverse and helical structures and promote self-catenation, which is beneficial for obtaining moisture-stable porous frameworks. Fortunately, a self-catenated porous Zn(II)-organic framework based on paddle-wheel-type secondary building units (SBUs) of Zn₂(CO₂)₄ was isolated. The incorporation of photoresponsive components into the coordination network leads to its dynamic behavior, more particularly the interconversion of conformers of 3,3′-bpeab, which is scarcely practicable through conventional synthetic methods. The crystallinity of the resulting crystals upon stimulus is retained, offering complete structural details, which provide useful insights into the relationship between the photo-switching CO₂ uptake and the conformational changes. The photoluminescence of the long, delocalized 3,3′-bpeab is enhanced, as rigidifying the aromatic conjugated ligand within the Zn(II) porous framework reduces non-radiative relaxation, making the π-electron-rich framework a promising candidate for fluorescence sensing.

Results
Synthesis. Solvothermal reaction of Zn(NO₃)₂·6H₂O, 3,3′-bpeab and H₂oba gives crystals of 1. The same batch of crystals received UV irradiation to obtain complex 1_UV. Single crystals of 1 were heated at 100 °C for 2 hours under vacuum to obtain complex 1_heat.
Structure description of [Zn₂(3,3′-bpeab)(oba)₂]·DMF (1). Complex 1 crystallizes in the monoclinic space group C2/c. The asymmetric unit contains one crystallographically independent Zn(II) ion, half a 3,3′-bpeab ligand, one oba ligand and half a guest DMF solvent molecule. Structural studies indicated that the metal center possesses a tetragonal-pyramidal geometry, coordinated by one pyridinic nitrogen atom from the 3,3′-bpeab ligand and four oxygen atoms from four different oba ligands (Fig. 1a). The Zn−O bond lengths fall in the range of 2.030-2.046 Å and the Zn−N bond lengths range from 2.000 to 2.013 Å.

[Figure 1, partial caption (panels c-e): (c) A pair of sextuple-stranded helices along the b direction (different colors are used for the six strands). (d) A perspective view of the 3D framework along the b-axis. (e) The 3D self-penetrating framework with a uninodal 6-connected (4⁴·6¹⁰·8) topology (the two shortest six-membered rings are catenated), considering the binuclear SBUs as nodes.]

It is worth mentioning that the 3,3′-bpeab ligand displays two distinct conformations with different occupancies: nearly 53.6% adopts conformation I and the remaining 46.4% adopts conformation II, owing to the different orientations of the −C=C− and −N=N− bonds (Figure S1). The two terminal pyridyl rings of the bpeab ligand are both coplanar with the middle phenyl ring. Zn1 and Zn1A are linked by four bridging carboxylates to form a dinuclear paddle-wheel Zn₂(COO)₄ secondary building unit (SBU). The SBUs are further connected to four more units by the oba ligands to give a 2D [Zn₂(oba)₂] sublayer lying in the bc plane with rhombic grids (Fig. 1b). The 2D layer is built from two kinds of helical chains running along the b-axis with a pitch of 9.798 Å, where right- and left-handed helical chains with the same composition, (-Zn-oba-Zn-oba-)ₙ, are aligned in an alternating array by sharing the Zn(II) centers, leaving a mesomeric 1D channel with interchanging chiralities (Fig. 1b). These highly corrugated (4,4) 2D layers are further pillared by 3,3′-bpeab ligands to construct a pillared-layer porous 3D framework (Figs 1d and S2). Unlike the typical parallel arrangement, adjacent bpeab ligands adopt a criss-cross manner, coupling with the oba ligands to generate unusual intertwined sextuple-stranded helical chains in the b-axis direction (Fig. 1c). Topologically, each dinuclear SBU is connected to six identical units, four by individual oba ligands and the other two by 3,3′-bpeab ligands; thus the Zn₂(COO)₄ SBUs can be simplified as six-connected nodes, and the whole structure can be described as a six-connecting uninodal net with Schläfli symbol 4⁴·6¹⁰·8 (TD₁₀ = 6679) (Fig. 1e). It is well known that the commonly encountered single pillared-layer structures generally exhibit pcu topology with different interpenetrations when linear dicarboxylates are used as ligands, whereas the flexible oba ligands can tune the orientation around the paddle-wheel SBUs to form different sub-layers, which are further pillared by bipyridyl-based ligands to give distinct topologies; several cases with roa, jsm and 6T9 topologies have been reported recently [27-31]. It should be noted that the 6T9 topology and the observed topology of complex 1 have the same Schläfli symbol, but different TD₁₀ values (5391 for 6T9).
In the 6T9 topology, interpenetrated 2D double layers are pillared by the pyridyl spacer ligands, while the topology observed for complex 1 is constructed from non-interpenetrated single layers. In addition, an interesting feature of this topology is the presence of self-catenation. The extremely tight self-catenation causes a high topological density of the net, TD₁₀ = 6679, as each 6-ring is crossed by 132 other rings (50 6-rings and 82 8-rings). According to the RCSR database 32, this is the highest topological density among all known 6-coordinated nets. The two smallest six-membered circuits form the catenane-like interlocking structure, as highlighted in Figs 1e and S3. The six-membered circuit consists of a pair of bpeab ligands, four oba ligands and six Zn(II) dimers, with distances of 26.778 and 27.825 Å between two neighbouring vertices, giving a 68.45° intersection angle between the two edges (Figure S4).

The SCSC transformation of complex 1 via the stimulus of UV light. Pedal motion has been observed in compounds with −C=C− bonds just as in those with −N=N− bonds, extending the scope of this movement to various molecules such as azobenzenes, stilbenes, etc. 12,[33-36]. In the present study, the 3,3′-bpeab molecule couples two types of selected pedal-motion groups, −C=C− double bonds and the −N=N− moiety, to explore pedal motion in crystals under the stimulus of UV light. The UV-irradiated sample 1_UV retained the same connectivity as complex 1 (Figure S5). Compared with complex 1, the distance between the dinuclear SBUs across 3,3′-bpeab in 1_UV varied from 26.778 Å to 26.923 Å as the 3,3′-bpeab ligands converted to conformations III and IV, different from both of the two conformations in the parent crystal. Conformations of molecules are not restricted to their lowest-energy conformer during crystallization; when an external stimulus is introduced to overcome the activation-energy barrier for isomerization, a series of conformational interchanges is initiated (Figure S7). When irradiated with UV light, the coexisting conformers I and II can transform to conformers III and IV. It is well established that −N=N− bonds tend to undergo light-induced reversible trans-to-cis isomerization; such a transformation was suppressed in the coordination framework, resulting instead in pedal motion of the azo moiety under UV light. Although −C=C− pedal motion was not observed in the UV irradiation process by X-ray diffraction, the probable transformation process may involve the isomerization of −C=C− bonds in conformer I towards conformer IV, or conformer II towards conformer III, respectively. In addition, the 3,3′-bpeab molecules are disposed in a slip-stacked manner in complex 1, leaving phenyls closer to the adjacent pyridine rings, with a 4.038 Å distance between the parallel-aligned −C=C− bond pairs. Despite satisfying Schmidt's criteria, the [2+2] photochemical cycloaddition reaction was excluded; this might be ascribed to the pedal motion of the double bonds. It should be noted that not all feasible mechanisms for such conformational changes have been included; the possibility of a certain synergism or interplay among the aforementioned isomerization processes has not been ruled out.
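Conformers related by pedal motion differ mainly in the torsion angles about the −N=N− and −C=C− axes, which can be computed directly from refined atomic coordinates. Below is a minimal, generic sketch of such a torsion calculation; the four coordinates are hypothetical placeholders, not values from the deposited CIFs.

```python
import numpy as np

def torsion_deg(p0, p1, p2, p3):
    """Torsion angle p0-p1-p2-p3 in degrees (praxeolitic formula)."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1   # projections perpendicular to b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# Hypothetical C-N=N-C fragment (coordinates in angstrom, e.g. parsed from a CIF).
C1 = np.array([0.000, 0.000, 0.000])
N1 = np.array([1.420, 0.000, 0.000])
N2 = np.array([1.980, 1.120, 0.150])
C2 = np.array([3.400, 1.150, 0.180])
print(f"C-N=N-C torsion = {torsion_deg(C1, N1, N2, C2):.1f} deg")  # near 180 (trans azo)
```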
The SCSC transformation of complex 1 via the stimulus of heat. After 1 was heated at 100 °C under vacuum for 2 h without losing its single crystallinity (albeit with some cracks), X-ray diffraction analysis revealed that the structure retained the same connectivity, with small deviations in the relative positions of the Zn atoms, forming 1_heat; the volume of the unit cell decreased from 5755(10) Å³ to 5648.3(9) Å³, which might reflect the removal of the solvent molecules (Figure S8). The Zn−Zn distances through 3,3′-bpeab and oba also change, to 23.626 Å and 14.171 Å, respectively. The most obvious conversion of the 3,3′-bpeab ligands is pedal motion of the −N=N− bonds, since the conformations of the −C=C− bonds appear to be maintained during heating. The −N=N− bonds uniformly adopt the same orientation, leaving the 3,3′-bpeab ligands as conformers II and III. Along with the pedal motion, the inside acute angle of the six-membered metallocyclic ring widens further, to 69.05°, with concomitant variation of the rhombic [Zn₈(oba)₄] ring, whose inside acute angle shrinks further, to 39.75°, compared with those in complexes 1 and 1_UV. The different conformers present in complexes 1_UV and 1_heat reveal different pathways of transformation among the conformers triggered by UV or heat stimuli.

Thermal Stability and Moisture Stability. The TG analysis curve of complex 1 shows a weight loss of about 6.7% near 150 °C, corresponding to the loss of the DMF solvent molecules (Figure S9). The TGA trace of 1_activated shows a plateau before collapse, indicating the complete removal of the solvent molecules that occupied the framework. After a sample of 1 was immersed in water for over a week, the obtained sample showed a one-step weight-loss process up to the decomposition temperature of 320 °C. The PXRD patterns of the activated, water-immersed and water-boiled samples coincide with the simulated one, confirming the stability towards temperature and humidity (Figure S10). As far as we know, few MOFs show good stability in boiling water 17,37,38. Structurally, the acute angle between the adjacent phenyls around the SBUs is smaller, owing to the bent nature of the oba ligand, and the distance between the adjacent H atoms of the oba phenyl and the pyridinyl of the 3,3′-bpeab ligand is smaller than the distance usually observed in pillared-layer MOFs based on Zn₂ paddle-wheel SBUs (Figure S11), which may enhance the shielding ability of the ligands to protect the SBUs against water molecules. In addition, the self-catenation of the framework is another key factor in its moisture resistance 24.

Porosity Measurements and Photo-switching Studies. The high water stability and the azo-decorated porous structure make the complex a potential candidate for gas separation under practical conditions. The dynamic nature of the double bonds under UV light may induce structural flexibility that influences the interactions with CO₂ molecules. In addition, the channels are decorated by the O atoms of the oba ligands and the N atoms of the 3,3′-bpeab ligands, which may facilitate interactions with CO₂ molecules, resulting in a higher uptake than for other gases, a prerequisite for a separation material. The N₂ isotherm of activated 1 at 77 K shows a normal type-I shape, indicative of permanent micropores, giving a BET surface area of 299 m²/g (Fig. 3a). As shown in Fig. 3b, activated 1 adsorbs very small amounts of N₂, while the CO₂ uptake at 120 kPa is ~13 times higher than that of N₂.
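The assignment of the ~6.7% TGA step to one DMF per formula unit can be checked with simple formula-weight arithmetic from the reported composition [Zn₂(3,3′-bpeab)(oba)₂]·DMF; the short script below performs that check with standard atomic weights.

```python
# Standard atomic weights (g/mol)
W = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "Zn": 65.38}

def mw(counts):
    """Molecular weight from an element -> count mapping."""
    return sum(W[el] * n for el, n in counts.items())

bpeab = mw({"C": 26, "H": 20, "N": 4})          # 3,3'-bpeab, C26H20N4
oba   = mw({"C": 14, "H": 8,  "O": 5})          # oba(2-), C14H8O5
dmf   = mw({"C": 3,  "H": 7,  "N": 1, "O": 1})  # DMF, C3H7NO
total = 2 * W["Zn"] + bpeab + 2 * oba + dmf     # [Zn2(bpeab)(oba)2].DMF

print(f"DMF mass fraction = {100 * dmf / total:.1f}%")  # ~6.6%, matching the ~6.7% TGA step
```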
Selectivity is of fundamental importance in processes such as gas separation. The near-linear adsorption profile for N₂ is indicative of its low affinity for the framework, as expected from its relatively low polarizability. Henry's constants were employed to estimate the selectivity of the complex: 1 shows an ideal CO₂/N₂ adsorption selectivity of 19.7 at 298 K. Activated 1 can adsorb a substantial amount of CO₂, with an uptake capacity of 1.69 mmol/g at 298 K and 120 kPa. These values are comparable to the best-performing ZIF materials, but are moderate compared with some highly porous MOFs, owing to the much lower surface area of 1 [39-41]. Isosteric adsorption enthalpies (Qst) as a function of the quantity of gas adsorbed were calculated using the virial method (Figure S12). The virial analysis shows that the enthalpy of CO₂ adsorption is 28.3 kJ/mol. Such a moderate Qst value is a strong advantage for the implementation of low-energy regeneration in CO₂ separation. In addition, maintaining physicochemical stability is a primary consideration for practical applications; many MOFs face hydrolysis issues that restrict their application under humid conditions, because of the dative nature of the metal-ligand bonds. Boiling-water treatment of the sample results in lower crystallinity and partial collapse of the framework: the N₂ uptake at 77 K is negligible, but the CO₂ uptake is 0.63 mmol/g at 298 K and 120 kPa, about 37% of that of the pristine sample (Figure S13). Under static irradiation conditions, the CO₂ uptake capacity drops to 1.43 mmol/g at 298 K and 120 kPa (Fig. 3b). The dynamic irradiation isotherms follow the values obtained under continuous-irradiation or UV-off conditions. To clarify whether irradiation promotes the formation of a steady state of the conformations, the sample was irradiated under UV light for 3 hours before the adsorption experiment (Figure S14), and the isotherms followed the values collected without UV light. This phenomenon indicates that the flexible nature of the framework can be triggered by UV irradiation, and that the transformation of the conformers occurs in a dynamic fashion. The UV-vis spectrum exhibits two absorption bands in the UV region, at 297 nm and 382 nm, which are attributed to π-π* and n-π* electronic transitions (Figure S15). Small fractions of the structure were found to oscillate periodically under irradiation, which may be ascribed to the rotation of the phenyl rings and the bending movement of the related bonds. The transformations occur quite quickly in the UV-vis experiment under UV light. In addition, the gas adsorption experiments also indirectly reflect this phenomenon, as the gas uptake can trace the switching of the light. A different batch of sample was measured again under the same photo-switching experiment, and the results show subtle differences, probably due to the different exposed surface areas of the samples under UV light (Figure S14). The conversion of the powder form used in the UV-vis experiment is comparable, and the single crystals used for the gas sorption experiments also change promptly, keeping pace with the switching on and off of the UV light. As stated by Hill et al. 9, light irradiation increases the MOF surface energy and weakens the interactions with CO₂ molecules, which was correlated with structural oscillations arising from C−C−N bending movements under UV triggering of the azo-MOF.
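For readers wishing to reproduce this type of analysis, the sketch below fits the standard virial expression ln p = ln n + (1/T)Σaᵢnⁱ + Σbⱼnʲ jointly to isotherms at two temperatures and evaluates Qst(n) = −RΣaᵢnⁱ, then estimates the ideal CO₂/N₂ selectivity from the ratio of Henry constants (initial isotherm slopes). All isotherm arrays are illustrative placeholders, not the measured data; the placeholder slopes are chosen to land near the reported selectivity of ~20.

```python
import numpy as np

R = 8.314  # J/(mol K)

def virial_fit(isotherms, m=3, k=1):
    """Joint fit of ln p = ln n + (1/T)*sum_i a_i n^i + sum_j b_j n^j.
    isotherms: list of (T, n, p) with n in mmol/g and p in kPa."""
    rows, y = [], []
    for T, n, p in isotherms:
        for ni, pi in zip(n, p):
            rows.append([ni**i / T for i in range(m + 1)] +
                        [ni**j for j in range(k + 1)])
            y.append(np.log(pi) - np.log(ni))
    coef, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(y), rcond=None)
    return coef[:m + 1], coef[m + 1:]          # a-coefficients, b-coefficients

def qst_kj(a, n):
    """Isosteric heat (kJ/mol) at loading n from the virial a-coefficients."""
    return -R * sum(ai * n**i for i, ai in enumerate(a)) / 1000.0

# Placeholder low-pressure CO2 isotherms at two temperatures.
iso_273 = (273.0, [0.2, 0.5, 1.0, 1.5, 2.0], [1.5, 5.0, 14.0, 30.0, 55.0])
iso_298 = (298.0, [0.1, 0.3, 0.7, 1.2, 1.7], [2.0, 8.0, 25.0, 60.0, 110.0])
a, b = virial_fit([iso_273, iso_298])
print(f"Qst near zero coverage ~ {qst_kj(a, 0.1):.1f} kJ/mol")

# Ideal CO2/N2 selectivity from Henry constants (initial slopes dn/dp).
kH_co2 = np.polyfit([2.0, 8.0, 25.0], [0.1, 0.3, 0.7], 1)[0]       # mmol/(g kPa)
kH_n2  = np.polyfit([10.0, 40.0, 90.0], [0.013, 0.052, 0.117], 1)[0]
print(f"ideal CO2/N2 selectivity ~ {kH_co2 / kH_n2:.1f}")          # ~20
```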
Detection of Nitroaromatic Explosives. Complex 1 has been shown to retain its crystallinity when dispersed in H2O; the photoluminescence spectra show a maximum at 406 nm, and the intensity is much stronger than in ten other solvents (Figure S16). Owing to this emissive property, 1 was tested for sensing nitro derivatives in aqueous solution, as it is crucial to detect nitroaromatic explosives by a simple and rapid method for applications such as security screening, minefield analysis, and environmental monitoring. Experimental data on the water solubility of the nitro compounds were obtained from the literature 42. Fluorescence quenching titrations with different levels of added 4-NT were conducted with an excitation wavelength of 285 nm at room temperature (Fig. 4a). The quenching was analyzed with the Stern−Volmer equation, I0/I = 1 + KSV[Q], where I0 and I are the fluorescence intensities before and after addition of the quencher at concentration [Q], and KSV is the Stern−Volmer quenching constant (ppm−1). The quenching constant KSV is an important parameter describing the fluorescence quenching efficiency; for 4-NT it is quantified to be 8.06 × 10−2 ppm−1 (Figure S17). Furthermore, to check the potential of 1 as a fluorescent probe for specific detection, we tested the fluorescence variations of 1 in the presence of various possible nitroaromatic explosives such as 1,3-DNB, 1,4-DNB and 2,4-DNT. All of these also act as fluorescence quenchers for 1, yet even with rather similar chemical structures, their fluorescence quenching efficiencies are much lower than that of 4-NT. The order of KSV values for the four quenchers is 4-NT > 2,4-DNT > 1,4-DNB > 1,3-DNB (Figures S18-S20), and the KSV values lie in the normal range for known MOFs [43-46]. The ratio of the KSV of each nitro explosive to that of 4-NT is defined as the selectivity factor (SF), which is generally used to evaluate selectivity. The SF values for 2,4-DNT, 1,4-DNB and 1,3-DNB relative to 4-NT are 0.641, 0.331 and 0.256, respectively, suggesting that 1 has a certain degree of selectivity towards 4-NT detection in aqueous solution (Fig. 4b).
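As an illustration of the Stern−Volmer analysis described above, the sketch below estimates KSV by a least-squares fit of I0/I − 1 against quencher concentration. The titration values are hypothetical placeholders chosen to be consistent with a KSV of roughly 8 × 10−2 ppm−1; they are not the measured data of this work.

import numpy as np

# Hypothetical 4-NT titration: concentration (ppm) and emission intensity at 406 nm (a.u.)
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
intensity = np.array([1000.0, 715.0, 556.0, 385.0, 238.0, 135.0])

i0 = intensity[0]
y = i0 / intensity - 1.0  # Stern-Volmer: I0/I = 1 + Ksv*[Q]

# Least-squares slope through the origin gives Ksv (ppm^-1)
ksv = float(np.sum(conc * y) / np.sum(conc * conc))
print(f"Ksv = {ksv:.3e} ppm^-1")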
Discussion. To summarize, a six-connected self-catenated Zn(II)-organic framework constructed from 3,3′-bpeab, bearing two distinct stimuli-responsive functional units (−C=C− and −N=N− bonds), has been successfully synthesized. Dynamic photo-switching in the obtained light-responsive supramolecular isomers leads to instantly reversible CO2 uptake, which is ascribed to the light-triggered pedal motion of the double bonds of the 3,3′-bpeab ligands. In addition, complex 1 was tested for sensing several nitro explosives and displays selective fluorescence quenching towards 4-NT compared with its analogues. Methods. General. All chemicals were commercially purchased and used as received without further purification. The ligand 3,3′-bis[2-(4-pyridyl)ethenyl]azobenzene (3,3′-bpeab) was synthesized according to the literature method 33. Powder X-ray diffraction (PXRD) patterns were obtained using a Bruker D8 ADVANCE diffractometer at 40 kV and 40 mA with Cu Kα radiation (λ = 1.5406 Å), at a scan speed of 0.1 s per step and a step size of 0.01° in 2θ. The simulated PXRD patterns were calculated from the single-crystal X-ray diffraction data and processed with the free Mercury program provided by the Cambridge Crystallographic Data Centre. Elemental analyses for C, H and N were performed on a Perkin-Elmer 2400C elemental analyzer. Fourier-transform (FT) IR spectra (KBr pellets) were taken on an Avatar-370 (Nicolet) spectrometer. UV−vis spectra (solid state) were recorded on a Hitachi U-4100 UV-Vis-NIR spectrophotometer. Thermogravimetric analysis (TGA) was performed on a Shimadzu DTG-60A simultaneous thermal analyzer from room temperature to 800 °C under an N2 atmosphere at a heating rate of 10 °C/min. The sorption isotherms for CO2 and N2 were measured using an automatic volumetric adsorption apparatus (Micromeritics ASAP 2020M). Ultrahigh-purity-grade CO2 and N2 were used for all measurements. For the photo-switching experiments, the UV lamp was surrounded by a cooling system and the sample tube was fixed at a distance of more than 30 cm to eliminate possible temperature effects of the UV source on CO2 adsorption; the gas sorption experiments were carried out while exposing the samples to UV light either intermittently or continuously.
2018-04-03T00:45:08.649Z
2016-10-11T00:00:00.000
{ "year": 2016, "sha1": "de47d3b210dc284ed5b6625ec3c3a2d4d75cf0db", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep34870.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "de47d3b210dc284ed5b6625ec3c3a2d4d75cf0db", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
85510759
pes2o/s2orc
v3-fos-license
Mechanical Behavior of Recycled Self-Compacting Concrete Reinforced with Polypropylene Fibres This paper studies the possibility of producing fiber-reinforced recycled self-compacting concrete (FRSCC) using demolition concrete as a coarse aggregate. Polypropylene fibers (P.P.F) were used in recycled self-compacting concrete (RSCC) with different percentages of coarse recycled concrete aggregate. Nine concrete mixtures were prepared to accomplish the objective of this paper. The polypropylene fiber fraction varied from 0% to 0.15% by volume of concrete, and the replacement ratio of natural coarse aggregate with recycled coarse aggregate was 25%, 50%, 75% and 100%. The fresh properties of FRSCC and RSCC were assessed using the V-funnel, L-box and slump flow tests. Flexural strength, compressive strength and splitting tensile strength tests were performed in order to examine the mechanical properties. The results indicate that the optimum volume fraction of polypropylene fibers was 0.1% for the mixes containing 75% recycled coarse aggregate; at the optimum content of P.P.F, the compressive strength, flexural strength and splitting tensile strength improved by 34%, 14% and 8.1%, respectively, relative to the control mix. The flexural strength and the tensile strength of the mixes also improved with increasing fiber ratio compared with the control mix. Introduction Fiber-reinforced self-compacting concrete with recycled aggregates (FRSCC) is a well-known innovation in concrete technology with many advantages over conventional concrete: it flows under its own weight without compaction effort and resists segregation [1]; it reduces construction cost by using recycled aggregates (RA); and, with polypropylene fibers, it reduces plastic and drying shrinkage cracking and preserves concrete durability. Moreover, the use of fibers helps reduce the permeability of concrete and its tendency to bleed [2]. Recycled aggregates are produced from demolition waste materials, the largest source being the construction industry. The properties of recycled aggregate concrete depend on the quality of the waste material. The coarse recycled aggregate comprises two phases, namely the original virgin aggregate and the adhered mortar, and the quantity of adhered mortar influences to a large extent the mechanical and durability properties of the aggregate [3]. Polypropylene fibers (P.P.F.) are one of the most successful commodity fibers, reaching a world production capacity of four million tons a year because of their low density, high stiffness and excellent chemical/bacterial resistance [4]. Literature Review Many studies have dealt with improving the characteristics of self-compacting concrete (SCC) with recycled aggregates and polypropylene fibers. Grdic et al. [5] studied the potential use of (RA) obtained from crushed concrete for making SCC, additionally emphasizing its ecological value. In their experiment, several concrete mixes were made in which the percentage of substitution of natural coarse aggregate with recycled aggregate was 0%, 50% and 100%; in the mixing process, equal consistency of every concrete mix was achieved. Zhao et al. [6] studied the effect of coarse aggregate gradation on the properties of SCC.
Four SCC mixtures with A/B ratios (weight of 5-10 mm coarse aggregate to weight of 10-20 mm coarse aggregate) of 4/6, 5/5, 6/4 and 7/3 were prepared; the bulk density of the aggregates with various A/B ratios was investigated, and the effect of the various coarse aggregate gradations on the fresh and hardened properties of SCC was examined. Pandaa et al. [7] studied the effect of partially replacing 10%, 20%, 30% and 40% of the coarse aggregate with recycled coarse aggregate (RA) obtained from a demolished Town Club, and compared the results with normally vibrated concrete. The results showed that the flexural, compressive and splitting tensile strengths of SCC with 100% natural aggregate are lower than those of normally vibrated concrete (NVC) with 100% natural aggregate, and that the strength of SCC decreases with an increase in the recycled aggregate (RA) replacement ratio. Abukhashaba et al. [8] investigated the stress-strain characteristics of (SCC) containing polypropylene fiber (PPF) and cement kiln dust (CKD). Six mixtures with a water-binder ratio (w/b) of 0.45 were prepared; the variables were fiber content (0.005, 0.010, and 0.015) and fiber length (20, 40 and 60 mm). It was found that SCC shrinkage was reduced using (PPF). (PPF) and (CKD) could be successfully used in (SCC) production in spite of their slightly negative effect on workability, although a higher dosage of superplasticizer is required to achieve similar flow properties. Materials Cement The cement used for the present work is ordinary Portland cement from the Al-Douh factory. Test results showed that the cement complied with specification ASTM C150-02a/2002 [9]. The chemical and physical properties of the cement are shown in Tables 1 and 2. Aggregate Fine aggregate: Sand passing through a 4.75 mm sieve was used as fine aggregate; its grading is listed in Table 3 and conforms to the requirements of ASTM C33-01 [10]. The physical properties are shown in Table 4. Natural and recycled coarse aggregate: Natural and recycled aggregates were used to prepare the (SCC) mixtures. The natural aggregate (NA) was crushed stone of 12.5 mm maximum size; the recycled coarse aggregate (RA), also of 12.5 mm maximum size, was obtained from demolished cubes previously tested in the concrete technology laboratory of the civil engineering department. The aggregates were separated by crushing the demolished lumps manually and were then cleaned; the properties of the aggregates, obtained experimentally according to ASTM C127 [11], are presented in Table 4. To reduce the amount of adhered mortar, the aggregates were put through a 300-revolution abrasion process using a Los Angeles machine (Table 5). Superplasticizer Superplasticizer (SP) is a chemical admixture used to increase workability without adding more water and is essential for the production of (SCC). The role of (SP) is to impart a high degree of flowability and deformability; however, the high dosages generally associated with (SCC) can lead to a high degree of segregation. The superplasticizer conformed to the requirements of ASTM C494 Types B, D and G [13]. Table 6 shows the typical properties of the superplasticizer. High-performance short (10 mm) polypropylene fiber was used in this investigation; it complied with the requirements of ASTM C1116-02 [12]. Table 7 shows the physical and technical properties of the polypropylene fiber.
Mix Proportions. Owing to the difficulty of determining the real water/cement (W/C) ratio because of the high variation of absorption in the RCA, it was decided to use the basic ACI 211.1 procedure [14]. The guidance of ACI 237R-07 [1] was used for the mix design to achieve self-consolidating concrete. The high absorption of recycled aggregates directly affects the quality of the SCC; to control this high, and often uneven, absorption, the recycled aggregates were pre-soaked for a fixed time, generally the time needed to achieve 80% of the 24-h water absorption [15]. The amount of water added depends on the initial water content and the effective absorption of the (RA) during the mixing period. The (RA) used had a lower density and a higher water absorption (around 6%) than the natural coarse aggregates. All mix quantities were kept the same except for small variations in the amount of superplasticizer, with the aim of achieving equal consistency for all the mixes given the marginally higher water absorption of the recycled aggregate. The water/cement ratio was set at 0.5 (Table 8). For the purpose of the experiment, two groups of concrete were made in addition to the control concrete, which was made only with natural aggregate: in the first group of (RSCC) mixes, the natural coarse aggregate was replaced by recycled coarse aggregate at ratios of 25%, 50%, 75% and 100% by weight; in the second group of (RSCC), the same (RA) replacements were combined with polypropylene fibers added at 0.025%, 0.05%, 0.1% and 0.15% by volume of SCC, to study the combined effect of (RA) and (P.P.F). The quantities of components required to produce 1 m3 of concrete were fixed, except for small variations in the amount of water needed to achieve equivalent consistency, owing to the somewhat higher water absorption of the recycled aggregate. The compositions of the designed mixes are shown in Table 9. Rheological characteristics of fresh (SCC): Self-compacting concrete is characterized by its flowing ability, passing ability, filling ability and segregation resistance; for any concrete to be described as self-compacting it should have these characteristics. In this experiment the following test methods proposed by EFNARC [16] were used: the slump-flow test for flowability, the V-funnel and T500 tests for viscosity, and the L-box test for passing ability. Segregation resistance was assessed visually during the slump flow test. Table 3 gives the values recommended by EFNARC [16]. The slump flow test is a measure of the filling ability of an SCC mixture and provides a procedure to determine the slump flow of self-consolidating concrete in the laboratory or the field; the average diameter of the concrete circle is a measure of the filling ability of the concrete. The time required for the concrete to spread over a circle of 500 mm diameter is also measured (the T500 time); this parameter reflects the viscosity of the concrete and indicates its stability, a shorter T500 time indicating greater flowability (Figure 1). The L-box test is used to assess the passing ability of (SCC), i.e., its ability to flow through tight openings, including spaces between reinforcing bars and other obstructions, without segregation or blocking. There are two variants of the test, namely the two-bar test and the three-bar test.
The three-bar test, which was used in the present investigation, simulates more congested reinforcement. The concrete is poured from a container into the filling hopper of the L-box; the gate is then raised so that the concrete flows into the horizontal section of the box. When movement has stopped, the vertical distances between the top surface of the concrete and the top of the horizontal section of the box are measured at the end of the horizontal section, at three positions equally spaced across the width of the box. Subtracting these from the height of the horizontal section, the three measurements are used to calculate the mean depth of concrete, H2. A similar procedure is followed to calculate the depth of concrete immediately behind the gate, H1. The value of H2/H1 is then reported as the blocking ratio (Figure 2). The V-funnel test is used to determine the viscosity, filling ability and segregation resistance of fresh concrete. The shape of the V-funnel is designed to reveal any blocking of the fresh concrete: if the concrete contains a large amount of coarse aggregate, it needs a longer time to flow. A sketch of the V-funnel apparatus and test is shown in Figure 3. The funnel is filled with around 12 liters of concrete and the time taken for it to flow through the apparatus is measured. In addition, T5min is measured with the V-funnel, which indicates the tendency for segregation: the funnel is refilled with concrete and left for 5 minutes to settle, and if the concrete segregates, the flow time increases significantly (Figures 3, 4 and Table 8). Concrete testing: The compressive strength of the concrete samples was tested after 7 and 28 days of curing. The self-compacting concrete specimens were prepared in general accordance with ASTM C192-88 [17]. For each mix, six standard 150 mm steel cube molds were cast for compressive strength at 7 and 28 days in accordance with ASTM C39-98 [18]; two 100 × 200 mm cylindrical molds were cast for splitting tensile strength, measured in conformity with ASTM C496-86 [19]; and two 100 × 100 × 400 mm prisms were cast for flexural strength at 28 days, in accordance with ASTM C78-02 [20]. Workability The results for the fresh (RSCC) mixtures with and without (P.P.F.) in groups I and II are shown in Table 5 and Figures 5-7. In general, the rheological characteristics of fresh SCC decreased as the (P.P.F) content increased. It can be seen that the slump flow value of the control concrete mixture with natural aggregate was lower than that of the recycled coarse aggregate self-compacting concrete mixtures in group I without (P.P.F.), as shown in Figure 3; this is because of the higher water absorption of the (RA) compared to the natural aggregates, which were used in the air-dried condition [7]. In group II, with P.P.F., the slump flow diameter was reduced because the addition of polypropylene fibers caused a partial blocking effect [21]. Table 10 presents the viscosity of the mixtures through the T500 slump flow results; all the mixtures recorded T500 times between 1.82 and 2.18 seconds. According to the EFNARC [16] specification, a V-funnel time ranging from 6 to 12 seconds is considered adequate. The V-funnel test results for all mixtures in Figure 4 meet the flow time requirement, which indicates good filling ability even with congested reinforcement.
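The sketch below illustrates how the fresh-state acceptance checks described above can be applied programmatically. The V-funnel (6-12 s) and L-box (H2/H1 >= 0.80) limits follow the EFNARC values cited in the text; the slump-flow band of 650-800 mm is the commonly cited EFNARC range, included here as an assumption since the text does not state it, and the mix values are hypothetical examples.

def check_scc(slump_flow_mm, t_vfunnel_s, lbox_ratio):
    # Return pass/fail for each fresh-state criterion.
    return {
        "slump_flow": 650.0 <= slump_flow_mm <= 800.0,  # assumed EFNARC band
        "v_funnel": 6.0 <= t_vfunnel_s <= 12.0,         # EFNARC limit cited in text
        "l_box": lbox_ratio >= 0.80,                    # EFNARC limit cited in text
    }

# Example: a mix resembling those reported (slump flow 680-755 mm, L-box 0.85-0.93)
result = check_scc(slump_flow_mm=720.0, t_vfunnel_s=8.5, lbox_ratio=0.89)
print(result)
print(all(result.values()))  # True -> the mix qualifies as self-compacting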
All the mixes met the criterion that the ratio of concrete heights at the ends of the L-box be no less than 0.80, as shown in Figure 5; the L-box ratios for all mixes were above 0.87. All the mixes showed an even slump flow with no bleeding at the periphery, which indicates good deformability and segregation resistance. In group II, a reduction in slump flow diameter of about 15% was registered on adding 0.15% (P.P.F.) compared with the mixture without (P.P.F.) (Figures 5-7). Compressive strength Table 11 and Figures 8 and 9 show that the compressive strength of the (RSCC) mixtures decreased with increasing (RA) replacement ratio in both the 7-day and 28-day tests. The compressive strength increases with curing time owing to the hydration products [22]. Figure 8 records the compressive strength obtained from the concrete samples at 7 days; the compressive strength was shown to decline with increasing (RA) replacement ratio, a finding in agreement with those obtained in previous studies [23-25]. It can be seen that the maximum loss of compressive strength was 50.1%, at 100% replacement with (RA). The compressive strength increased for all fiber percentages relative to the (RSCC) without (P.P.F), as shown in Figure 10; the reason is that, because of the confinement provided by the fibers, the bonding quality of the concrete increases, and hence the compressive strength increases with increasing fiber content [26]. The optimum addition of polypropylene fibers, 0.1% with 75% (RA), raised the 7-day compressive strength of the mix by 24%. The strength reduction caused by (RA) was more evident at 28 days (Figure 9): the use of 25%, 50%, 75% and 100% recycled aggregates caused a reduction of approximately 18% in compressive strength compared with the sample made with normal aggregates. Adding (P.P.F) raises the compressive strength because of the crack-arresting capacity of the bridging fibers, so that adding 0.1% (P.P.F) to the plain self-compacting mixes containing 75% (RA) increased the compressive strength markedly (about 12.7%) compared with the (RC75%-PPF0.00%) mix (Figures 8-10). Splitting tensile strength From the test results in Table 12 and Figure 11, it can be concluded that adding (P.P.F) can increase the splitting strength of the concrete to varying degrees, owing to its ability to delay fine-crack formation and arrest crack propagation [27]; the splitting strengths of (RC25%-PPF0.025%), (RC50%-PPF0.050%) and (RC100%-PPF0.15%) increased by 4.3%, 8.4%, 14% and 3.3%, respectively, compared with the control concrete. Moreover, the optimum increase in tensile strength, about 5.1%, was found for the mixture with 0.1% (P.P.F) by volume and 75% (RAC). Thus, fiber addition can effectively improve the splitting strength of concrete. Figure 11 shows that the tensile strength of the concrete diminished considerably as the proportion of recycled aggregates increased, as was seen for the compressive strength; however, the degree of reduction in the splitting tensile strength was greater than in the compressive strength [15]. For example, the use of 25%, 50%, 75% and 100% recycled concrete aggregates caused decreases in the tensile strength of approximately 4.1%, 7.4%, 13.3% and 16.1%, respectively, compared with the case with natural aggregates only (NC-RC0%-PPF0%) at 28 days of curing.
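As a small worked example of how the percentage changes quoted above are computed, the sketch below derives the relative change of a mix property against the control mix. The strength values are hypothetical placeholders chosen only to illustrate the calculation, not the measured data of Tables 11-13.

def pct_change(mix_value, control_value):
    # Percentage change of a mix property relative to the control mix.
    return 100.0 * (mix_value - control_value) / control_value

# Hypothetical 28-day strengths (MPa): control vs. a fiber-reinforced recycled mix
control_fc, mix_fc = 38.0, 42.8   # compressive
control_ft, mix_ft = 2.95, 3.10   # splitting tensile

print(f"Compressive strength change: {pct_change(mix_fc, control_fc):+.1f}%")
print(f"Splitting tensile change:    {pct_change(mix_ft, control_ft):+.1f}%")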
Table 12: Average splitting tensile strength results of (RSCC) samples. Flexural strength results The results of the flexural strength tests are presented in Table 13 and Figure 12; the flexural strength of the (RSCC) mixtures decreased with increasing (RA) replacement ratio. For example, the mixes with 25%, 50%, 75% and 100% recycled aggregates had flexural strengths 6.8%, 13.5%, 17.7% and 44.4% lower, respectively, than the control concrete (NC-RC0%-PPF0%) at 28 days of curing. Figure 12 also shows that the addition of (P.P.F) volume fractions from 0.025% to 0.15% increased the flexural strength of the (RSCC) by 4.1%, 10.6%, 13.4% and 46%, successively, with respect to the normal (SCC) specimens (NC-RC0%-PPF0%). Moreover, the highest flexural strength, 3.38 MPa, was obtained for the mix with 0.1% P.P.F and 75% replacement of (RCA). Conclusion Based on the results for self-compacting concrete with (RA) and (P.P.F), the following can be concluded: 1. The results of the present investigation show that both (RA) and (P.P.F) can be used for (SCC) production. 2. The slump flow and L-box ratio of the (RSCC) mixtures increased with increasing coarse recycled aggregate content. The slump flows of all the (RSCC) mixtures ranged from 680 mm to 755 mm and the L-box ratios varied from 0.85 to 0.93. 3. Using (P.P.F) has a negative effect on the rheological properties of fresh (RSCC): it reduces workability and increases both consistency and viscosity; the slump flow declined to 655 mm.
2019-04-30T13:03:24.847Z
2017-07-31T00:00:00.000
{ "year": 2017, "sha1": "e242041d0ffdbe1f5cfcfd21f0ed26c2fc7d7450", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4172/2168-9717.1000207", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "b93bb0c1ba6a4865a6f0b147ec1c1b9c17db7abc", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
269020390
pes2o/s2orc
v3-fos-license
Cellular and Molecular Immunity to Influenza Viruses and Vaccines Immune responses to influenza (flu) antigens reflect memory of prior infections or vaccinations, which might influence immunity to new flu antigens. Memory of past antigens has been termed "original antigenic sin" or, more recently, "immune imprinting" and "seniority". We have researched a comparison between the immune response to live flu infections and inactivated flu vaccinations. A brief history of antibody generation theories is presented, culminating in new findings about the immune-network theory and suggesting that a network of clones exists between anti-idiotypic antibodies and T cell receptors. Findings regarding the 2009 pandemic flu strain and immune responses to it are presented, including memory B cells and conserved regions within the hemagglutinin protein. The importance of CD4+ memory T cells and cytotoxic CD8+ T cells responding to both infections and vaccinations is discussed and compared. Innate immune cells, like natural killer (NK) cells and macrophages, are discussed regarding their roles in adaptive immune responses. Antigen presentation via macroautophagy processes is described. New vaccines in development are mentioned along with the results of some clinical trials. The manuscript concludes with how repeated vaccinations are impacting the immune system and a sketch of what might be behind the imprinting phenomenon, including future research directions. Introduction It has been observed that, in the case of vaccination targeting the influenza (flu) virus, the immune system can produce more antibodies (Abs) directed against virus strains first encountered by an individual rather than strains present in the vaccine [1-3]. This phenomenon was given the name "Original Antigenic Sin" (OAS) by Thomas Francis Jr. [4]. An early demonstration of OAS in humans was presented by de St. Groth and Webster [5]. Two monovalent flu vaccines (FM1 and SW) were administered to subjects divided into groups based on the presence of Ab, as measured by hemagglutination inhibition (HAI) titer to SW, before vaccination. The results showed that the titer of FM1 Ab after SW boosting was as high as that after vaccination with the FM1 vaccine, and the quantity of Ab generated after SW administration was as high as that for FM1. A review about OAS by Monto et al.
[6] described the persistence of Abs to a first infection with respect to a person's age for three different pandemic strains of flu virus: ASw, PR8, and FM1. The data indicated that the older group, >30 yr, had high Ab titers measured by HAI to the 1931 swine strain, ASw; the 17-26-yr group had Ab titers to all strains studied; and the younger subjects, 4-10 yr, had Abs only to the most recent strain at the time, FM1. Children vaccinated against FM1 produced Abs only to FM1, but when they were vaccinated with ASw or PR8 they generated Abs to ASw and PR8 as well as high titers to FM1 [7]. However, adults vaccinated with FM1 produced comparable amounts of Ab to the older strains, suggesting an OAS-type response (Figure 1). Figure 1. Depiction of the "original antigenic sin" phenomenon. The results of an experiment reported in 1957 by Davenport and Hennessy [7] showed direct evidence of the OAS phenomenon. Subjects were all vaccinated with a monovalent recent strain of the virus, FM1. Ab titers after vaccination measured by HAI indicated that the oldest age group, >30 yr, had more Ab generated against previous viral strains, ASw (blue epitopes) and PR8 (orange epitopes), than against FM1 (green epitopes). The young adult age group, 17-26 yr, had an even amount of Ab generated against each virus strain as measured by HAI. The youngest age group, 4-10 yr, had Ab generated only against the vaccine strain, FM1. Antibodies are matched to the colors of the epitopes. Demonstration of protection against exposure to a pandemic flu strain was described by McCullers et al. [8]. The study found that individuals who were vaccinated with the 1976 flu vaccine had protection against the 2009 pandemic strain according to the HAI titers. Phylogenetic analysis showed that the 1976 vaccine strain (A/New Jersey/1976) bore similarities to the 2009 pandemic strain (A/California/7/09), and both had elements present in the 1918 pandemic strain. A more recent study described protection incurred by childhood exposure to the zoonotic strains H5N1 and H7N9, which indicated that exposure to the zoonotic viruses generated Abs that gave protection to novel hemagglutinin (HA) subtypes within the same phylogenetic group [9]. Therefore, if the antigenic differences of virus strains encountered during childhood, or recently via a new novel virus strain, are large compared to currently circulating strains, protection can be achieved, but if strain differences are small, strong protection could be compromised. This phenomenon was referred to as "immune imprinting" [9]. Immune imprinting was later explained as "the bias to use immune memory, independent of whether that immune memory was induced by the very first flu strain an individual is exposed to or an antigenically novel flu virus that an individual is exposed to later in life" [10].
This review presents a brief history of theories of antibody generation, followed by a discussion of how elements of the innate and adaptive immune systems interact to create high-affinity Abs that can protect against flu infection [11]. The subject of how vaccination can be affected by individual infection and immunization history will be covered. Also, human leukocyte antigen (HLA) diversity and cytokine gene expression variations will be suggested as contributors to diverse responses to flu vaccination [12,13]. The review will conclude with a review of efforts to create a more effective flu vaccine, followed by a discussion of how immune imprinting may still hamper these efforts. Discoveries about how repeated vaccinations influence immune responses will be presented as clues to what might be the mechanisms behind immune imprinting. Theories of Ab Generation Human immune responses to flu vaccinations were not readily explained by the early theories of Ab production proposed from 1930 to 1960. The first of these was the instruction model, which proposed that a foreign molecule, an antigen (Ag), would serve as a template for the Ab's structure [14,15]. The second was the selective theory proposed by Burnet and Fenner [16], which proposed that it was not the Ag but the Ab that played a central role in determining specificity. The selection theory was later modified by Burnet [17], who suggested that the immune system generates a multitude of B cells with different hypervariable sequences, and that upon encountering a specific Ag, the B cells with the best affinities respond by undergoing clonal expansion. Most of the B cells produced during the expansion become plasma cells and produce Abs specific to the antigenic determinants (epitopes) of the encountered Ags, but a few differentiate into memory B cells. Each foreign particle, like a virus, has multiple Ags, and each Ag possesses epitopes that may stimulate specific responses in B cells, with help from T cells, to produce Abs with different idiotypes against each Ag's epitopes. Adaptive immunity includes T cells with T cell receptors (TCRs), and B cell clones with high-affinity BCRs to an Ag's epitopes may capture and process more Ag, providing longer cognate interaction with T cells (CD4+ Th cells) to obtain more cytokines, such as IL-4, IL-5, and IL-6, for stimulation, proliferation, and differentiation into plasma cells. This might explain idiotypic differences in the levels of Abs to Ag epitopes. The selection theory was expanded by findings about immune responses to infections and/or vaccinations. In 1974, Niels Jerne presented his immune network theory [18]. This theory proposed that anti-idiotypic Abs could be generated against the variable domain (paratope and/or idiotopes) in an Ab's hypervariable Ag-binding fragment (Fab), i.e., its Fab idiotype. The idiotype of a B cell's epitope-specific receptors (its BCRs) is the same as the idiotype of the Abs that it produces, which recognize an epitope of the Ag. In contrast, TCRs recognize a peptide of the Ag in union with the antigen-presenting cell's (APC's) major histocompatibility complex class I or class II (MHC-I or MHC-II) molecules, for CD8+ and CD4+ T cells, respectively. Anti-idiotypic antibodies are made to Ab idiotypes, but Abs are not typically made to TCRs. These interactions would create a network of clones that have the potential to regulate immune responses. Infections and vaccinations generate a multitude of Abs in accordance with this immune network theory. An illustration
of the immune network theory concept appears in Figure 2. The first infection occurring during childhood would initiate the existence of memory B cell clones that survive from year to year based on the generation of anti-idiotypes, which may have sequences that resemble the viral epitope, and anti-anti-idiotypes that might have specificities to the viral peptide sequences. Development of a method to identify idiotype-driven T-cell/B-cell interactions through sequences of the BCR led to the identification of the corresponding TCR epitope [19]. The method defines patterns of TCR-recognized-epitope-motifs (TREMs) towards a viral Ag's epitope(s) through analysis of BCR sequences. It is possible that this new method would be useful in the creation of more effective flu vaccines, since past attempts at idiotype-based therapies have been disappointing [20].
Infection vs. Vaccination and Immune Response Much has been discovered about the human immune response to flu vaccinations over the past seven decades, but some aspects of the OAS phenomenon remain a mystery, such as the association with early childhood infection as opposed to vaccination [21-24]. A childhood infection supplies a large concentration of Ag to naïve B cells for the development of memory B cells, as well as generating memory CD4+ and CD8+ T cells specific to the infecting strain's Ags. Consequently, anamnestic responses, such as cross-reactive neutralizing Abs, antibody-dependent cellular cytotoxicity (ADCC), and cytotoxic T cells (Tc cells), will be diminished during subsequent infections [24]. New methodologies developed in the last few decades have provided results that shed some light on how the immune system reacts to a flu infection and the yearly flu vaccination [25-30]. The new information has provided some clues as to how OAS functions, and these recent findings have stimulated discussion towards the development of a universal flu vaccine [21,31-33]. Pre- and Post-Vaccination Abs to Flu Proteins Despite a relatively high flu vaccination rate among humans, the flu virus still causes outbreaks every year due to mutations in the amino acid sequence within the head group of the flu's HA protein, which is the main target of neutralizing Abs [34]. These small changes in sequence have been described as a drift from the original viral sequence and can be shown diagrammatically as a phylogenetic tree [8], which illustrates how the flu's HA may change a little each year and gradually move away from its parent HA protein. The genome sequencing of flu viruses taken from a collection of flu viruses in New York state, isolated over a period of several years, reveals how flu has changed by small increments over time [35]. Due to these sequence changes on the head of the viral HA molecule, a new vaccine needs to be designed each year to immunize the human population against the strains that are anticipated to be in circulation. In addition to the HA sequence changes, the virus undergoes N-linked glycan changes that will also impact the generation of protective Abs against the HA protein [3]; this was demonstrated by Ab responses against recombinant glycoproteins representing natural viral diversity, as measured by ELISA. Human Ab responses against H1N1 and H3N2 infections revealed a broad range of responses and cogent evidence of OAS [3]. Flu vaccines have been termed trivalent because they contain three different virus types (two A types and one B type): A/H1N1, A/H3N2, and B. More recently, quadrivalent vaccines have been generated containing two B flu types in addition to the two A types. A major finding from the study of Abs generated against the HA molecule from strains present in the yearly flu vaccine is that vaccines generate Ab diversification, which includes Abs binding to viral components not part of the vaccine [26,36]. A variety of methods have been employed to study Abs in pre- and post-vaccination serums. Included among these studies were B cell sorting and monoclonal Ab (mAb) development, ELISPOT, memory B cell activation, ELISA, HAI, and surface plasmon resonance (SPR) [3,25,37-41]. The HA present on the surface of the virus contains head and stem moieties. Most of the diverse Abs produced by vaccination are against epitopes present on the head of the HA molecule, since most mutations due to drift occur primarily in the head region of the HA molecule [25,26]. In
contrast, the stem region sequences are more conserved. Abs generated against the stem region by certain individuals tend to be rather broadly reactive. These stem Abs may be more broadly reactive, but they will only bind weakly to the whole virus [42]. Unfortunately, revaccination against the same virus strain generates only Abs against the HA head moiety [43]. Abs generated during the yearly flu vaccinations have been found to bind to the strains present in the current vaccine, but also to cross-react with strains absent from the vaccine (homosubtypic cross-reactivity). Table 1 gives the composition of flu vaccines from 2003 to 2017. In our laboratory, this cross-reactivity has been demonstrated by SPR analyses performed using serum IgG from subjects who received the 2006/2007 vaccine and then the 2007/2008 vaccine the following year. Two study subjects who had never received a flu vaccination, and did not receive a vaccination until after their second blood donation, demonstrated low IgG binding to each vaccine at each pre-vaccine time point. However, each subject had an increase in IgG binding to each vaccine at about four weeks post-vaccination (Figure 3A). Serum Ab at 14-15 days after vaccination from subjects who received the 2007/2008 vaccine displayed binding to all vaccines, including the 2008/2009 vaccine, which had strains not present in the 2006 and 2007 vaccines (Figure 3B). All subjects who had received the 2006/2007 vaccine had Ab binding to this vaccine that was highest at two weeks post-vaccination; however, subjects with elevated Ab levels before vaccination generated less of a boost in the amount of Ab post-vaccination. The results indicate the high diversity of responses to the yearly flu vaccine within our human study population (Figure 3B). Control SPR for the 2006/2007 season was carried out by loading vaccine in equal proportions on flow cells Fc2, Fc3, and Fc4. Pre- and post-serum IgG from 2007/2008 subjects was passed over all three of the vaccine-containing cells, with the results showing very low variation in binding between flow cells (Figure S1). Again, results from this chip indicated that elevated levels of flu-specific pre-serum Abs led to lower Ab responses after the vaccination.
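Responses like those above are commonly summarized as geometric mean titers (GMTs) and fold-rises of HAI titer between pre- and post-vaccination samples. The short sketch below illustrates that calculation, using the conventional four-fold-rise criterion for seroconversion; the titer values are hypothetical placeholders, not data from this study.

import math

def geometric_mean(titers):
    # Geometric mean of HAI titers (reciprocal dilutions).
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

# Hypothetical pre/post HAI titers for five subjects
pre = [10, 20, 40, 80, 10]
post = [80, 160, 80, 160, 20]

fold_rises = [b / a for a, b in zip(pre, post)]
seroconverted = [fr >= 4 for fr in fold_rises]  # conventional 4-fold criterion

print(f"GMT pre:  {geometric_mean(pre):.1f}")
print(f"GMT post: {geometric_mean(post):.1f}")
print(f"Fold-rises: {fold_rises}")
print(f"Seroconversion: {sum(seroconverted)}/{len(seroconverted)} subjects")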
A study by Fonville et al. [44] followed individual responses to the yearly flu vaccine by correlating the H3N2 strain with HAI titers over a period of several years for 69 individuals; Ab landscapes were generated by correlating HAI titers to antigenically similar viruses with HA antigenic distance. This information could generate three-dimensional graphs whose appearance resembled a landscape after the data points were connected by smooth curves. A group of six individuals was followed from 2007-2012, and the resulting Ab landscape for each person showed that there was high variability in landscapes between individuals, but for each person the landscape shape was stable from year to year and had its own distinctive features [44]. The finding that each individual had a characteristic response to the vaccination year after year suggests that the genetic makeup of the individual might be affecting the immune response to the flu vaccine. Diverse human factors could include HLA variations, cytokine gene expression levels, and epigenetic environmental influences. In another study demonstrating high variability among individuals, subjects receiving the 2006/2007 vaccine had flu-specific IgG-secreting cells in the blood enumerated with ELISPOT and flow cytometry for 12 to 28 days following the vaccination [40]. It was found that peak frequencies for H1- and H3-specific Ab-secreting cells (ASCs) were 26 ± 10% for H1 and 22 ± 17% for H3 of total ASCs. ASCs peaked between 5 and 10 days after the vaccination. Measurement of HAI titers over the 28-day period indicated that the HAI titers rose concurrently with the percentage of HA-specific ASCs. However, while the percentage of ASCs fell to near baseline levels around 28 days, the HAI titers remained elevated for most participants. One person did not have a measurable response to the 2006/2007 vaccine [40]. The high HAI titers found for the HA-specific IgG in this study peaked around day 12 and remained elevated over the 28 days of the study. This timing of 14-15 days post-vaccination generally corresponds to the peak of high flu-specific IgG found in our studies (Figure 3B). Further, as in Halliley et al. [40], Ab responses after vaccination showed a high amount of variation among study subjects (Figure 3B). It has been shown that a yearly flu vaccination will generate Abs that are homosubtypic (against one type, like H1N1) or heterosubtypic (against all types, even those of animal origin). These cross-reactive Abs have been isolated and characterized by a variety of methods. A protein microarray of HA1 proteins from various flu strains was employed to identify memory B cells before and after vaccination [45]. The results revealed homosubtypic and heterosubtypic Ab production from memory B cells. Corti et al.
[46] immortalized IgG-expressing B cells from four individuals after flu vaccination with H1 and H3 strains and found 20 heterosubtypic mAbs produced by the B cells. These mAbs were capable of neutralizing strains H1, H5, H6, and H9. Evidence of these cross-reactive neutralizing Abs generated by the yearly flu vaccine indicates that the vaccinations are providing a certain level of protection, depending on the individual, against most strains encountered during the flu season. Study of the 2009 Pandemic Strain and Ab Response The 2009 pandemic strain of H1N1 would be considered a shift in the flu's yearly circulation [47]. A crystal structure study of the HA protein from A/California/04/2009 H1N1 revealed that the HA had antigenic sites like those of flu strains circulating early in the 20th century [48]. Among the regions on the HA molecule favored for binding highly cross-reactive IgGs were the Sa and Sb regions, and these were relatively conserved between the three strains studied (SC1918, PR8, and Brisbane07). For this reason, older individuals were more protected from infection than the young, who did not have prior exposure to any of these strains. It is possible that activation of memory B cells specific to these conserved antigenic sites, Sa and Sb, could explain results from our study, where IgG Abs isolated pre- and post-vaccination (12-15 d) during the 2006 season displayed a relatively high boost in binding to the 2010/2011 vaccine, which included the 2009 pandemic strain, in three of the subjects, but two of the subjects did not display increased binding to the 2010/2011 vaccine (Figure 4). The Ab responses to the 2010/2011 vaccine did not show a correlation with age, since the youngest (subject 141) had the highest boost in Ab, and the two subjects with no Ab increase to the 2010/2011 vaccine (subjects 094 and 064) were 51 and 40 years of age, respectively. The Brisbane07 strain, which contains the Sa and Sb sites, may have already been circulating during 2006; the three subjects that displayed a relatively high increase in binding to the 2010/2011 vaccine may have had exposure to the Brisbane07 strain and, therefore, had memory B cells directed toward the Sa and Sb conserved antigenic sites (Figure 4). How these memory B cells may lead to an increase in Ab against the pandemic strain has been suggested and illustrated by Guthmiller and Wilson [10]. Therefore, our results could fit the "antigenic imprinting model", where a new exposure against conserved epitopes results in antigenic imprinting of memory B cells and every new strain gets a place in the hierarchy. Analysis of Ab binding to a random peptide library from the pandemic 2009 HA, through display on the surface of yeast cells, resulted in the identification of five antigenic regions, of which only one was present within the stem of the HA protein.
Study of Ab binding to the pandemic 2009 H1N1 flu by yeast display and deep mutational scanning identified five single-domain Abs (nanobodies) that bound a highly conserved pocket in the HA2 domain of the pandemic HA [49]. These nanobodies bound with high affinity, inhibited virus membrane fusion, and were considered neutralizing. Discovery of these highly conserved Ab binding sites within the HA stem provided hope that a universal flu vaccine could be developed. It was observed that the 2009 pandemic strain possessed a reassortment of elements from human, bird, and swine flu, and it was found that the HA epitopes Sa and Sb could provide complete protection in mice vaccinated with a DNA vaccine containing coding sequences for these regions [50]. Broadly neutralizing Abs directed against the HA stem region have been detected in phage display libraries, and a study was performed to examine the extent of stem-reactive Abs generated by infection with the pandemic 2009 flu [47]. It was found that stem-reactive Abs increased significantly after infection with the 2009 pandemic H1N1, and HAI microneutralization values correlated with this result. However, the higher stem-reactive Ab levels after a pandemic virus infection returned to pre-pandemic levels at two years post-infection. The mAbs generated by plasmablasts obtained from individuals infected by the pandemic 2009 virus revealed that many of the neutralizing Abs were broadly cross-reactive to epitopes within the stem and head domains of multiple influenza strains [42]. It was speculated that these broadly cross-reactive Abs were produced by plasmablasts derived from activated memory B cells specific to conserved epitopes present on a variety of influenza strains. Monoclonal 70-1F02 was able to rescue all mice infected with pandemic 2009 H1N1, PR/8 H1N1, or FM/1 H1N1, while monoclonal 1009-2B06 was able to rescue mice infected with pandemic 2009 H1N1 or FM/1 H1N1. This suggested that these two mAbs had some therapeutic value and could be used to treat patients severely infected with a pandemic strain of flu.
The finding that the neutralizing cross-reactive Abs produced by individuals infected with the pandemic 2009 virus were derived from memory B cells is reflected in findings about Ab generation after vaccination with the A/California/07/2009 (H1N1) strain [37,51,52]. The mAbs derived from post-vaccination plasmablasts were screened for binding to the HA protein of the vaccine strain and other HAs, including H5 and H3. It was found that these mAbs could bind to more than one strain and showed a high degree of somatic hypermutation, indicating they were of memory B cell origin. Several of the Abs showed a high degree of cross-reactivity and were found to bind to the HA stem region [37]. Seven highly cross-reactive neutralizing mAbs that bound to the HA stem region and, specifically, to the fusion peptide of HA2 have also been reported [51]. Broadly cross-reactive heterosubtypic Abs produced by vaccination with the A/California/07/2009 (H1N1) strain protected mice from a lethal infection with the heterologous H5N1 strain [52]. These results have increased speculation that broadly cross-reacting neutralizing Abs to the HA stem could be induced upon yearly vaccination with vaccines based on subtypes of HA not circulating among humans. However, repeated yearly vaccination with the A/California/07/2009 (H1N1) strain as a component of the yearly flu vaccine from 2010 to 2016 has shown that Abs no longer bind the HA stem region, but only bind the HA head region [43]. CD4+ T Cell Memory to Influenza Virus Infection/Vaccination Expansion of T cells normally occurs following a viral infection. This increase in proliferation of pathogen-specific T cells is followed by the programmed cell death of the virus-specific T cells down to a few cells known as memory T cells [53]. Central memory T (TCM) cells can be identified in the peripheral blood by the presence of surface markers, including CCR7, CD45RO, HLA-DR, and CD38. CD4 memory T cell responses to flu infection have been reviewed [54-56]. Naïve and memory T cells become activated into effector memory (TEM) cells by interaction with dendritic cells (DCs) (present in secondary lymphoid tissue) that present flu epitopes via class II HLA molecules on their cell surface. These TEM cells begin proliferating and express Th1 cytokines, such as interferon (IFN)-γ. The TEM cells transition to TCM by upregulation of CCR7. The process of transitioning from TEM to TCM has been found to require IL-2 to sustain the memory cells during downregulation of apoptotic signaling and upregulation of CCR7 [57]. These processes ensure that the rapid contraction of TEM cells does not proceed to memory cell extinction but yields a population of stable CD4+ TCM cells to protect against future pathogen assaults. Some of these memory cells become lung-resident cells by upregulation of CD69 and CD11a. Other CD4+ TCM cells enter the peripheral blood as CD45RO+ T cells and will be activated back to TEM cells in response to vaccination [58]. A study performed in our laboratory during 2003 and 2005 indicated that memory (CD45RO+) CD4+ T cells in the peripheral blood decreased in number between 5 and 12 days after the vaccination (Figure 5A). Further study of the CD4+ T cells during 2003 indicated that the absolute number of CD4+/DR+/CD38+ T cells decreased in the blood between 5 and 12 days after vaccination with the 2003/2004 TIV vaccine (Figure 5B).
Presumably these T cells migrated to lymph nodes to be activated by APCs, such as DCs. A recent human study by Wilkinson et al. examined the potential utility of pre-existing flu-specific CD4+ memory T cells [59]; in this study, humans were challenged with either an H3N2 or an H1N1 seasonal virus, and the CD4+ and CD8+ T cell responses were examined on day 7 and day 28. Results indicated that pre-existing flu-specific CD4+ T cells, but not CD8+ T cells, responded to internal viral proteins and were associated with lower virus shedding and reduced illness. Therefore, pre-existing flu-specific CD4+ T cells will offer some protection from severe illness. A study of memory T cell populations generated after stem cell transplantation in humans revealed that memory cells could not revert to naïve cells, but after several months post-transplant the thymus will start producing naïve T cells of donor origin [60]. An analysis of CD4+ naïve T cells, TEM, and TCM in humans following the 2017/2018 vaccination in our laboratory indicated that at five days post-vaccination the number of naïve CD4+ T cells (CD45RA+/CCR7+) decreased, and the number of CD4+ TEM (CD45RA−/CCR7−) increased slightly (Figure 5C). At two weeks post-vaccination, the number of CD4+ TCM (CD45RA−/CCR7+) increased, and the number of naïve CD4+ T cells increased slightly over pre-vaccination counts, while the number of TEM decreased slightly. At 63 days post-vaccination, all CD4+ T cell numbers were back to pre-vaccination levels. These results suggest that the thymus generates more naïve cells after the vaccination, since memory cells cannot revert back to naïve cells, as indicated by Rufer et al. [60]. Therefore, the increase in naïve T cell counts at 14 days post-vaccination had to be due to naïve CD4+ T cells originating from the thymus. The results portray a scenario where naïve and TCM cells are activated to become CD4+ TEM, which then proceed to become TCM at 14 days post-vaccination. The number of flu-specific TCM cells then declines with time, perhaps due to decreased antigen stimulation or some other mechanism. In the case of CD8+ memory T cells, a quick cytolytic memory response to a second infection would aid in control of the infection by decreasing the viral population. Most flu vaccines are composed of inactivated viruses and do not elicit a strong CD8+ T cell response. However, the vaccines promote generation of helper CD4+ TCM, and these have been demonstrated to aid in the generation of pathogen-specific CD8+ memory T cells that can be activated by a secondary infection with the virus [56,61,62]. In this manner, the present vaccines could promote CD8+ T cell involvement. Circulating CD4+ memory T cells are able to help B cells develop into ASCs. With respect to influenza vaccination, HA-specific CD4+ memory T cells found in the blood are CXCR5+/PD1+/CXCR3− [63]. These cells are now known to be of follicular origin and are termed CD4+ follicular helper (Tfh) T cells [64-68]. It has been established that Tfh cell production is promoted by a feed-forward loop involving the transcription factors Bcl6 and Tox2 [67]. Figure 6 gives an illustration of the feed-forward loop proposed by Xu et al. [67].
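The CD45RA/CCR7 scheme used above to distinguish naïve, central memory, and effector memory CD4+ T cells lends itself to a minimal sketch of the gating logic. The classification below follows the marker combinations given in the text; the TEMRA label for CD45RA+/CCR7− cells is a conventional addition, not discussed above, and the example cell list is hypothetical.

def classify_cd4_subset(cd45ra, ccr7):
    # Classify a CD4+ T cell by the CD45RA/CCR7 scheme described in the text.
    if cd45ra and ccr7:
        return "naive"                     # CD45RA+/CCR7+
    if not cd45ra and ccr7:
        return "central memory (TCM)"      # CD45RA-/CCR7+
    if not cd45ra and not ccr7:
        return "effector memory (TEM)"     # CD45RA-/CCR7-
    return "TEMRA"                         # CD45RA+/CCR7- (conventional label)

# Example: tally subsets in a hypothetical list of (CD45RA, CCR7) flags
cells = [(True, True), (False, True), (False, False), (False, True), (True, False)]
counts = {}
for ra, c7 in cells:
    subset = classify_cd4_subset(ra, c7)
    counts[subset] = counts.get(subset, 0) + 1
print(counts)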
Memory CD4 + T cells specific for other membrane proteins, like M1 and NA, may also be effective in promoting a protective immune response [69-71]. However, memory cells directed toward the NP protein may be a negative factor in the Ab response [63]. Consequently, vaccines that promote CD4 + memory toward the virus membrane proteins HA, NA, and M could provide help to B cells that would lead to ASC secretion of cross-reactive Abs. The generation of neutralizing cross-reactive Abs to crucial flu membrane proteins could protect against future pandemic strains [72]. In a study by Wild et al. [68], pre-existing flu-specific CD4 + T cells were studied in the context of the yearly seasonal vaccine. Subjects were divided into three groups: 1. never vaccinated, 2. not vaccinated in the past 1 or more years, and 3. vaccinated in the previous year. By analyzing CD4 + T cell specificity to the HA 118-132 epitope and HA IgG titers, it was found that CD4 + T cell and Ab responses to the vaccination were closely associated with a person's infection and vaccine history. A strong "early" response was obtained for naïve participants and participants who had not been vaccinated the year before, while participants who had received the vaccine the year before had a somewhat delayed response. In the study by Wild et al. [68], the transcription factors Tox and Tox2 were evaluated for their expression in circulating Tfh (cTfh) cells. It was found that Tox expression was higher in cTfh than in non-cTfh cells four days after vaccination. In the peripheral blood, cTfh cells can be distinguished from non-cTfh cells by their characteristic cell markers. Wild et al. [68] performed multiparametric cytometry for T cell markers, including those for Tfh cells. It was found that the flu vaccination promoted the proliferation of CD4 + T cells that displayed markers specific to Tfh cells and were ICOS + and CD38 + [65]. The ICOS + /CD38 + subset of Tfh cells was associated with those individuals who were identified as "early" responders (day 4) after the vaccination. The Tfh cells are crucial for the development of humoral immunity because they provide help to B cells in germinal centers, via ICOS/ICOSL interaction, to promote antibody production [73]. Further analysis found that the early Tfh response and HA-specific antibody levels (day 28) were directly associated. Both baseline levels of HA-specific IgG and the number of HA 118-132 -specific CD127 + /CD4 + T cells also had an influence on IgG generation after vaccination. The fold change of Ab levels was lower in the presence of high HA-specific IgG and low HA 118-132 -specific CD127 + /CD4 + T cells [68]. During the study of a universal flu vaccine construct created using a replication-competent vaccinia strain (Wyeth/IL-15/flu), it was observed that memory CD4 + T cells were needed for early Ab production, and these memory cells were primarily of the Tfh and Th1 type [66]. These findings seem to agree with the findings of Wild et al. [68] because, after applying dimension reduction with t-distributed stochastic neighbor embedding (tSNE) analysis to their data, it was found that four groups were generated, of which groups one and four were Th1-like and groups two and three were Tfh-like. Groups one and four corresponded to individuals who had a "late" response to the vaccination, and groups two and three corresponded to individuals with an "early" response to the vaccination.
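The tSNE grouping described above can be sketched with standard tools. The following is a minimal illustration of that style of analysis using scikit-learn; the marker matrix is a random placeholder, and the panel size, perplexity, and use of k-means to delineate four groups are assumptions for illustration, not the published workflow.

```python
# Minimal sketch of tSNE-based grouping of cytometry data, in the spirit of
# the Wild et al. analysis described above. The data are random placeholders;
# a real analysis would use per-cell marker intensities (e.g., ICOS, CD38,
# CXCR5, PD-1) exported from the cytometer.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))  # 500 cells x 8 marker intensities (placeholder)

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(emb)

# Downstream, each cluster would be inspected for Th1-like vs Tfh-like
# marker profiles, as was done for the four groups described above.
print(emb.shape, np.bincount(groups))
```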
CD8 + T-Cell Response to Influenza Infection/Vaccination

Several studies have investigated the differences between the inactivated trivalent flu vaccine (TIV) and the live attenuated flu vaccine (LAIV) with respect to humoral and cellular responses [74-76]. The two vaccines are administered through different routes (TIV intramuscularly and LAIV as a nasal spray) and, therefore, initiate CD8 + T cell responses through different antigen presentation pathways. LAIV presumably enters APCs in the mucosal tissue, undergoes replication, and promotes a strong innate immune response, whereas the inactivated virus is administered intramuscularly and promotes systemic immune responses. To measure cellular responses, peripheral blood mononuclear cells (PBMCs) were isolated pre- and post-immunization (28-42 days) from adults and children before and after receiving the 2004/2005 and 2005/2006 vaccines (LAIV or TIV) and were cultured for 17 h in the presence of the live flu A/Wyoming (H3N2) strain. After this incubation, the numbers of CD4 + and CD8 + T cells expressing IFN-γ were evaluated by flow cytometry [74]. It was found that the numbers of IFN-γ-positive CD4 + and CD8 + T cells from the 2005/2006 season vaccination negatively correlated with the numbers of IFN-γ-positive CD4 + and CD8 + T cells from the 2004/2005 season [74]. In the children, the flu-specific CD8 + T cell counts were slightly higher than those from adults. However,
although neutralizing (HAI) Ab responses were negatively associated with baseline HAI titers for both vaccines, the LAIV vaccine had, overall, slightly lower protective Ab generation as measured by HAI. Study of the flu-specific CD8 + T cell population in the adults at 10 and 28 days post-vaccination with the 2004/2005 TIV formulation indicated no change in the number of CD8 + T cells expressing perforin, but increased expression of CD27 at both 10 and 28 days post-vaccination [75]. Since CD27 is expressed by naïve and central memory cells, the significant rise in CD27 suggested that the number of naïve and central memory cells increased after vaccination. The absence of a change in perforin expression indicates a lack of CD8 + T cells with cytotoxic capability after the TIV immunization. No change was observed in the overall number of CD8 + T cells [75]. In our laboratory, cell counts for T cells, B cells, and NK cells did not significantly differ from pre-vaccination values at 5-12 days post-vaccination during the 2003/2004 and 2005/2006 seasons in adults vaccinated with the TIV seasonal vaccine (Figure S2A,B). All TIV vaccines from 2003-2006 contained the A/New Caledonia (H1N1) strain, while the H3N2 and B viruses differed between vaccines. Immunophenotyping of study subject volunteers during 2006 in our laboratory showed no change in absolute lymphocyte counts among the no-vaccination controls, but individuals who had received the LAIV vaccine in 2005 had a significant decrease in the number of CD8 + T cells in their blood (Figure 7A,B). Study subjects who had not received the LAIV vaccine in 2005 responded to the 2006/2007 vaccine with an increase in NK cell counts at 5-7 days after the vaccination (Figure 7C). Enumeration of lymphocytes at 3 and 14 days post-vaccination with the 2007/2008 vaccine indicated a decrease in CD4 + T cells and B cells at 3 and 14 days after the vaccination and a decrease in T cells at 14 days after the vaccination (Figure 7D). These results suggest that the composition of the vaccine will affect lymphocyte responses to the vaccination. Jegaskanda et al. [76] investigated the role of ADCC Abs and protection from flu infection in five cohorts as follows: (1) adults given monovalent H1N1pdm09 inactivated subunit vaccine (ISV), (2) adults vaccinated with monovalent H1N1pdm09 live attenuated vaccine (LAIV), (3) children vaccinated with seasonal ISV followed by seasonal LAIV, or with two doses of seasonal LAIV, (4) adults with community-acquired A H1N1pdm09 infection, and (5) adults experimentally challenged with influenza A (H3N2) virus. Results indicated that ADCC increased after natural infection and after vaccination with ISV in children and adults. Further, there was no increase in ADCC Abs after vaccination with the pandemic or seasonal LAIV vaccines. There was also no correlation between HAI titers and the amount of ADCC Abs [76]. Results from our laboratory for the enumeration of lymphocyte subsets at 5-7 days following vaccination with the inactivated TIV vaccine from 2010/2011 and 2016/2017 containing the pandemic strain, A/California/07/2009 (H1N1), revealed that the absolute number of CD8 + T cells increased following the vaccination (Figure 8A,B). This suggested that the pandemic strain of the virus promoted a cytotoxic response immediately after vaccination. When the pandemic 2009 strain was removed from the yearly TIV vaccine in 2017/2018, the increase in CD8 + T cells was not observed after the vaccination, but T cell numbers did increase at 14 days post-vaccination (Figure 8C). Therefore, the type of vaccine and the strain composition play major roles in how the T cells respond to the flu vaccination [38].
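Several of the studies above summarize humoral responses as HAI titers. By convention these are reciprocal dilutions, commonly summarized as a geometric mean titer (GMT), with seroconversion defined as at least a four-fold rise over baseline. A small worked sketch with hypothetical titers:

```python
# Hypothetical pre/post HAI titers (reciprocal dilutions) for five subjects;
# computes the geometric mean titer (GMT) and seroconversion (>= 4-fold rise),
# the conventional summaries for HAI data such as those discussed above.
import math

pre  = [10, 20, 40, 10, 80]
post = [40, 160, 80, 20, 320]

def gmt(titers):
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

fold_rises = [p2 / p1 for p1, p2 in zip(pre, post)]
seroconverted = sum(f >= 4 for f in fold_rises)
print(f"GMT pre={gmt(pre):.1f}, post={gmt(post):.1f}")
print(f"seroconversion: {seroconverted}/{len(pre)} subjects")
```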
NK Cells and Response to Influenza Virus Infection/Vaccination

It has been well established that NK cells are an important component of the innate immune response to flu infection, and several recent reports have shed further light on their role in humans [77-83]. During acute flu infection, the number of CD56 bright CD25 + NK cells decreased in the peripheral blood, but after intramuscular flu vaccination these cells increased in the peripheral blood [82]. Moreover, acute infection led to decreased plasma levels of inflammatory cytokines, including IFN-γ, macrophage inflammatory protein (MIP)-1β, interleukin (IL)-2, and IL-15. The data suggested that this NK cell subset was recruited into infected tissues to aid in clearing the virus. A comparison of the NK response to infection of human PBMC cultures with either a seasonal virus or a pandemic flu A strain revealed that the NK cells responded differently to the two viral strains [83]; the magnitude of IFN-γ expression by NK cells was higher for the Cal/09 pandemic strain. By mass cytometry, it was determined that the difference between strains was due to the amount of CD112 and CD54 on the surface of the targeted infected cells, which in this case were monocytes [83]. Cal/09-infected cells maintained a small amount of CD112 and CD54 markers on their cell surface, but cells infected with the seasonal strain showed no evidence of CD112 and CD54 surface markers after infection. Therefore, live viral strains that lead to acute infection also promote upregulation of inflammatory cytokines and more cytotoxicity. However, it was noted that the inactivated viral strains in TIV do not decrease the presence of CD112 and CD54 on the surface of the monocyte cells [83]. A study of seasonal-flu-vaccinated mice showed that NK cells are responsible for decreasing illness after infection with a pandemic flu strain, such as A/California/4/2009 [84].
The importance of NK cells and their contribution to the development of a protective response to flu infection following vaccination with the human TIV yearly formulation has been the subject of investigation [85]. The study evaluated IFN-γ secretion by NK cells after vaccination with the 2003/2004 and 2005/2006 vaccine formulations [85]. Both vaccines contained the A/New Caledonia/20/99-like (H1N1) virus. The study included 10 subjects: 4 received either the 2003/2004 or the 2005/2006 formulation, 1 received both vaccines, and 2 were unvaccinated controls. Blood was drawn from each subject pre-vaccination and then once per week for eight weeks post-vaccination. Lymphocytes were isolated by lymphocyte separation medium and frozen after each sampling. Frozen lymphocytes were thawed and suspended in RPMI medium + 15% fetal bovine serum, and the numbers of NK, CD4 + , and CD8 + T cells were determined by flow cytometry. No changes were observed in the overall lymphocyte profile before and after vaccination for either vaccine. Lymphocytes were then placed in culture and stimulated with A/New Caledonia/20/99 whole virus, and the cells expressing IFN-γ were quantified by flow cytometry. Additional cultures were stimulated with HA and M1 peptide mixes from the A/New Caledonia strain (79 HA peptides) and the A/Wisconsin/4754/94 strain (61 M1 peptides). Results of the study indicated an increase in IFN-γ-expressing NK cells post-vaccination after stimulation with the whole virus strain, but stimulation with the HA or M1 peptides did not indicate any increase in IFN-γ expression.

The role of NK cells in the immune response to flu vaccination was investigated in several studies in mice and one in human cell lines or isolated human NK cells [86-88]. It was found that epitopes on the HA head could promote ADCC, which is carried out by NK cells for the clearance of virus-infected cells [88]. These HA epitopes were designated E1 and E2, and mice infected with the A/Hong Kong/415742Md/2009 (H1N1)pdm09 virus developed Abs against these HA regions. However, it was found that although ADCC activity increased and the viral load in the lungs was slightly lowered, the ADCC activity increased alveolar damage and increased mortality. It was concluded that the ADCC activity led to inflammatory cell infiltration into the lungs in the E1-vaccinated mice upon H1N1 flu challenge. Therefore, it was concluded that vaccines containing domains within the HA head that elicit an ADCC response would be detrimental rather than protective, and potential universal flu vaccines would have to strike a balance between the harmful and helpful effects of ADCC. Guillonneau et al. [87] studied cross-reactive immunity of cytotoxic CD8 + T lymphocytes toward conserved regions of the HA protein. It was found that including α-galactosylceramide (α-galcer) as an adjuvant component of the flu vaccine prompted NKT cells to increase expression of indolamine 2,3-dioxygenase (IDO). Although IDO acts as an immune suppressor, it promoted the survival of the cross-reactive memory CD8 + T cell population, thus increasing protection against challenge by a potential pandemic strain. Therefore, upregulation of IDO-expressing NKT cells by inclusion of α-galcer as a vaccine adjuvant would be one method of increasing protection by the yearly TIV flu vaccine.
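The IFN-γ flow readout used in the NK study above is typically reported as the percentage of gated cells that are cytokine-positive, with an unstimulated culture subtracted as background. A minimal sketch with hypothetical event counts (the exact gating and background handling in the cited study are not specified here):

```python
# Sketch of the IFN-gamma readout described above: percent IFN-gamma+ NK
# cells after whole-virus stimulation, background-subtracted against an
# unstimulated control. All event counts are hypothetical.
def pct_positive(positive_events: int, total_events: int) -> float:
    return 100.0 * positive_events / total_events

stim_pct   = pct_positive(420, 12_000)  # stimulated culture
unstim_pct = pct_positive(60, 11_500)   # unstimulated control
specific = max(stim_pct - unstim_pct, 0.0)
print(f"virus-specific IFN-g+ NK cells: {specific:.2f}%")
```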
Viral Antigen Presentation by MHC Molecules: The Role of Autophagy

It has now been established that the cellular process of autophagy can deliver peptide-loaded MHC class II molecules to cell surfaces for presentation to CD4 + T cells [89]. This non-canonical function of the macroautophagy machinery was first reported by Paludan et al. [90] for presentation of Epstein-Barr virus nuclear antigens to CD4 + T cells by EBV-positive lymphoma cells. For example, in the case of vaccination, extracellular antigens contained in the flu vaccine would enter the APC by LC3-associated phagocytosis (LAP) or, in the case of infection, newly synthesized viral proteins would be bound to ubiquitin and trafficked into lipid membrane vesicles coated with the autophagy-related gene protein 8 (ATG8) ortholog GABARAP. The ubiquitinated viral protein cargo is attached to p62, a scaffold protein in the phagophore, and then channeled through the autophagy pathway into the autophagosome and on to the lysosome for breakdown of the viral proteins into peptides before entering the MHC class II compartment (MIIC), where the peptides are loaded onto the MHC class II molecules. This is a simplification of a process that includes many factors and autophagy-related proteins (ATGs). ATG4 is a cysteine protease that functions to convert pro-factors into their active forms. At present, over 40 ATGs have been identified. Some IgG Abs bound to virus may aid endocytosis by binding to the FcγRs of APCs. Some viral proteins from vaccinations may also bind to TLRs. Both FcγR and TLR binding can lead to recruitment of LC3 lipidation factors, including many ATGs. The lipidation of LC3 is stabilized by NADPH oxidase 2 (NOX2). When the LC3-associated endosome or phagosome fuses with the lysosome, the viral proteins are reduced to peptides and channeled into MIICs. Peptides from the degradation process are then loaded onto the MHC class II molecules, which are chaperoned to the cell surface. Antigen presentation through the autophagy machinery is illustrated in Figure 9. Regulation of autophagy is associated with the availability of ATP: depletion of ATP and an increase in AMP are signals of nutrient starvation and will increase autophagy. Conversely, nutrient availability stimulates mTOR, the mechanistic target of rapamycin, which acts by phosphorylating ULK1, a modulator of macroautophagy, thereby decreasing the autophagy process.
It has become evident that autophagosomes selectively target their cytosolic cargoes with the assistance of autophagy receptors (ARs). Among these receptors is the TAX1-binding protein, also known as T6BP (TAX1BP1/T6BP). Because the ARs contain ubiquitin- and LC3-binding domains, they can bind ubiquitinated proteins and traffic them into newly forming autophagosomes through interaction with LC3. Upon analysis of several ARs via gene silencing, it was found that T6BP was the only one capable of effectively enhancing presentation of autophagy-dependent antigens to CD4 + T cells [91]. It was found that T6BP functions in both autophagy-dependent and autophagy-independent endogenous processing and presentation of antigens by MHC class II molecules. The silencing of T6BP resulted in the rapid degradation of the invariant chain CD74 associated with the MHC class II molecule. Further study indicated that T6BP regulated degradation of CD74 through binding to the ER chaperone protein calnexin (CANX). Therefore, T6BP is needed for regulation, through CD74 degradation, of the processing and presentation of viral antigens by MHC class II molecules to CD4 + T cells.
It has been found that autophagy is necessary for maintaining the viability of both CD8 + and CD4 + T CM [92,93]. In a model system where autophagy is deleted in mature cells, a comparison of autophagy function between CD8 + and CD4 + T cells reported that the elimination of autophagy greatly impacted the survival of the CD8 + T cells but did not affect the viability of the CD4 + T cells. Further study, by injection of antigen-experienced CD4 + memory T cells into naïve mice, indicated that autophagy-deficient cells were incapable of transferring humoral immunity. It was suggested that the autophagy-deficient CD4 + T cells suffered from the toxic effects of lipid overload and elevated mitochondrial activity, because of defects in mitochondrial function and elevated lipid levels in these cells [92]. A study of young vs. old subjects receiving vaccination for respiratory syncytial virus (RSV) demonstrated that memory CD8 + T cells displayed decreased autophagy in the elderly (>65 yr) subjects [93]. It was found that this was due to a decrease in the autophagy-related metabolite spermidine. Spermidine regulates autophagy via hypusination of eIF5A, which then regulates the synthesis of TFEB, a transcription factor that contains two triproline motifs in humans. The addition of spermidine to PBMC cultures from older donors stimulated with anti-CD3/CD28 was found to increase memory CD8 + T cell viability through increased autophagy via a two-fold improvement in eIF5A and TFEB expression.

Presentation of epitopes from exogenous antigens and phagocytosed material on MHC class I molecules has been termed cross-presentation. This is an alternate path employed by DCs and macrophages [94,95]. Most of the routes involving Ag capture for cross-presentation are through phagocytosis. Peptide loading onto MHC class I molecules can occur in the endoplasmic reticulum or the phagosome, followed by transport to the cell surface. In the case of DCs, only CD8 + DCs are capable of cross-presentation in mouse cells. Here, cross-presentation takes place in conjunction with the autophagy pathway of the cells. In humans, CD1a + but not CD14 + DCs were capable of cross-presentation [95]. As in the case of the cross-presentation-capable mouse DCs, the human CD1a + DCs displayed ubiquitinated aggregates within the cell after activation by culturing overnight. Cell imaging results for the CD1a + DCs indicated that cross-presentation was being performed through the autophagy pathway.
Possible evidence of direct involvement of influenza A nucleoprotein in cross-presentation has been suggested through construction of a fusion protein containing the transduction domain of Tat from HIV type 1 and the C-terminus (amino acids 301-498) of influenza A NP, which carries an HLA-B27-restricted epitope [96]. It was demonstrated that the fusion protein was able to enter target cells and become processed correctly, and the flu NP peptide was presented on class I HLA molecules. Although processing occurred within 1 h after the protein entered the cell, confocal microscopy results showed that most of the fusion protein accumulated in the trans-Golgi network. It was believed that the C-terminal peptide of influenza A NP directed the fusion protein to the trans-Golgi network, where it could be directed for proteolytic processing, perhaps via the autophagy process, for peptide insertion into the class I HLA molecule. The HLA-B27-restricted peptide of the NP protein presented on B-LCL cells was able to activate cytolytic CD8 + T cells, as demonstrated by 51 Cr-release analysis.

The Role of Macrophages in Influenza Infection/Vaccination

Influenza A virus H1N1 is capable of infecting macrophages, but viral replication reaches a "dead end" in alveolar macrophages and new virions are not released. The interaction of flu and macrophages has been reviewed [97-99]. It has been determined that flu replication can continue in marrow-derived macrophages. Because the macrophage population is very diverse, replication has been studied in many types of macrophages, and it has been found that viral replication is viral-strain and macrophage-subtype dependent [97]. The replicative ability of 16 HA subtypes was studied, and it was revealed that H5N1 alone had by far the best ability to replicate in macrophages, due to its ability to overcome early blocks in the replication process. In humans, only the later blocks of replication have been studied well. These are the "dysfunction of nucleocytoplasmic trafficking and viral proteins" and "defective assembly and budding of infectious virions" [97]. Mature macrophages can be divided into two categories, M1 (classically activated) and M2 (alternatively activated), depending on the profile of cytokine expression after activation. Both M1 and M2 macrophages can be infected, but M2 are more susceptible to infection with H5N1 and CA/09. After infection, M2 macrophages become more M1-like and secrete the inflammatory cytokine TNF-α, which has been blamed for contributing to the hypercytokinemia and enhanced inflammation associated with H5N1 infection [97,98]. In the lung, the flu virus infects both epithelial cells and macrophages, and reasons for the flu replication blockage in macrophages compared to the absence of blockage in epithelial cells have been described [99]. Further, infected macrophages display increased phagocytosis of apoptotic epithelial cells, which aids in control of the infection.
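The Cr-release analysis cited in the cross-presentation study above is conventionally reported as percent specific lysis, computed from experimental, spontaneous, and maximum release. A minimal sketch with hypothetical counts-per-minute values:

```python
# Percent specific lysis for a chromium-release cytotoxicity assay, the
# standard readout behind the Cr-release analysis cited above.
# All cpm values are hypothetical.
def percent_specific_lysis(experimental: float, spontaneous: float,
                           maximum: float) -> float:
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

lysis = percent_specific_lysis(experimental=1850.0,  # targets + CTL
                               spontaneous=400.0,    # targets alone
                               maximum=5200.0)       # detergent-lysed targets
print(f"specific lysis: {lysis:.1f}%")
```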
In germinal centers (GCs), large macrophages were identified over 100 years ago and were given the name tingible body macrophages (TBM) because of the presence of phagocytosed lymphocytes in their cytoplasm. TBM are huge cells and have been called "Gargantuan chameleons of the germinal center" [100]. Over the years it has been speculated that the involvement of TBM in the immune response to infection was primarily limited to phagocytosis of B cells that had undergone apoptosis [101]. In mice it was noted that germinal center numbers of TBM decreased as the mice aged, and that TBM might be important for regulating the magnitude of the germinal center response to the infection [102]. This hypothesis was supported by an experiment with an ovalbumin (OVA)-specific T H hybridoma that secreted IL-2 upon stimulation. When these cells were cultured with OVA-bearing TBM, IL-2 secretion was inhibited. It was found that indomethacin added to the cultures could overcome this inhibition. Therefore, a role in downregulating the germinal center reaction was proposed for the TBM [103]. Impaired function of the TBM in their role of clearing apoptotic cells in the germinal center can result in autoimmunity. Experiments with Mer −/− mice demonstrated that Mer-deficient TBM had a decreased ability to clear apoptotic cells, which led to an increase in antibody-forming cells [104]. Mer is a tyrosine kinase receptor that functions by transducing signals from the extracellular matrix into the cytoplasm and, thus, can regulate cellular processes such as phagocytosis. These data indicated that Mer on TBM has an active role in helping TBM clear apoptotic cells in the germinal centers. Recent studies of TBM have been performed by imaging their activities via movie recordings [105,106]. It was found that TBM are stationary cells that phagocytose B cells with highly dynamic protrusions and accommodate the final stages of B cell apoptosis. Germinal center TBM are derived from bone marrow-derived precursors within lymphoid organs prior to challenge by infection or vaccination [105]. This macrophage differentiation process is driven by GC B cells. During immunization, TBM precursors in the follicle are activated by apoptotic B cell fragments and migrate to the GC, where they appear as stationary TBM. In the GC, the number of dead cells increases after the immunization, and the TBM capture apoptotic fragments with long, extended active processes, followed by phagocytosis of the fragments [106].
Flu vaccine development

Presently, yearly flu vaccines are composed of inactivated virus of H1, H3, and two B strains. The vaccination goal is to generate neutralizing Abs against the head group of the HA protein based on the viral strains circulating during the current season [34]. Selected viral strains are grown in eggs, and the process of producing the vaccine can take as long as six months. This runs the risk that the circulating viral strains will have changed by the time the vaccine is ready for distribution. In addition to these difficulties, the vaccine is, at best, 60% effective at protection, and any protection gained will wane by the end of the flu season. Therefore, past approaches to hitting the moving target of circulating seasonal viral strains have not been adequately effective at warding off viral infection and viral spread and have left the world vulnerable to another possible influenza pandemic. For these reasons, in recent years there has been a surge in novel flu vaccination approaches. These approaches have been described in several manuscripts [33,107-109]. Many of these new potential vaccine designs have incorporated the findings mentioned previously, combining viral epitopes with stimulants of certain host cell types, such as mucosal cells, to achieve better and longer-lasting viral clearance capacity [110]. A DNA-based vaccine encoding conserved CD4 + T cell epitopes was used to vaccinate HLA-DR4 transgenic mice [111]. The vaccine consisted of a DNA plasmid that carried the coding region for 20 virus epitopes. The viral proteins included were NP, M1, PA, PB1, PB2, and NS2. The immunized mice were challenged with a lethal dose of PR8, and 70% of the mice receiving the plasmid containing the viral epitope DNA survived, but only 10% of the mice receiving the empty control plasmid survived.
One approach to flu vaccination has been induction of mucosal immunity through an intranasal spray of attenuated live flu virus. By immunizing in this manner, the aim is to activate the mucosal immune system to produce secretory IgA, which has a role in the prevention of respiratory illnesses [112]. However, nasal spray vaccines in use at present have not been able to increase protection over the injected seasonal vaccine. To increase the effectiveness of the intranasal spray vaccines, an assortment of substances, known as mucoadhesives, have been examined to increase the residence time of the vaccine in the nasal cavity. These include cellulose, polyacrylate, starch, alginate, and gellan. Further, experiments with adjuvants that facilitate the transport of antigens through the mucus and epithelial cell layers are being performed. Among the transport enhancers are endotoxins (whose safety has been questioned), oil-in-water nano-emulsions, and mannitol, among others. A vaccine manufactured by Berna Biotech incorporated an A-B toxin as an adjuvant in its nasal spray, and this was found to be linked to at least a 19-fold higher rate of Bell's palsy after vaccination compared to the control group. Therefore, toxins are to be avoided as adjuvants. In addition, the viruses for vaccines are grown in eggs, and early vaccine production efforts yielded vaccines containing several protein contaminants, including endotoxins, which resulted in safety issues. A study of endotoxin content in various vaccines showed some direct association between endotoxin content and the severity of systemic reactions [113]. Important for intranasal vaccines is the targeting of respiratory M-cells, cells of the mucosa-associated lymphoid tissue (MALT) that can transport antigenic material to APCs. Since flu A-type viruses can adhere efficiently to M-cells in vitro, it may not be necessary to add a facilitator for M-cell targeting to a flu vaccine preparation. Experimentation is ongoing to develop an efficient applicator for intranasal vaccine delivery. In the US, the AccuSpray device is currently being employed to administer FluMist, but it too can be improved for delivery and dosage. In an attempt to develop a safer LAIV, a split-segment vaccine was made by overlapping reading frames of the M1 and M2 viral RNAs [114]. This split-segment vaccine was safe when tested in mice and showed better efficacy than the current temperature-sensitive LAIV.

Although potentially dangerous, the addition of adjuvants to vaccine preparations has been found to increase the efficacy of flu vaccines. Increasing the cytolytic activity of CD8 + T cells proved to be helpful. Guillonneau et al.
[87] found that adding α-galactosylceramide (α-galcer) to the TIV promoted NKT cell expression of IDO, which in turn promoted the survival of memory CD8 + T cells. Therefore, α-galcer would be a potential candidate for an adjuvant in a seasonal flu vaccine. The adjuvant CAF01 was added to the seasonal split vaccine, and the adjuvanted vaccine was tested in ferrets [115]. The results indicated that the adjuvanted vaccine was able to induce CD4 + T cell and antibody responses for protection against challenge by heterologous viruses. Strong immunogenicity was observed by adding a tocopherol-based oil-in-water emulsion adjuvant (AS03) to a prepandemic H5N1 virus vaccine [116]. A controlled phase trial in human adults demonstrated that the adjuvanted vaccine generated higher neutralizing antibody titers against clade 2.2 (60.7% increase) and clade 1 (38.3% increase) than the nonadjuvanted vaccine. The adjuvant AS03 was found to be effective in a second vaccine trial for H5N1 [117]. It was determined that after the first immunization, antibodies against the stem region of HA were cross-reactive and very protective. However, after booster vaccination, antibodies were generated primarily against the HA head region and were not as protective. It was speculated that the antibodies against the stem region from the first vaccination were blocking epitopes on the stem and inhibiting further antibody generation.

As indicated earlier in this article, it has been found that there are regions of the viral HA protein that are conserved, and Abs directed against these regions can be neutralizing and afford protection. Therefore, many vaccine approaches involving the HA protein have been developed and reviewed [118]. These include chimeric HAs, mosaic HAs, computationally optimized antigens (COBRAs), headless HAs, and mosaic nanoparticles [118]. All these approaches showed promise when tested in mice and ferrets, but only the chimeric HA and the headless HA, i.e., a mini-HA presented on ferritin particles, have undergone clinical trials [119]. A second ferritin particle vaccine (H1ssF), which displays a stabilized H1 stem immunogen on a ferritin particle, has now undergone a phase 1 clinical trial [120,121]. The results indicated that the vaccine elicited a neutralizing antibody response against H1 viral subtypes. The conclusion was that the HA stem vaccine could induce broadly neutralizing B cell clones.
In addition to the HA protein, the influenza virus has RNA polymerase proteins, neuraminidase (NA), nucleoprotein (NP), matrix protein (M1), membrane protein (M2), nonstructural protein (NS1), and nuclear export protein (NEP). Vaccines have been developed that target several of these proteins as well [118]. The M2 protein provides a good candidate for a vaccine due to its conserved sequence between human and avian strains. Since M2 is an ion channel that is required for viral entry into and egress from cells, Abs generated against M2 have the potential to be protective. Several methods have been employed for the development of vaccines containing M2. These include presentation on virus-like particles (VLPs), linking M2 to flagellin, and expressing it in a target cell with a DNA or mRNA vector. These vaccines have been successful in animals and have led to clinical trials, which have indicated that M2 vaccines would be helpful in a combination approach with other vaccines, such as those for HA. The internal viral proteins M1 and NP are produced in relatively high amounts in infected cells, and, therefore, peptides of these proteins are presented to T cells via HLA molecules. Vaccines of M1 and NP combinations have been prepared by expression of the proteins by modified vaccinia Ankara (MVA) or by DNA or mRNA expression vectors. These vaccines have been in large phase II clinical trials. Additionally, conserved peptides of the two proteins have been added to the standard yearly vaccine. The peptide-addition approach has undergone a large phase III clinical trial.

Within the last four years, the successful development of lipid-nanoparticle-encapsulated nucleoside-modified mRNA vaccines has created a new pathway for vaccine production. Attempts to design a universal influenza vaccine are being based on this methodology [122]. One of these vaccines targets conserved sequences of the HA stem, M2, NA, and NP [69]. These vaccines were prepared for each protein singly and in combination. In mice, all animals survived a 50× LD 50 challenge with an H1N1pdm virus strain when vaccinated with the combined-protein vaccine, but mini-HA, M2, and NP alone showed only partial protection. Vaccination with the NA alone gave full protection even at the highest viral dose challenge by the H1N1 pandemic strain. However, the NA vaccine alone could not protect against heterotypic viral challenge, such as that caused by cH6/1N5 or H5N8. Another approach was to create a multivalent nucleoside-modified mRNA vaccine containing conserved HA sequences from all 20 known A and B influenza viruses [123]. It was found that Abs generated by the vaccine in mice were able to recognize both variable and conserved HA epitopes. Protection against an H1 pandemic strain challenge was not obtained when the H1 HA sequences were removed from the vaccine. This result suggested that full protection could only be obtained if all 20 HA sequences were included in the vaccine.

Many flu vaccines are now undergoing clinical trials, and these vaccines have been discussed in a recent review by Hu et al.
[124]. This review describes vaccines on six vaccine platforms. New possible adjuvants are mentioned along with new optimal routes of immunization. Several virus-like particles (VLPs) that function to display influenza antigens are discussed. In addition to two VLP-based flu vaccines that are in clinical trials, there are 22 in preclinical development. In addition to the ferritin nanoparticle, there are other nanoparticle platforms. Among these are one based on a pentameric lumazine synthase particle (FLuMos-v1) and another on an OVX313 nanoparticle. Viral vector-based vaccines are being developed to express flu proteins. Possible viral vectors include adenoviruses, poxviruses, herpesviruses, vesicular stomatitis virus, and lentiviruses. Four of the five flu vaccines currently undergoing clinical trials have been developed on the adenovirus platform. It is worth mentioning that there are currently five mRNA vaccines in phase III clinical trials, and two of these are produced by Moderna and Pfizer/BioNTech.

Imprinting and vaccine boosting

A recent report has indicated that evidence of imprinting has been observed after repeated immunizations with the mRNA vaccine for SARS-CoV-2 [125]. It was found that booster immunizations with the bivalent vaccine, WA1/2020 and BA.5, produced a robust response against the original strain WA1/2020, but a significantly lower response to the BA.5 variants. Subclass characterization of the Abs generated by the bivalent mRNA vaccine indicated that the major Ab was of the IgG4 isotype. Therefore, during the course of repeated vaccinations for SARS-CoV-2, isotype switching had occurred from the initial IgG1 and IgG3 isotypes to IgG4. In terms of CD4 + T cells, this signifies skewing from a Th1 inflammatory response to a Th2 noninflammatory response. Another recent study concerning the flu vaccine examined DNA methylation patterns in a longitudinal analysis of yearly vaccinations [126]. In this study, it was found that repeated vaccinations lead to DNA methylation, primarily of genes associated with the signaling pathway for the pattern recognition receptor RIG-I (retinoic-acid-inducible gene I). The genes found to be methylated and their influence on RIG-I are shown in Figure 10.
Methylation of these genes resulted in increased function of RIG-I. The outcome of increased RIG-I activity was an increase in the transcription of type 1 IFNs (α/β). Analysis of the cells associated with the repeated flu vaccinations has shown that mast cells are involved in some way [126]. A ligand for RIG-I is 5' ppp-dsRNA, and it has been found that incorporation of this ligand, as an adjuvant, with a flu vaccine markedly increased the generation of neutralizing high-affinity antibodies in mice [127]. This ligand greatly enhanced activity within the germinal center by promoting Tfh cell induction. It is possible that repeated vaccinations are skewing responses toward Tfh2 function and transcription [65]. Activation of this process in humans will result in the production of IgG2 and IgG4 as well as the involvement of mast cells. A representation of how repeated vaccination affects the immune response is presented in Figure 11.
Concluding remarks

First exposure to the flu virus usually occurs during childhood and, in most cases, has a high impact on the body. During the infection, the body attempts to bring the infection under control by an interaction between the innate and adaptive immune systems. Depending on the individual's cytokine expression levels, the degree of inflammation generated by the infection can be rather high. One of the ways the body might attempt to bring down the inflammation and reduce the virus load is to initiate epigenetic modifications to the DNA. These modifications might be designed to trigger a stronger response by the innate immune system to infection by a similar virus. The outcome is that, during vaccination with drifted virus strains, a higher antibody response is observed against the first strain encountered, perhaps due to the epigenetic modifications. Further, the first viral infection will generate long-lasting T and B cell clones directed against conserved viral protein sequences. These clones will be activated during infection by a shifted virus, like the pandemic 2009 virus. Therefore, the imprinting phenomenon would be the result of the interplay between the innate and adaptive immune systems and an individual's genetic makeup.

Reference Search Method

The databases of OVID and PubMed were searched using general terms, such as "influenza virus infection" or "antibody response to virus infections". Papers found were then selected based on their relevance to the manuscript subject matter, and more subject-specific terms were searched, such as "CD4 Tfh memory cells and B cells". The search engine "Bing" was also helpful in that it provided a list of more recent papers when we downloaded a manuscript found using an OVID or PubMed search. All references were selected based on appropriateness to the subject matter of the present manuscript.

1. Isolation of Human IgG

Blood drawn into red-topped vacutainer tubes from BD Biosciences was allowed to clot at room temperature, followed by centrifugation (300× g for 10 min at 4 °C) to pellet the clot and obtain the serum. The serum was aliquoted into freezer vials (500 µL each) and stored at −70 °C.
Pre- and post-vaccination (12-14 day) serum IgG was isolated via the NAB G spin column system from Pierce Biotechnology according to the manufacturer's instructions. The concentration of IgG Ab was determined with a NanoDrop ND-1000 spectrophotometer (Thermo Scientific, Waltham, MA, USA).

2. Antibody Binding Analysis by Surface Plasmon Resonance

A BIA3000 instrument (Pharmacia Biotech, Uppsala, Sweden) was employed for the SPR measurements. Vaccines were attached by amine coupling at similar RU (4500-5000) to flow cells 2 (2006/2007 formulation), 3 (2007/2008 formulation), and 4 (2008/2009 formulation) of a Biacore CM5 chip. Flow cell 1 was a blank surface control. IgG (500 µg/mL) was injected (40 µL) over the CM5 chip at 10 µL/min. The running buffer was degassed and filtered HBS-EP, pH 7.5, plus 1 mg/mL soluble CM dextran, and the chip surface was regenerated with two pulses of 10 µL each of 10 mM glycine, pH 1.5, at a flow rate of 20 µL/min. Response unit (RU) readings were taken 15 s after completion of the sample injection and about 100 s later, before regeneration of the chip surface. All RU readings were programmed to be measured automatically by the instrument software. No manual measurements were made, to avoid error due to time discrepancies. A second CM5 chip contained the 2006/2007 vaccine on flow cells 2, 3, and 4 in nearly equal amounts (4000-5000 RU). This chip was made as a control to be sure that each flow cell was responding to the sample in a roughly equal manner.

3. TBNK Immunophenotyping

Blood samples were collected between 0915 and 1130 h in heparin or EDTA vacutainer tubes and were stored at room temperature no longer than 24 h before processing. Immunophenotyping for T, B, and NK cells was performed using the 4- or 6-color MultiTest reagents from BD Biosciences (San Jose, CA, USA). The whole blood was stained in Trucount tubes according to the manufacturer's protocol. Stained samples were run on BD FACSCalibur or FACSCanto instruments using MultiSet software (v1.1.1) or FACSCanto clinical software (v2.0), respectively.
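The SPR protocol above reads binding as response units (RU), with flow cell 1 as a blank reference surface. A minimal sketch of the post-processing this implies: subtract the blank-cell RU from each vaccine-coated flow cell and compare pre- versus post-vaccination IgG. All RU values below are hypothetical; the actual readings were taken automatically by the instrument software.

```python
# Sketch of SPR post-processing implied by the protocol above: subtract the
# blank flow-cell (FC1) response from each vaccine-coated flow cell, then
# compare pre- vs post-vaccination IgG binding. RU values are hypothetical.
def specific_ru(sample_ru: float, blank_ru: float) -> float:
    return sample_ru - blank_ru

pre  = {"fc2_2006/07": 210.0, "fc3_2007/08": 180.0, "fc4_2008/09": 150.0}
post = {"fc2_2006/07": 760.0, "fc3_2007/08": 430.0, "fc4_2008/09": 390.0}
blank_pre, blank_post = 35.0, 40.0  # FC1 (blank surface) readings

for cell in pre:
    s_pre = specific_ru(pre[cell], blank_pre)
    s_post = specific_ru(post[cell], blank_post)
    print(f"{cell}: pre={s_pre:.0f} RU, post={s_post:.0f} RU, "
          f"fold change={s_post / s_pre:.2f}")
```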
4. Memory Cell Analysis

In 2003, T cells were analyzed for the presence of CD45RA and CD62L using the four-color combination CD45RA/CD62L/3/8, according to the fluorochrome order FITC/PE/PerCP/APC, from BD Biosciences (San Jose, CA, USA), and for the presence of CD38 and HLA-DR using the four-color combination 4/CD38/3/HLA-DR. In 2005, four-color combinations were prepared for CD45RA/CD62L/3/8 and HLA-DR/CD45RO/3/4 from single-color reagents purchased from BD Biosciences (San Jose, CA, USA). Cells were stained as for the immunophenotyping procedure above and were run and analyzed under identical settings, between 2003 and 2005, using the CellQuest software (v3.3) on the FACSCalibur flow cytometer. Absolute cell counts obtained from the MultiSet software were employed to calculate the absolute cell counts of subsets bearing the CD45RA, CD62L, CD38, or HLA-DR surface markers. In 2017, for naïve and memory T cell determinations, the Human Naïve/Memory T Cell Panel from BD Biosciences (San Jose, CA, USA) was employed, and EDTA was used as an anticoagulant. Each tube for cell labeling contained 100 µL of blood and 5 µL each of the labeled antibodies Alexa Fluor 647 anti-human CD197 (CCR7), APC-H7 anti-human CD3, PerCP-Cy5.5 anti-human CD4, and FITC anti-human CD45RA. Tubes were vortexed briefly to mix and were incubated at room temperature for 30 min in the dark. After labeling, the red cells were lysed with 1× Pharm Lyse, and the leukocytes were washed 3× with phosphate-buffered saline. Labeled cells were then resuspended in 1 mL of 1× FACS lysing solution and run on the FACSCanto flow cytometer using the FACSDiva software (v6.1.3). Gating strategies for analyzing naïve and memory flow data are shown in the supplemental section.

Figure 2. An illustration of the immune network theory. Anti-idiotypic Ab is generated against an idiotope or paratope present in an Ab's Ag-binding fragment (Fab). Ab (Ab2) generated against the paratope might resemble the Ag's epitope. An anti-anti-idiotype (Ab3) may be cross-reactive to the original Ag. In this manner, a network of idiotypes is established as a strengthening of the immune response towards a pathogen and possibly regulating Ab responses.

Figure 3. Homosubtypic cross-reactivities of IgG serum Abs to different flu strains are revealed by SPR measurements. Pre- and post-13-15-day serum IgG Abs were isolated with a NAB spin column according to the manufacturer's protocol. IgG binding to three different vaccine preparations was evaluated with a BIA3000 instrument equipped with a CM5 chip coated with three different vaccines as described in the Section 4. (A) Pre-2006-vaccination IgGs at two time points followed by vaccination with the 2006/2007 vaccine. (B) Pre- and post-13-15 days for the 2007 vaccination.
Figure 4. Capability of several subjects from the 2006 study group to generate Ab against the 2010-2011 vaccine containing the pandemic strain. The Ab (IgG) was isolated from serum before and after (12-14 days) the 2006 vaccination. IgG binding was evaluated by surface plasmon resonance as described in the Section 4.

Figure 6. A depiction of the role of Tox2 in the generation of CD4 + Tfh cells. Bcl6 is the essential lineage transcription factor in Tfh cells. The role of the transcription factor Tox2 is to drive Bcl6 expression and Tfh development. Tox2 binds to loci associated with Tfh cell differentiation and function. Binding of Tox2 to the chromatin increases chromatin accessibility at loci associated with Tfh differentiation. The Tox2-Bcl6 axis creates a transcriptional feed-forward loop that promotes Tfh development, as discovered by Xu et al. [67]. In this scheme, IL-6 and IL-21 bind to antigen-stimulated CD4 + T cells and induce STAT3 activation. STAT3 then promotes expression of Tox2, which then promotes transcription and expression of Tox. It is the Tox that generates the feed-forward loop effect by its interaction with Bcl6. Interaction of Tox2 with the chromatin will promote the expression of factors associated with Tfh cells and block the expression of factors associated with other types of CD4 + T cells.
Figure 8. Inclusion of the pandemic strain in the 2010/2011 vaccine resulted in an increase in both CD4 + and CD8 + T cell numbers in the peripheral blood following the vaccination. This increase persisted until the pandemic strain was removed for the 2017/2018 flu season. Immunophenotyping was performed by the 6-color MultiTest method, and samples were run on the FACSCanto flow cytometer with FACSCanto clinical software v2.0 according to BD Biosciences protocols. (A) Subjects from the 2010 vaccination (N = 3). (B) Subjects from the 2016 vaccination (N = 2). (C) Subjects from the 2017 vaccination (N = 5). Significance was determined by a paired Student's t-test, "*" significant at p < 0.05.

Figure 9. An illustration of flu antigens from either infection or vaccination being processed through the macroautophagy cell machinery for presentation to CD4 + T cells. During infection the virus enters the cell, and the viral RNA genome is transcribed in the cell nucleus. The viral genes are then translated into proteins. Many of these virus proteins will be ubiquitinated and end up in a phagophore through interaction with LC3 and the autophagy receptor p62 (sequestosome), which is a ubiquitin-binding scaffold protein. This pathway is the same for phagocytized virus vaccine. The autophagy-related gene 8 protein (ATG8) orthologue, GABARAP, is one of several orthologues involved in the capture of the viral proteins for breakdown in the lysosome and peptide binding to the MHC II molecule. ATG4 is a family of cysteine proteases that function to process the orthologues from their pro-forms into their active forms. From the lysosome the viral peptides channel into the late-endosomal MHC class II compartment (MIICs) for further processing and loading onto the MHC II molecules. The path via the extracellular TLR receptor does not play a substantial role for flu peptides, but intracellular TLRs are stimulated by flu nucleotides.
Figure 10. Longitudinal flu vaccination increases methylation of genes associated with the pattern recognition receptor, RIG-1. DNA methylation decreases expression of these elements, and the outcome is increased RIG-1 function. Activation of RIG-1 through a vaccination will lead to triggering of a pathway that will result in phosphorylation of transcription factors IRF3 or IRF7 through TANK-binding kinase 1 (TBK1), an analogue of IKKε. The phosphorylated transcription factors will enter the cell nucleus and promote the transcription of genes for type 1 IFNs. Translation of IFN mRNAs into protein will lead to secretion of IFN-α and IFN-β from the antigen presenting cell.
Figure 11. Booster vaccinations skew the immune response toward type 2. APC activation of CD4+ T cells with the same epitopes appears to primarily lead to Th2 and Tfh2 phenotypes, which will promote B cell switching from IgG1 to IgG4. This is likely due to higher levels of the type 2 cytokine IL-4. The IL-4 and IL-21 will stimulate memory B cells to generate plasmablasts that will secrete IgG4 antibodies. The type 2 responses also include mast cells, basophils, and eosinophils, which are more involved in allergic responses. IgG4 Abs are useful for neutralization but weakly stimulate inflammation due to low affinity to FcγRs.
Peripheral blood was obtained before and 7-12 days after the flu shot from 11 consenting human donors (7 females and 4 males) in 2003 and from 18 consenting human donors (12 females and 6 males) in 2005. In 2003, donors were immunized by intramuscular injection with a vaccine of the following composition: A/New Caledonia (H1N1), A/Panama (H3N2), and B/Hong Kong; in 2005 the vaccine composition was A/New Caledonia (H1N1), A/New York (H3N2), and B/Jiangsu. The 2006/2007 season study had 24 females and 12 males enrolled; the average age was 47.5 yrs and the age range was 25 to 67 yrs. The 2007/2008 season study had 15 females and 10 males enrolled; the average age was 49 and the age range was 26 to 68 yrs. The 2010/2011 study had 3 individuals enrolled, 2 males and 1 female. The 2016/2017 study had only 1 male and 1 female, and the 2017/2018 study had 3 males and 2 females. The median age for these smaller studies was 45.1 and the age range was 22-72. All study participants were either students or New York State Department of Health employees. Study participants completed the questionnaire and signed the consent form as part of the NYS IRB-approved protocol 98-108. Blood samples were drawn by licensed phlebotomists. Immunizations were intramuscular, and the vaccine compositions are listed in Table
2024-04-10T15:12:25.588Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "afc33fb50a75efa141f453f5e1d317b56de0d54f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-393X/12/4/389/pdf?version=1712484732", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "defd5e02aa71583ecf2aae7cf1dff83782b7631d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
267679217
pes2o/s2orc
v3-fos-license
Strategy use and its evolvement in word list learning: a replication study Spontaneous strategy employment is important for memory performance, but systematic research on strategy use and within-task evolvement is limited. This online study aimed to replicate three main findings by Waris and colleagues in Quarterly Journal of Experimental Psychology (2021): in word-list learning, spontaneous strategy use (1) predicts better task performance, (2) stabilizes along the task, and (3) increases during the first two task blocks. We administered a shortened version of their original real-word list-learning task to 209 neurotypical adults. Their first finding was partly replicated: manipulation strategies (grouping, visualization, association, narrative, other strategy) but not maintenance strategies (rehearsal/repetition, selective focus) were associated with superior word recall. The second finding on the decrease in strategy changers over task blocks was replicated. The third finding turned out to be misguided: neither our nor the original study showed task-initial increase in strategy use in the real-word learning condition. Our results confirm the important role of spontaneous strategies in understanding memory performance and the existence of task-initial dynamics in strategy employment. They support the general conclusions by Waris and colleagues: task demands can trigger strategy use even in a familiar task like learning a list of common words, and evolution of strategy use during a memory task reflects cognitive skill learning. Introduction Strategy use is an important aspect of memory and learning that contributes to individual differences in performance [1,2].Earlier studies have indicated that spontaneous strategy employment is associated with higher performance in different memory tasks [3][4][5][6][7][8][9][10][11][12].However, the within-task dynamics of memory strategy use remain largely unexplored.Besides retrospective (i.e. after the memory task has been completed) strategy reports, also concurrent block-by-block or trial-by-trial strategy reports have been employed [13][14][15], but in most cases such data have not been used to examine possible changes in strategy use within a memory task.A few exceptions exist: Delaney and colleagues [16,17] reported spontaneous switching from more shallow self-reported strategies such as rote rehearsal to deeper strategies (e.g.making a story with to-be-remembered words) with increased practice with word lists.Their studies concerned specific experimental memory paradigms tapping directed forgetting and spaced encoding.Moreover, Unsworth et al. [18] found that in their word list recall task, high-performing participants exhibited more flexible strategy use where a less effective strategy employed on the first word list was replaced by a more effective one on the second list. 
Establishing spontaneous evolvement of strategy use during a memory task is of interest as it links task performance to the general framework of cognitive skill learning.The skill learning view presupposes that performance on a new complex cognitive task is an adaptive process that can be divided into three major stages [19].At the task-initial Formation stage, the metacognitive system establishes strategies and behavioural routines for successful task performance.At the Controlled Execution stage, these processes are effortfully put into use.This is followed by the final stage, Automatic Execution, during which performance becomes increasingly more automatic and modular.This means that the initial metacognitive and executive load becomes minimized and these resources become available to other tasks.In two recent studies, Waris et al. [11,12], using open strategy reports that should avoid the risk for reactivity effects in commonly used multiple-choice strategy measures [13], found within-task evolvement of strategy use in common episodic (word list learning) and working memory (n-back) tasks that is in line with the general cognitive skill learning framework.Such evolvement is of theoretical and practical interest as it reveals task-internal dynamics, taking place on the time-scale of minutes, which are not observable with the commonly employed summative memory test scores.As this phenomenon is only scarcely studied, we wanted to replicate and extend the results of Waris et al. [11] concerning the spontaneous strategy use in word list learning, an episodic memory task. Waris et al. [11] employed two word list learning tasks, one with real words and the other with pseudowords, to examine the temporal pattern of strategy use and its role in recall performance in these tasks.Strategy use was probed by self-reports after each task block.In the present study, we used only the real word version that due to its relative familiarity to adult participants was critical to the hypotheses Waris et al. [11] tested.The central findings by Waris et al. [11] concerning the real word learning task-the findings we aimed to replicate-were as follows.(1) Strategy use was positively related to episodic memory performance.Based on earlier research [8], Waris et al. 
[11] recoded the original strategy types into three major categories, namely No strategy, Maintenance (including the primary strategies of rehearsal and repetition)/Other strategy, and Manipulation (the memoranda were mentally manipulated for example through grouping, association or visualization).When using this categorization, they found that Manipulation strategy users performed better than Maintenance/Other strategy users across the task blocks.When compared to No strategy, Manipulation strategy users exhibited better performance and a steeper learning curve.The same pattern was observed when comparing No strategy to Maintenance/Other strategy.Another strategy variable, the level of strategy detail (how many strategy-related details were given in the open-ended strategy reports), was also associated with objective performance across the task.(2) The proportion of participants changing from one strategy to another between the blocks went down during the first block transitions.(3) Self-reported strategy use (dichotomized as yes/no) increased during the first two task blocks.The results (2) and (3) on the word learning task, indicating task-initial dynamics in strategy employment, were taken as support for the idea that memory task performance entails cognitive skill learning.Moreover, Waris et al. [11] concluded that familiarity with a task (learning a list of real words is a commonplace compared with pseudoword learning) does not necessarily lead to employment of readily available, stable strategies right from the start, as the cognitive routine framework proposed by Gathercole et al. [20] would suggest.Rather, Waris et al. [11] took this as support for the alternative task demand hypothesis, according to which memory demands posed by the task triggered task-initial strategy adjustment.As these effects were found only in a post hoc analysis focusing on the initial task blocks, it is important to try to replicate them. In sum, this study attempted to replicate the following three central findings reported by Waris et al. [11]: strategy use in an episodic memory task (word list learning) predicts better objective task performance, strategic choices become more stable when the task advances, and strategy use increases during the first two task blocks.If these three findings are replicated, they would provide further support to the task demand hypothesis, according to which strategy generation is triggered by task difficulty rather than novelty (after all, learning a list of words can be considered as a rather familiar memory task for adults).At a more general level, successful replication would also provide support to the cognitive skill learning framework [19] by showing that even within the short time-span of a memory task, adaptive evolvement of strategy use takes place. Participants and procedure This article received results-blind in-principle acceptance (IPA) at Royal Society Open Science.Following IPA, the accepted Stage 1 version of the manuscript, not including results and discussion, was preregistered on the OSF (osf.io/j8vpy).This preregistration was performed after data analysis. Ethics clearance for the study was obtained from the Ethics Board of the Departments of Psychology and Logopedics at the Åbo Akademi University, Turku, Finland.The data of the Word List Learning task that was used in the current study were collected as a part of a large-scale online experiment that has been published in a separate paper [21]. As in Waris et al. 
[11], recruitment took place via the crowdsourcing site Prolific (https://www.prolific.co/).Participants remained anonymous and received their monetary compensation via Prolific.The present study concerns only neurotypical participants, but the original study described in Jylkkä et al. [21] included also adults with diagnosed attention deficit hyperactivity disorder (ADHD) which explains the large initial screening samples.All participants were 18 to 50 years of age, lived in the UK and had English as their first language.Here we focused only on the neurotypical participants' performance and strategy use in the Word List Learning task that is described below. The pre-registration of the larger project (https://osf.io/m7c9a)aimed at 250 neurotypical individuals who had completed the study.The motivation was that in this range correlations, an important analytical aspect for the larger project, stabilize [22].While this decision did not take into account the strength of evidence obtained by Waris et al. [11], it should be noted that the present sample is twice the size of their sample of 101 individuals.To examine this further, we conducted additional analyses by altering the priors to assume large effects (r = 1) or small effects (r = 0.25).These were performed on our main analyses reported in the Results section, and the results consistently led to the same conclusions.This convergence across various prior assumptions reinforces the robustness of our findings and provides assurance that our conclusions would not be driven solely by a specific choice of priors. The data were collected between August and December 2021.Data collection proceeded in three stages, with two short prescreening sessions and the actual study (figure 1 and Jylkkä et al. [21]).The aim of the first prescreening was to identify a large enough sample of adults with ADHD in the Prolific participant pool by posing a question on possible ADHD/ADD diagnosis and asking to fill out the Adult ADHD Self-Report Scale Part A [23].At the next step, non-ADHD participants on a first-come-first-serve basis took part in the second prescreen (N = 1513) that probed basic demographics (age, gender, education, level of income); medical history (e.g.diagnosis of bipolar disorder, severe depression, psychosis, schizophrenia, or neurological illness); colour vision and eyesight; alcohol consumption with AUDIT questions 1-3; use of nicotine products; and use of other possible psychoactive substances.Furthermore, the participants filled out ASRS part B and DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure-Adult [24], and some further questionnaires.The length of the second prescreening was about 10 min. Next, 293 suitable participants took part in the study proper, again on a first-come-first-serve basis.The inclusion criteria were as follows: normal or corrected-to-normal vision; no colour blindness; no neurological illness that affects the participant's current life; no neurodevelopmental disorders; never diagnosed with bipolar disorder, severe depression, psychosis, or schizophrenia; and no self-reported problem with substance abuse.Additionally, they had to fulfil the following criteria in the DSM-5 Symptom Measure: no reported suicidality (score 0 on item 11) and sum scores less than three in the domains depression, mania, and anxiety.This means symptoms rated as 'mild', or symptom occurrence not more often than during 'several days' within the last 2-week period. 
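The robustness check described above, re-running the analyses under priors assuming large (r = 1) or small (r = 0.25) effects alongside the default Cauchy scale, can be sketched as follows. This assumes the Python pingouin package's JZS Bayes factor for a t-test; the t value and group sizes are placeholders, not values from the study:

```python
# Sensitivity of a JZS Bayes factor to the Cauchy prior scale r.
# Requires pingouin; t_value, nx and ny are illustrative placeholders.
import pingouin as pg

t_value, nx, ny = 3.2, 104, 89  # hypothetical t statistic and two group sizes

for r in (0.25, 0.707, 1.0):
    bf10 = float(pg.bayesfactor_ttest(t_value, nx, ny, r=r))
    print(f"r = {r:<5} -> BF10 = {bf10:.2f}")
```

If the qualitative conclusion (evidence category) stays the same across the three scales, the choice of prior is not driving the result, which is the convergence the authors report.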
After considering all inclusion and exclusion criteria, missing data in our strategy and performance variables on the Word List Learning task (we performed listwise exclusions, so that participants having one or more missing values were excluded), and univariate outlier screenings, the final sample size of the study group in the current analyses was 209. Thus, the present sample of neurotypical adults was twice the size of the group to which the present results were compared, i.e. the Repeated Strategy Queries group (n = 101) in Waris et al. [11]. That group in Waris et al. [11] was quite similar to ours in terms of sex (71% versus 75% females), average age (34.8 years versus 31.8 years), and distribution of educational attainment (66.1% versus 65.4% with bachelor's degree or higher). Table 1 depicts the background characteristics of the present final sample.

Test sessions in the study proper
The actual study encompassed five separate online assessment sessions. It included three prospective memory tasks, questionnaires, a 5-day diary on everyday prospective memory lapses, and the Word List Learning task. One of the prospective memory tasks, the video game EPELI [21], was always in the first session, and the diary task was presented in each session. The presentation of all the other tasks, including Word List Learning, was counterbalanced between the participants through random allocation of the participants into one of the four task sets. The participants took the five sessions on weekdays. There was at least a twelve-hour interval between sessions, and the whole study was finished within 14 days. The duration of each session was about 40 min, and the total study duration was approximately 3 h and 20 min. Completion of EPELI in the first session was a prerequisite for partaking in the other sessions.

Word List Learning task
The present task of interest is an episodic memory and learning task adopted from Waris et al. [11]. Thus, it represents a commonly employed list learning format similar to, for example, the Rey Auditory Verbal Learning Task and enables the tracking of a participant's learning curve. It consists of an 18-word list of common nouns that Waris et al. [11] selected from the MRC Psycholinguistic Database with the following search criteria: (1) nouns according to the SOED database [25], (2) 4-6 letters long, (3) consisted of 1-3 syllables, (4) had a Kučera-Francis [26] written frequency above 0, and (5) had concreteness and imageability ratings of 558 or more (i.e. at least 1 s.d. above the mean). This resulted in a pool of 444 words that was next narrowed down to 283 high-frequency words as defined by Zipf frequency values of four or more [27] by using the SUBTLEX-US corpus [28]. Next, Waris and colleagues randomly selected two sets of 18 common nouns from this final word pool. In this study, we used one of the two lists (table 2). Our task was in all respects identical to the one used by Waris et al. [11] with one exception: instead of five presentation/free recall cycles of the 18 stimuli, the present shortened version included three cycles or task blocks. Shortening of the task also avoided any risk of ceiling effects: figure 1a in Waris et al. [11] shows that after three blocks, their participants' average performance was ca 13 words, well below the maximum of 18 words. The participants' task was to try to memorize as many words as possible. The same 18 words were presented in each of the three blocks, but for each presentation, the order of the items was randomized. The structure of a task block is illustrated in figure 2.
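The stimulus selection pipeline described above is easy to express as a filter over a rated word pool. A minimal sketch, in which every candidate word and all ratings are invented for illustration (in practice the Zipf values would come from SUBTLEX-US, e.g. via a resource such as the wordfreq package):

```python
# Filter a noun pool with the published criteria: 4-6 letters, 1-3 syllables,
# Zipf frequency >= 4, concreteness and imageability >= 558; then draw 18 words.
import random

candidates = [
    # (word, syllables, zipf, concreteness, imageability) -- made-up ratings
    ("table", 2, 4.9, 602, 610),
    ("river", 2, 4.6, 585, 601),
    ("idea",  2, 5.1, 310, 402),   # fails the concreteness/imageability cut-off
    ("axiom", 3, 2.8, 420, 377),   # fails the Zipf frequency cut-off
]

pool = [
    word for (word, syl, zipf, conc, imag) in candidates
    if 4 <= len(word) <= 6 and 1 <= syl <= 3
    and zipf >= 4 and conc >= 558 and imag >= 558
]

random.seed(1)
word_list = random.sample(pool, k=min(18, len(pool)))  # random draw of (up to) 18 nouns
print(word_list)
```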
The words were shown on-screen one at a time for one second, separated by an inter-stimulus interval of one second during which the screen went blank. After the final word in a list had been shown, the participants were asked to solve a distractor task that appeared on-screen. The distractor tasks were multiple-choice arithmetical tasks (e.g. 9 + 6 − 7 + 3 = ?) intended to minimize the contribution of working memory to recall performance. After the distractor task, the response screen was displayed, and the participants were instructed to type in the words they could recall one at a time in any order. Non-letter characters or spaces were permitted before or after the word, but otherwise the words had to be typed correctly. The dependent variable was the number of correctly recalled words per list. After recalling words for a task block, the participants wrote their response to the following open-ended strategy query: 'Please describe in as much detail as possible how you solved the previous word list task (not the math task). That is, how did you try to memorize the words?'.

Strategy coding for the Word List Learning task
Two independent raters coded each open-ended strategy report on three variables: the first reported strategy type (the primary strategy), the total number of specific strategy details given (be they for one or more strategy types), and the total number of strategy types reported. For each participant, there were 3 separate strategy reports for the Word List Learning task, i.e. one after each block. The first reported strategy was coded into one of 8 different strategy categories based on our earlier work on the classification of open-ended memory strategy reports [7,8,12], the same system that Waris et al. [11] employed. The categories used for coding Word List Learning task strategy responses were the following: No explicit strategy use, Rehearsal/Repetition, Grouping, Association, Visualization, Selective focus, Narrative and Other strategy. (The Narrative category was created afterwards, as a second look at the unclassified strategy responses indicated several instances of an explicit mention of creating a story in order to remember the words.) Electronic supplementary material, table S1, gives the detailed coding scheme with concrete examples. Besides these primary strategy categories, we followed Waris et al. [11] by examining strategy use with a broader three-category classification of strategies taken from Fellman et al. [8]. Here, the strategies Rehearsal/Repetition and Selective focus were classified as a Maintenance strategy, whereas Grouping, Visualization, Association, Narrative and Other strategy were classified as a Manipulation strategy (the memoranda are manipulated in one way or another to facilitate recall). No strategy remained a single class of its own. After investigating the qualitative features of the strategy reports coded as Other strategy, we decided to lump Other strategy together with Manipulation instead of Maintenance, contrary to what Waris et al. [11] had done. This decision was made because the great majority of those reports included manipulation of the to-be-remembered material, such as using the words for forming sentences or linking the words together (e.g. 'remembered a sentence in my head to remember as many words as possible'; 'I tried to make them link together'). Also, in line with Waris et al. [11], we coded the level of detail (LoD) in the strategy reports. A detail was defined as a specific strategy feature in the response and one point was given for each mentioned feature. In contrast to the Waris et al.
[11] study, a detail point was not given for a reported specific strategy, but only for a reported specific feature of a strategy (see the coding scheme in the electronic supplementary material, table S1), and we left out no strategy users from the analysis. This was done to prevent a confound between the variables primary strategy use and the level of detail. The third strategy measure that we coded, the total number of strategy types in a strategy report for each block, was a new variable that Waris et al. [11] had not used. The idea was that this variable could provide further information on participants' abilities to spontaneously generate and apply memory strategies. Following the independent coding, we assessed the interrater reliability for the strategy coding in the Word List Learning task by unweighted kappa (κ) for the first reported strategy type and with linearly weighted kappa (κw) for the total number of specific strategy details and the total number of different strategy types used. The data for these analyses comprised all participants including the ADHD group (total n = 328; the independent coders M.L. and T.E. were blinded to the group membership of the participants). The kappa coefficient was κ = 0.74 for the first reported strategy type, κw = 0.80 for the total number of specific strategy details and κw = 0.77 for the total number of different strategy types used. As all the reliability coefficients suggested substantial agreement, the raters continued with a subsequent consensus meeting where the discrepancies in their codings were discussed and solved.

Bayesian analytical approach
Regarding the three main findings by Waris et al. [11], our analytical approach was identical with one exception described below. As in their study, we employed Bayes factors (BFs) using the 'BayesFactor' package [29] on R version 4.0.0 [30], but also JASP version 0.17.1. With this approach, the evidence either for the null hypothesis (H0) or for the alternative hypothesis (H1) is contested on a continuous scale. A BF of 1 indicates perfect ambiguity, whereas a BF above or below 1 provides evidence for H1 or H0, respectively. Regarding the interpretation of the BFs, the guidelines put forth by Kass & Raftery [31] were followed: BFs between 1 and 3 represent 'weak evidence', BFs between 3 and 20 show 'positive evidence', BFs between 20 and 150 indicate 'strong evidence', and BFs greater than 150 are taken as 'very strong evidence'. When relevant, we also report estimates of between-group mean differences using a posterior distribution with 10 000 iterations coupled with their 95% credible intervals formed from the highest density interval (HDI) distribution. In each BF analysis, we employed the default prior setting (i.e. a Cauchy distribution using a scaling factor r = 0.707). As the Word List Learning task consists of consecutive blocks, linear mixed-effects (LME) [32] models were used whenever possible. In the present models, participants were treated as the crossed random effect, and Block, coded as a linear contrast, always represented one of the fixed effects. The one exception to the analyses conducted by Waris et al.
[11] concerned their second finding, namely increased stability of strategic choices when the task advances. As their finding was purely descriptive, here we chose to test with a Bayesian binomial test whether the number of strategy changers decreased from the first to the second block transition. Moreover, as noted above, we coded and analysed a new strategy-related variable, the total number of strategy types in a strategy report for each block, to examine possible changes in that variable across the three blocks. Outlier analysis was conducted in the same way as in Waris et al. [11]. We screened task performance for univariate outliers using the summed recall score across the three blocks as the dependent variable. Univariate outliers were defined as scores three times the interquartile range above or below the first or the third quartile. One such outlier was identified and excluded.

General findings on learning progress and strategy use in the task
As expected, the results showed very strong evidence for a main effect of block (Mdiff = 2.46, 95% HDI = [2.31, 2.60], BF10 > 150 ± 0.69%), indicating that recall performance improved across the task blocks. As depicted in figure 3, the improvement pattern was more or less linear across the blocks. Strategy use was prevalent in the task, and a clear majority of the participants reported using a strategy already in the first block (table 3).

The first hypothesis: strategy use is associated with objective memory performance
Primary strategy type and test performance
The first main finding by Waris et al. [11] that we attempted to replicate was that strategy use in word list learning is linked to better objective task performance. For this purpose, we grouped the participants based on their open-ended strategy reports. Similarly to Waris et al. [11], we used the broader categories described above (Maintenance, Manipulation/Other strategy, No strategy). The participants were grouped according to what primary strategy category they had reported most frequently during the three blocks. If a participant reported using different strategies an equal number of times, the most sophisticated strategy was chosen (No strategy < Maintenance < Manipulation/Other [8]). This resulted in the following group sizes: No strategy, n = 16; Maintenance, n = 89; Manipulation/Other, n = 104. LME models were computed to test whether strategy type was associated with recall performance across blocks. We performed pairwise comparisons between each strategy type (table 4 and figure 4). There was very strong evidence for a main effect of strategy type between Manipulation/Other strategy and Maintenance, indicating better performance in users of the former category across the task. This was true also for the comparison between Manipulation/Other strategy and No strategy. In turn, the Maintenance versus No strategy comparison did not reveal evidence for a difference. None of the comparisons showed evidence for a strategy type × block interaction. In sum, the analyses described above replicated the finding by Waris et al. [11] that the use of a Manipulation strategy is associated with superior Word List Learning performance. However, their finding on a performance difference between Maintenance strategy and No strategy users was not replicated.
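The univariate outlier rule used above (scores more than three interquartile ranges below the first or above the third quartile of the summed recall score) can be written out directly; a minimal numpy sketch with invented sums:

```python
# Flag univariate outliers: values beyond Q1 - 3*IQR or Q3 + 3*IQR.
import numpy as np

summed_recall = np.array([31, 40, 36, 42, 38, 5, 35, 44, 39, 37])  # invented 3-block sums

q1, q3 = np.percentile(summed_recall, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 3 * iqr, q3 + 3 * iqr

outliers = summed_recall[(summed_recall < lower) | (summed_recall > upper)]
print(f"bounds = ({lower}, {upper}); outliers = {outliers}")  # flags the extreme score 5
```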
Total number of strategy types and test performance (table 5)
We observed only weak evidence for a main effect of the number of strategy types on memory performance (Mdiff = 0.39, 95% HDI = [0.09, 0.68], BF10 = 1.52 ± 1.23%), indicating that this variable was not associated with Word List Learning scores. There was also positive evidence against an interaction between the number of strategy types and block (Mdiff = −0.10, 95% HDI = [−0.37, 0.17], BF01 = 14.29 ± 1.03%). As figure 6b indicates, most strategy users reported only a single strategy.

The second hypothesis: strategy changes become less frequent across the task blocks
This analysis concerned the number of participants who changed their primary strategy type when moving from one block to another. The hypothesis based on Waris et al. [11] was that this number is higher in the first block transition (block 1 → block 2) than in the second one (block 2 → block 3), as strategy use should begin to stabilize quickly after the initial strategy generation stage. We cross-tabulated the participants and found that the number of participants who changed their strategy in the first but not in the second block transition was 54. This contrasted with 29 participants who exhibited the opposite pattern. A hundred participants did not change their strategy in either the first or the second block transition, and 26 changed strategy in both transitions. A Bayesian binomial test with JASP (version 0.17.2.1) provided positive evidence for the hypothesis that the number of strategy changers decreased from the first to the second block transition (BF10 = 5.96). Thus, the finding of a decreased rate of strategy changers across the task blocks was replicated, albeit the present shortened version had three task blocks and thus only two block transitions instead of five blocks and four transitions. The percentages of all strategy changers in the two block transitions are shown in figure 5.

The third hypothesis: strategy use increases during the first two task blocks
Following the post hoc finding by Waris et al. [11], we hypothesized that strategy use increases during the first two task blocks. No evidence for a main effect of block was observed (Mdiff = 0.04, 95% HDI = [−0.01, 0.07], BF01 = 2.00 ± 1.21%) (figure 6a). Thus, this finding by Waris et al. [11] was not replicated. However, one should note that their finding was based on an analysis that included the first two blocks of both the real word and the pseudoword task. Moreover, their fig. 1b indicates that the increase in strategy use was less marked in the real word task, even though there was no evidence for a task × block interaction. Thus, to obtain a direct comparison, we re-analysed the block 1-block 2 data of Waris et al. [11] by including only the real word task. This analysis gave only weak support for an increase in strategy use (BF10 = 1.45). Thus, the present hypothesis was misguided as far as only the real word learning task is concerned. Figure 6a suggests a slight average increase in strategy use from block 1 to block 2, followed by a similar decrease at block 3. The pattern for these three blocks is very similar to that in fig. 1b in Waris et al. [11].
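The Bayesian binomial test reported above can also be reproduced from scratch. A minimal sketch that contrasts the 54 first-transition-only changers with the 29 second-transition-only changers against a null proportion of 0.5, using a uniform Beta(1, 1) prior under the alternative (JASP's exact default settings and sidedness may differ slightly):

```python
# Bayes factor for k successes in n Bernoulli trials against theta = 0.5,
# with a uniform Beta(1, 1) prior on theta under H1 (two-sided version).
from math import exp, log
from scipy.special import betaln

k, n = 54, 54 + 29  # first-only vs second-only block-transition changers

log_m1 = betaln(k + 1, n - k + 1)  # marginal likelihood under H1 = Beta(k+1, n-k+1)
log_m0 = n * log(0.5)              # likelihood under H0: 0.5 ** n

bf10 = exp(log_m1 - log_m0)
print(f"BF10 = {bf10:.2f}")  # approx. 6, 'positive evidence' on the Kass & Raftery scale
```

With these counts the sketch lands close to the BF10 = 5.96 reported above.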
Finally, we also examined whether the number of primary strategies showed any change across the three task blocks (figure 6b). The results revealed weak evidence for the null hypothesis (Mdiff = 0.06, 95% HDI = [0.02, 0.11], BF01 = 1.58 ± 1.69%), indicating that the number of strategies remained unchanged over the three task blocks. When inspecting the same relationship by excluding the third block, we again observed weak evidence for the null hypothesis (Mdiff = 0.10, 95% HDI = [0.01, 0.20], BF10 = 1.04 ± 1.24%), indicating no evidence for an increase in the number of strategies from the first to the second block. As noted earlier, most strategy users reported only one strategy, so the variability on this measure was limited.

Discussion
Given the scarcity of detailed studies on spontaneous strategy use during episodic memory task performance, this study set out to replicate the main findings of Waris et al. [11] who examined the strategy-performance relationships and the evolvement of strategies in a Word List Learning task. Their three main findings were: strategy use is associated with better objective task performance, strategy choices become more stable during the first block transitions, and strategy use increases during the first two task blocks. By using a shortened form with one of the stimulus sets employed in the original study, we managed to replicate the first finding partly and the second finding fully. The third finding turned out to be misguided as the original analysis was based on a summative effect of both word and pseudoword learning, not on the real word condition that we employed. In what follows, we discuss these findings and their implications in more detail.
The finding that spontaneous strategy use is associated with enhanced episodic memory performance has been observed also in earlier studies [15,33,34] and highlights the importance of strategies in understanding factors that underlie the considerable inter-individual differences in memory performance. More specifically, the present results replicated the finding by Waris et al. [11] that the family of strategies involving manipulation of memoranda in mind was related to superior word recall as compared to Maintenance strategies (Rehearsal/Repetition and Selective Focus) or to No strategy. The Manipulation strategies, here including Grouping, Visualization, Association, Narrative and Other Strategy, arguably represent the cognitively most advanced mnemonics in the present repertoire, requiring executive control. Their spontaneous use in Word List Learning is also quite common. In Waris et al. [11], they were used in the first three blocks by 46% (block 1), 37% (block 2), and 35% (block 3) of the participants (to make their percentages directly comparable to ours, Other strategy users are included in these figures). In the present replication study, the corresponding percentages were even higher (55% for block 1, 47% for block 2, and 43% for block 3), pointing to a certain variability in frequencies of strategic choices from one adult sample to another. One finding by Waris et al. [11] that did not replicate was that in their study, the use of Maintenance strategies was linked to better word recall than No strategy. Thus, the benefit from using these cognitively simpler strategies is less certain. In a review concerning working memory task performance, Oberauer [35] also questioned the facilitatory role of simple rehearsal.
The other two findings of Waris et al.
[11] that we attempted to replicate concerned dynamic changes in strategy use during the rather short time span it took to perform the Word List Learning task. The first one of these findings that we managed to replicate was that strategic choices became more stable when the task advanced. In Waris et al. [11], this was only a descriptive finding, but here we could ascertain it statistically by comparing the rates of strategy changers in the first versus the second block transition. The decrease in the rates of strategy changers during block transitions speaks for a dynamic process where task-initial strategy generation and adjustment is followed by a more stable use of a chosen strategy. Regarding Word List Learning, spontaneous strategy shifts from more superficial initial strategies to deeper and more effective ones have been reported in some earlier studies [16-18]. A possible counterargument to the present interpretation is that task-initial strategy changes could be a result of the task structure where the same items are presented repeatedly. However, the clustering of strategy shifts especially in the task-initial phases seems to be a more general phenomenon, as we have found it also in a prospective/episodic memory task with constantly changing task items [36], and in a continuous working memory updating task [12]. Theoretically, initial strategy shifts followed by a gradual strategy stabilization in all these different memory tasks fits well with the cognitive skill learning view [19]. According to this view, demanding non-routine tasks activate the metacognitive and executive control systems needed for strategy generation and implementation/monitoring, after which task routine starts to gradually develop [19]. Being a general framework, the cognitive skill learning view does not take a stance on the time span of these hypothetical stages in specific tasks, but the results from three different memory paradigms, the Word List Learning task [11], a complex virtual reality prospective/episodic memory task [36], and a continuous n-back working memory updating task [12], suggest that the most intensive strategy generation and adaptation stage is rather short-lived, being most prominent in the first task blocks. These memory tasks are, after all, quite straightforward (albeit cognitively demanding), and it may very well be that strategy generation and adaptation would take longer for example in complex problem-solving tasks.
The third finding by Waris et al. [11], increase in strategy use during the first two task blocks, failed to replicate. However, as we describe in the Results section, this hypothesis turned out to be misguided, as a post hoc analysis of the original data showed that the effect was not present in the real word condition that was the focus of this replication attempt. Thus, one might rather turn the argument the other way round and note that the lack of positive evidence for such an increase was replicated. A plausible reason for a lack of increase in strategy use is the fact that the percentage of strategy users was high already in the first real word task block, leaving limited room for a further increase. The percentages of strategy users in the first three blocks were 85%-91%-85% in Waris et al.
[11], and 89%-92%-89% in the present replication study. As such, these values are quite close to each other. In the block-by-block strategy analysis of the working memory updating task where an initial increase in strategy use was reported [12], there was much more room for improvement, as the percentage of strategy users in the first task block was about 50%, going up to ca 65% in the second block in both n-back experiments. Regarding the new strategy variable that we included in this study, the number of strategy types employed, the rates were at a low (mostly single strategy) and stable level throughout the blocks. All in all, a possible increase in strategy use at the task-initial stages remains an elusive phenomenon that may require specific task conditions to appear. As compared to strategy change discussed above, the strategy increase variable is also much more limited in scope. Strategy change encompasses three types of changes, shifts between different primary strategy types, changes from a primary strategy to no strategy, and changes from no strategy to a primary strategy, while strategy increase concerns only the last type of change.
An important theoretical aim of the study by Waris et al. [11] was to test the cognitive routine framework proposed by Gathercole et al. [20]. This framework is closely linked to cognitive skill learning discussed above and was developed to account for the very limited transfer observed in working memory training studies (e.g. [37,38]). However, it has implications for memory task practice in general. According to this framework, repeated practice with a memory task leads to development of new cognitive routines (i.e. strategies) under the condition that the trained task is unfamiliar. In turn, a familiar memory task like the present one where participants learn a list of high-frequency real words should not trigger strategy development. Gathercole et al. [20] note that for a familiar task like verbal serial recall, novel material-specific strategies would be adopted only 'under conditions of extensive and prolonged practice' (p. 23), which is very different from the present single-session setup. Waris et al. [11] contrasted the cognitive routine framework to a competing hypothesis that they coined as the task demand hypothesis. This alternative hypothesis states that besides novelty, also task demands play a role in the spontaneous adoption of strategies. Thus, strategies can be generated also when faced with a familiar task if the task is demanding enough. This is partly in line with Belmont & Mitchell [39] who proposed that cognitive tasks which participants perceive as moderately difficult (not easy or very difficult) are more likely to elicit strategic behaviour. Considering these two hypotheses, the present findings are in line with the task demand hypothesis, as the rather demanding 18-item real word learning task triggered frequent strategy use right from the start and exhibited strategy adjustments (changes of strategy) especially during the first two task blocks. However, as noted by Waris et al. [11], the task demand hypothesis is complementary rather than opposite to the cognitive routine framework. Thus, one can conclude that both novelty and task demands affect strategy use when performing a cognitive task.
The same general limitations as those of Waris et al. [11] concern also the present study. Firstly, both are online studies conducted with anonymous participants, with no control over the conditions under
which the participants took the task. However, the instructions emphasized that the participants should ensure that they work alone in a quiet space. Previous studies have also shown a close correspondence between the cognitive task effects obtained in online and laboratory experiments (e.g. [40,41]). Secondly, strategy information was based on introspective reports that may not cover all relevant strategic behaviours that the participants employed. Nevertheless, the fact that self-reported strategies were strongly associated with actual recall performance speaks for their relevance in strategy research. In future studies, one could look for opportunities to gather simultaneously both subjective and objective strategy data (e.g. degree of semantic clustering of recalled words in free recall). Thirdly, it is worth noting that the present data on the relationships between strategy use and memory performance are correlative. However, performance improvements in previous studies where participants received memory strategy instructions speak for a causal relationship between strategy employment and performance (e.g. [3,4,7-10,42-44]).

Conclusion
In summary, the present study partly replicated the finding by Waris et al. [11] that self-reported spontaneous strategy use is associated with superior recall performance when learning real words. This was shown here for the Manipulation strategies but not for the simpler Maintenance strategies. Moreover, we fully replicated the finding that changes in strategy use clustered especially in the first two task blocks. These findings were taken as support for the view that when faced with a demanding memory task, even a familiar one, adult participants are prone to use and adjust mnemonic strategies right from the start. Thus, complex memory tasks should not be considered as straightforward capacity measures, because the use or non-use of an effective strategy can make a considerable difference in task outcomes. This harks back to the notion by Atkinson & Shiffrin [45] in their seminal 1968 paper: any theory of human memory that aims at generality must include control processes such as rehearsal, coding and search strategies. The more general interpretation of the present findings is that performance on a demanding cognitive task represents cognitive skill learning where task-specific strategies are adopted. In memory tasks that are quite straightforward, the first stages of cognitive skill learning, strategy generation and adaptation, appear to be short-lived. These hidden task-initial dynamics of cognitive task performance become visible only when detailed block-by-block analyses of strategy use are employed.

Figure 1. Flowchart of the data collection procedure. WLL = Word List Learning task.
Figure 2. The structure of a task block in the Word List Learning task. Altogether three blocks were presented to the participants.
Figure 3. Average number of correctly recalled words across the three blocks. Whiskers in this and the following figures represent 95% confidence intervals.
Figure 4. Average number of correctly recalled words in users of Manipulation/Other strategy (MNP), Maintenance strategy (MNT), and No strategy (NS) across the three blocks.
Figure 5. Percentage of strategy changers from one block to another.
Figure 6. (a) Percentage of strategy users across blocks. (b) Average number of different strategy types used per block among strategy users in that block.
Table 1. Background characteristics of the sample (n = 209).

variable | distribution
gender (F/M/other) | 157/52/0
age (M, s.d.) | 31.82 (8.67)

Table 3. Percentage of participants using different strategy types across the three blocks.

Table 4. Pairwise comparisons of Word List Learning performance between users of Manipulation/Other strategy, Maintenance strategy, and No strategy. MNP/OTHER: Manipulation or Other strategy; MNT: Maintenance; NS: No strategy. HDI: highest density interval of the posterior distribution; BF: Bayes factor. Estimates are the mean group differences from 10 000 samples of the posterior distribution. Bolded values are the results that provide evidence for the alternative hypothesis. Columns (for each of the three pairwise comparisons): Mdiff [95% HDI], BF ± error (%). strategy: 1.72 [1.01-2.4], BF10 > 150. b Positive values represent greater performance in the MNT.

Table 5. Average number of correctly recalled words across the three blocks among strategy users by level of detail and number of different strategy types used. LoD, level of detail; STRAT, number of different strategy types used.
2024-02-16T05:08:12.498Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "5a12a1aab2776d1f9cea160092ae8ef320129f2d", "oa_license": "CCBY", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "5a12a1aab2776d1f9cea160092ae8ef320129f2d", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
12712517
pes2o/s2orc
v3-fos-license
NIR-Cyanine Dye Linker: a Promising Candidate for Isochronic Fluorescence Imaging in Molecular Cancer Diagnostics and Therapy Monitoring
Personalized anti-cancer medicine is boosted by the recent development of molecular diagnostics and molecularly targeted drugs requiring rapid and efficient ligation routes. Here, we present a novel approach to synthesize a conjugate able to act simultaneously as an imaging and as a chemotherapeutic agent by coupling functional peptides employing solid phase peptide synthesis technologies. The development and the first synthesis of a fluorescent dye with similarity in the polymethine part of the Cy7 molecule, whose indolenine-N residues were substituted with a propylene linker, are described. The methylating agent temozolomide is functionalized with a tetrazine as a diene component, whereas the Cy7-cell penetrating peptide conjugate acts as a dienophilic reaction partner for the inverse Diels-Alder click chemistry-mediated ligation route, yielding a theranostic conjugate, 3-mercapto-propionic-cyclohexenyl-Cy7-bis-temozolomide-bromide-cell penetrating peptide. The synthesis route described here may facilitate targeted delivery of the therapeutic compound to achieve sufficient local concentrations at the target site or tissue. Its versatility allows a choice of adequate imaging tags applicable in e.g. PET, SPECT, CT, near-infrared imaging, and therapeutic substances including cytotoxic agents. Imaging tags and therapeutics may be simultaneously bound to the conjugate applying click chemistry. The theranostic compound presented here offers a solid basis for a further improvement of cancer management in a precise, patient-specific manner.

Introduction
The success in extent and sustainability of therapeutic interventions against cancer primarily depends on the quality and precision of the diagnostic technologies. The interaction of newly synthesized ligands with aberrantly expressed receptors as target proteins can fulfil requirements for molecular diagnostics by the use of molecular imaging (MI) and for therapeutic regimens for individualized patient-specific treatment (targeted therapy). The identification of specific amino acid sequences as structures of target proteins and the ensuing development of such ligand molecules require very exigent chemical ligation methods. We combined the well-established Diels-Alder reaction (DAR) [1,2] and the DAR with inverse electron demand (DARinv) with solid phase peptide synthesis (SPPS) [3-5]. These techniques are considered critical pillars for the reformulation of classical drugs. Technically, the functionalization of nucleic acids and their derivatives with thiol groups for oxidative connection of functional peptides has been documented [6]. The synthesis of disulfide-functionalized molecules was also described [7-10]. Our approach relies on the coupling of SH-groups to fluorescent dyes via a 3-mercapto-propionic acid linker, using a cell penetrating peptide (CPP) for membrane transport. By the use of SPPS we synthesized the reaction product 3-mercapto-propionic-cyclohexenyl-Cy7-bis-norbornenyl-bromide-CPP conjugate. This methodology may be used to assess not only morphological structures but also metabolic processes within tissues or cells employing positron emission tomography (PET), magnetic resonance imaging (MRI) and single photon emission computed tomography (SPECT) [11].
Here, we describe the development and the first synthesis of a fluorescent dye with similarity in the polymethine part of the Cy7 molecule whose indolenine-N residues were substituted with a propylene linker. This linker can be functionalized with norbornene and/or alkyne groups. These functional molecules are not restricted to a function as a linker: they also act as a dye in near infrared (NIR) imaging and, as an additional feature, they can be functionalized as bi-functional linkers. The variability of conjugated SH-groups of the different Cy-based dyes facilitates the use of a broad spectrum of FI molecules. Such linkers may in particular be applicable for the development of agents used in MI employing NIR fluorescence methods [17-21] during patient-specific treatment under non-invasive conditions. As a therapeutic modality, here we functionalized a Cy7-like dye initially with {8-carbamoyl-3-(2-chloroethyl)imidazo[5,1-d]-1,2,3,5-tetrazine-4(3H)-one} temozolomide (TMZ) acting as a methylating agent [22]. We functionalized TMZ using a ligation procedure described previously [23]. The effects on the phenotypical change of different primary glioblastoma cells were reported by the Wiessler group [24]. The prototype theranostic conjugate described here, acting both as a contrast agent (CA) and as a multi-faceted therapeutic molecule functionalized with old-fashioned anti-cancer drugs, can improve the effectiveness of anti-cancer interventions and enable monitoring of cellular phenotype change and cell killing effects caused by local enrichment of a cytotoxic agent [14].

Cell culture
We

Cell contamination test and authentication report
To avoid the issues of cross-contamination and misidentification of cell lines, a standardized multiplex cell contamination test (Multiplexion, Germany, purchase No.: 45369600) and a cell line authentication test (MCA) (Multiplexion, Germany, purchase No.: 45376089) were conducted, confirming the absence of contaminations and the authentication of the examined cell lines.

Treatment of carcinoma cell lines
R3327 cells (5 × 10^5) were treated in sterile, four-compartment CELLview Cell Culture Dishes 35 × 10 mm with advanced TC surface (627975, Greiner Bio-One, Germany) in cell culture medium with a dilution of the 3-mercapto-propionic-cyclohexenyl-Cy7-bis-TMZ-bromide-CPP conjugate 12 (final concentration 100 µM), which was dissolved as stock solution in 50% acetonitrile/H2O bidest. Cells were grown as a sub-confluent monolayer in RPMI (control) and the effects were analyzed 20 min, 24 h and 48 h after the onset of treatment. All studies with the subline R3327-MAT Ly/Lu and with the DU145 and MDA-MB-231 cell lines occurred under identical treatment conditions (supplementary information [Figure S7]).

Functionality studies by confocal laser scanning microscopy (CLSM)
We used a Leica confocal microscope TCS SP5 II (objective 63-fold) and examined the images with Leica LAS-AV software. Investigated cells were stained using the wheat germ agglutinin (WGA) Alexa Fluor® 488 conjugate (4 µg/ml, W11261, Life Technologies GmbH, Germany) according to the manufacturer's instructions. The following wavelength ranges were used for Cy7, DAPI and the WGA Alexa Fluor® 488 conjugate: 643-800 nm, 415-466 nm and 498-547 nm, respectively.

Cell cycle analysis
High resolution flow cytometric analyses were performed using a PAS II flow cytometer (Partec, Münster, Germany) equipped with a 100 W mercury lamp and a filter combination for DAPI-stained single cells.
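As a small worked example of the dosing described above, the sketch below applies C1·V1 = C2·V2 to reach the stated 100 µM final concentration of conjugate 12; the stock concentration and culture volume are assumptions, since only the final concentration and the vehicle (50% acetonitrile/H2O) are given:

```python
# C1 * V1 = C2 * V2 dilution arithmetic for dosing conjugate 12 at 100 uM.
# The 10 mM stock and 1 mL compartment volume are assumed for illustration.
stock_conc_uM = 10_000.0   # assumed 10 mM stock in 50% acetonitrile/H2O
final_conc_uM = 100.0      # final concentration stated in the text
well_volume_uL = 1_000.0   # assumed medium volume per dish compartment

stock_volume_uL = final_conc_uM * well_volume_uL / stock_conc_uM
print(f"Add {stock_volume_uL:.1f} uL stock per {well_volume_uL:.0f} uL compartment")  # 10.0 uL
```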
Natively sampled cells were isolated with 2.1% citric acid/0.5% Tween 20 according to the method for high resolution DNA and cell cycle analyses, at room temperature under mild shaking. Phosphate buffer (7.2 g Na2HPO4 · 2H2O in 100 ml distilled H2O), pH 8.0, containing DAPI was used [25]. Each histogram represents the DNA index and cell cycle for 20000 cells. For histogram analyses we used the Multicycle software (Phoenix Flow Systems, San Diego, CA).

Multiparametric flow cytometry analysis
Multiparametric analysis was done using a Galaxy pro flow cytometer (PARTEC, Münster, Germany) by stimulating the fluorochrome DAPI with a 100 W mercury vapour lamp, FITC with a 488 nm air-cooled argon laser and measuring the fluorescence intensities at 530/30 nm, and Cy7 with a red laser diode (excitation 635 nm, emission at 780/60 nm). Green and red fluorescence were measured in a logarithmic mode, whereas DAPI-stained DNA was measured in a linear mode. For each measurement 20000 cells were used. Multiparametric acquisition and analyses were obtained using the Flowmax software (PARTEC, Münster, Germany).

Synthesis of the Cy7-like fluorescent dye building blocks
The schemes describe the steps of the synthetic ways from the educts via intermediates to the different 3-mercapto-propionic-cyclohexenyl-Cy7-based monomer products: the symmetrical 12 (bis-norbornenyl-) variant. Molecule numbers and compound names: supplementary information [Figure S1]. Corresponding reaction schemes and chromatograms are listed in the supplementary information.

Synthesis of the norbornenyl-aminopropylindolenium-bromide (5)
The synthesis of the norbornene-5-exo-carboxylic acid chloride 4 was carried out according to the synthesis protocol published by Boehme et al. [26]. The synthesis of aminopropyl-indolenium-bromide 3 was published by the Maiti group [27]. The residues 3 and 5 were purified by preparative HPLC on a Kromasil 100-C18-10 µm reverse phase column (30 × 250 mm) using an eluent of 0.1% trifluoroacetic acid in water (A) and 80% acetonitrile in water (B). The product was eluted with a linear gradient of 10% B to 80% B in 30 min at a flow rate of 23 ml/min. The calculated mass [m/e] of 3: 217.17 (100.0%); molecular formula C14H21N2. Corresponding reaction scheme: supplementary information [Figure S2]. The product 11 was achieved with a fivefold excess of the Fmoc-protected amino acid and was activated in situ with 5 equivalents of 2-(1H-benzotriazole-1-yl)-1,1,3,3-tetramethyluronium hexafluorophosphate (HBTU) and N,N-diisopropylethylamine. In a final step, the 3-mercapto-propionic-cyclohexenyl-Cy7-bis-norbornenyl-bromide 9 was coupled. The reaction time was 40 min. After each step the resin was washed 5 times with DMF, and after completion the resin was washed three times with dichloromethane (DCM) and iso-propanol and dried.

Purification of the conjugates
Purification of the end product conjugate 12 (Figure 1) was performed by semi-preparative reversed-phase HPLC (Dionex (Idstein, Germany): Ultimate 3000 LPG-3400A pump and variable four-wavelength Ultimate 3000 VWD-3400RS UV/VIS detector (222 nm, 254 nm, 280 nm); column: Chromolith Performance RP-18e column (100 × 10 mm; Merck, Darmstadt, Germany)). The solvent gradient was raised from 5% to 100% acetonitrile in 5 min at a flow rate of 6 ml/min. The aqueous phase consisted of water containing 0.1% TFA. tR = 3.58 min.
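The calculated mass quoted above for compound 3 (m/e 217.17, formula C14H21N2, read as the detected cation) can be cross-checked with a short monoisotopic mass computation using standard principal-isotope masses:

```python
# Monoisotopic mass check for C14H21N2 (compound 3, detected as the cation).
MONOISOTOPIC = {"C": 12.000000, "H": 1.007825, "N": 14.003074}  # atomic mass units

formula = {"C": 14, "H": 21, "N": 2}
mass = sum(MONOISOTOPIC[element] * count for element, count in formula.items())
print(f"C14H21N2 monoisotopic mass = {mass:.2f} u")  # 217.17, matching the reported m/e
```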
The purified material was characterized by analytical HPLC and matrix-assisted laser desorption mass spectrometry.

After 24 h of treatment with 11, none of the tested cell lines showed any morphological change. Each cell line clearly exhibited all tested fluorescence signals, showing the following cellular structures: nucleus (blue), cytoplasm (red) and cell membrane (green). In the DU-145 (C) and the MATLy/Lu cells (L) a slight trend towards speckle formation was noticed. Of note, DU-145 (C) and R3327 (P) cells revealed a clear localization of the red fluorescence signals around the nuclei, with increased signal intensity towards the nuclei in DU-145 cells, whereas in R3327 cells a radial distribution around the nuclei was visible. The tested cell lines did not exhibit signs of morphologic membrane damage. It was conspicuous that the R3327 cells and their metastatic subline MATLy/Lu showed a trend towards formation of spherical structures (L), whereas the DU-145 cells grew as a monolayer (C). The cell membranes regularly revealed a clear green fluorescence arising from the exposure to Alexa Fluor® 488-WGA.

Cell localization studies of the 3-mercapto-propionic-cyclohexenyl-Cy7-bis-norbornenyl-bromide-CPP conjugate 11 using CLSM

After 48 h of treatment with the conjugate 11, the intensity of the red fluorescence signal decreased, as seen in the respective images (D, H, M, Q). This observation also applied to the blue signal intensity. The intensity of the green fluorescence signal derived from the WGA staining of the cell membranes was increased but otherwise remained nearly unchanged, with the exception of the cell membranes of the DU-145 cells, which exhibited hardly detectable green fluorescence signals. However, the trend towards speckle formation continuously increased with time, a phenomenon which could be explained by ageing-related depletion processes over the duration of the cell culture experiments. All these findings could be explained by tentative general changes of the cellular phenotype or by DAPI (DNA) depletion processes during the progressing experimental time. The aim of this step of the experiment was to monitor the cellular phenotype during the course of the experiment after treatment with the intracellular fluorescent dye 3-mercapto-propionic-cyclohexenyl-Cy7-bis-norbornenyl-bromide-CPP 11 without the methylating agent TMZ. These findings are of importance for forthcoming studies in which pharmacologically active cytotoxic molecules are used.

Analyses of the cell phenotype after treatment with 3-mercapto-propionic-cyclohexenyl-Cy7-bis-TMZ-bromide-CPP 12 using CLSM

Here, we assessed the impact of 12 on the phenotype of the investigated cancer cell lines.

Flow cytometry cell cycle analyses of R3327 cells after treatment with 3-mercapto-propionic-cyclohexenyl-Cy7-bis-TMZ-bromide-CPP (12)

No morphological difference was observed between the WGA- and DAPI-stained R3327 cells treated for 20 min with 3-mercapto-propionic-cyclohexenyl-Cy7-bis-TMZ-bromide-CPP and the untreated control cells. All control cells presented as morphologically unaffected, with nearly identical fractions in the S-phase (35.8% and 36.6%) and in the G2/M-phase (14.7% and 16.5%). The cell cycle analysis of cells treated with 12 revealed 49.8% in the G0/G1-phase, similar to the G0/G1-fraction in the control cells (49.5% and 46.8%), but an increased fraction of cells in the S-phase (40.6%) and a decreased fraction in the G2/M-phase (9.5%) (Table 1 A).
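For orientation, phase fractions like those quoted above come from analyzing a DNA-content histogram around its 2N and 4N peaks; the study itself used the Multicycle software, so the following Python sketch is only a simplified stand-in showing the idea of gating, with hypothetical peak positions, gate widths and simulated data.

```python
import numpy as np

def phase_fractions(dna_content, g1_peak, g2_peak, width=0.15):
    """Classify cells into G0/G1, S and G2/M by simple gating on DNA content.

    dna_content : per-cell DNA signal (arbitrary units), one value per cell
    g1_peak     : position of the 2N (G0/G1) peak
    g2_peak     : position of the 4N (G2/M) peak
    width       : half-width of each gate, as a fraction of the peak position

    Real cell-cycle software fits overlapping distributions instead of
    using hard gates; this is a deliberately crude illustration.
    """
    dna = np.asarray(dna_content, dtype=float)
    g1 = np.abs(dna - g1_peak) <= width * g1_peak
    g2 = np.abs(dna - g2_peak) <= width * g2_peak
    s = (dna > g1_peak * (1 + width)) & (dna < g2_peak * (1 - width))
    n = len(dna)
    return {"G0/G1": g1.sum() / n, "S": s.sum() / n, "G2/M": g2.sum() / n}

# Example with simulated data: roughly 50% G0/G1, 35% S, 15% G2/M
rng = np.random.default_rng(0)
sim = np.concatenate([
    rng.normal(100, 5, 10000),     # G0/G1 cells at 2N
    rng.uniform(115, 185, 7000),   # S-phase cells replicating DNA
    rng.normal(200, 10, 3000),     # G2/M cells at 4N
])
print(phase_fractions(sim, g1_peak=100, g2_peak=200))
```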
After 24 h of treatment with 12, an initial change of the cellular phenotype occurred (Figure 3). Cell membrane structures after 48 h were dispersed to a large extent and hardly detectable. Nevertheless, the blue nuclear (DAPI) and red fluorescent perinuclear structures (Cy7) were still visible, indicating the cell-killing effect of the theranostic molecule 12 through the TMZ. The cell cycle analysis after 48 h of treatment showed the following distribution: the untreated controls and the WGA/DAPI-control showed a nearly identical increase of the cell fraction in the G0/G1-phase, namely 67.9% and 68.4%. The cells treated with 12 revealed a strong decrease of the cell fraction in the G0/G1-phase (38.1%). In the G2/M-phase, the fractions of 11.6% and 15.3% in the untreated and WGA/DAPI-control cells, respectively, appeared unchanged. The cell fraction in the S-phase accounted for 31.7% after treatment with 12, while the S-phase fractions of the untreated control and WGA/DAPI-control cells decreased to 20.4% and 16.1%, respectively. The decreased fraction of cells in the G0/G1- and G2/M-phases, together with the increase in the S-phase, indicates an arrest of the cell cycle in the S-phase. Additionally, it suggests a post-mitotic cell death after passage through the G2/M-phase. Originally, such a process was observed as a consequence of accumulated cell damage caused by different irradiation effects in radio-sensitivity studies [34-36]. Such a phenomenon was already documented in cell cycle studies with functionalized TMZ on primary TMZ-resistant TP366 glioblastoma cells [24] (Table 1). Importantly, a time-dependent increase of the fraction of cells in the S-phase was observed in all cell lines (ranging between 21.1% and 31.7%), except in the DU-145 cells, which showed a decrease from 30.3% to 23%, whereas the corresponding controls (untreated and WGA/DAPI-stained cells) showed S-phase fractions of 7.3% and 6.8%, respectively. In parallel, a strong decrease of the G2/M-phase fraction was assessed for cells treated with 12 for 24 h and 48 h: 11.8% and 6.6% (DU-145) and 11.4% and 8.2% (MDA-MB-231), respectively. The G2/M-phase fraction of the MATLy/Lu cells remained nearly constant (20.7% and 21.5%), whereas that of the R3327 cells increased from 26.1% to 30%. The G2/M-phase fractions of the controls showed an inconspicuous pattern (Table 1 A-D).

Multiparametric flow cytometry studies

High-resolution multiparametric flow cytometry analyses were performed to investigate the damaging processes caused by 12 in R3327 cells (Figure 3). We tested two major cell parameters in dot plots representing cell size and granularity, which are influenced by the morphology of the cell, the structure of the nucleus and the quantity of cytoplasm-localized compartments (including mitochondria and Golgi). The fraction of intact cells in quadrant QB1 increased over time after the application of 12, from 48.6% (20 min) to 59.4% (24 h) and 71.2% (48 h). Quadrant analysis QB3 assessed cell shrinking, influenced by cell death and/or apoptosis. The increase of the cell fraction characterized by clearly increased granularity, from 54.3% (20 min) to 89.2% (24 h), pointed to strong membrane effects, whereas the subsequent decrease to 28.6% at 48 h after the onset of treatment with 12 suggested disintegration of morphological membrane structures (Figure 3, column 2).
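The quadrant fractions reported above (QB1, QB3) come from partitioning a two-parameter dot plot. A minimal Python sketch of such a quadrant analysis follows; the channel names, thresholds and data are hypothetical stand-ins for the instrument software actually used (Flowmax).

```python
import numpy as np

def quadrant_fractions(size, granularity, size_cut, gran_cut):
    """Partition a (size, granularity) dot plot into four quadrants and
    return each quadrant's fraction of all measured cells."""
    size = np.asarray(size)
    gran = np.asarray(granularity)
    return {
        "Q1": np.mean((size >= size_cut) & (gran < gran_cut)),   # large, low granularity (intact)
        "Q2": np.mean((size >= size_cut) & (gran >= gran_cut)),  # large, high granularity
        "Q3": np.mean((size < size_cut) & (gran >= gran_cut)),   # shrunken, high granularity (dying)
        "Q4": np.mean((size < size_cut) & (gran < gran_cut)),    # shrunken, low granularity
    }

# Example with random stand-in data; the cuts are arbitrary.
rng = np.random.default_rng(2)
print(quadrant_fractions(rng.normal(100, 20, 5000),
                         rng.normal(50, 15, 5000),
                         size_cut=90, gran_cut=60))
```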
Discussion

The theranostic conjugate presented here is the prototype of multifunctional adducts based on the 3-mercapto-propionic-cyclohexenyl-Cy7-norbornenyl-bromide-CPP, characterized by a modular structure consisting of units with different functional molecules. CPP is a peptide-based module which harbours a helical structure with amphiphilic physico-chemical properties that facilitate the passage of molecules across cellular membranes [37-39]. Its mechanisms of action and transport efficiency are well documented [40-42]. As a cargo, the Cy7-like molecule is here covalently linked to the CPP using solid-phase peptide synthesis (SPPS) according to the protocols of Merrifield [4] and Carpino [43]. Of note, we functionalized the Cy7-like molecule at the cyclohexene, via a thioether, with a propionic acid for SPPS chemistry (9). The Cy7 can act as an NIR fluorescent dye [14,44]. In order to broaden the application field of Cy7 beyond imaging purposes, we further functionalized the 2,3,3-trimethyl-3H-indole 2 of the Cy7 at the indolenium nitrogen to give the aminopropyl-indolenium-bromide 3, which in turn was reacted to the product norbornenyl-aminopropyl-indolenium-bromide 5 (supplementary information [Figure S2]). The resulting 3-mercapto-propionic acid-cyclohexenyl-Cy7-bis-norbornenyl-bromide 9 possesses functional groups which open up a broad spectrum of conjugation options: (i) the carboxylic group serves as the coupling site for SPPS; (ii) the norbornenyl molecule attached to the N-indolenium via an aminopropyl group acts as a dienophile reaction partner for click chemistry [45] in the classical DAR [1,2] and the DARinv [46]; and (iii) the Cy7-like molecule, as described here, is not limited to its NIR imaging function but also serves as a spacer to avoid steric interactions between the cargo molecules (imaging components and/or active substances) which could adversely affect the different ligation reactions. We demonstrated here the functionalization of the Cy-based linker by ligation of the pharmacologically active substance TMZ. The combination of chemotherapeutics with different modes of action and targeting, together with the variability of the number of ligated molecules [47], enables local concentrations sufficient for the killing of therapy-resistant tumor cells. Here we employed multiparametric flow cytometry to detect the influence of the theranostic conjugate on cell metabolism, including the cell cycle phase and the accumulation of cells blocked in a particular cell cycle phase, as reviewed by Larsen [48]. Our observations of an increase of cell debris, a decrease of the G2/M-phase fraction after treatment with 12, and an accumulation of cells in the S-phase suggest an S-phase blockage followed by a cell death process. In an analogous manner, the extension of the variant 11 by ligating compounds applicable to additional imaging options (MRI, PET, SPECT and CT) could help in delineating the tumor from the surrounding healthy tissue, a feature that no currently applied imaging modality fulfils exactly. The development of imaging modalities for diagnostic purposes has progressed significantly during the last decade, yet imaging readouts have not reached the accuracy of histological evidence from patient biopsy material.
Although the applicability of MRI to the diagnosis of prostate cancer (PC) has advanced substantially, its use in early-stage prostate cancer, for example, remains very limited, in particular due to the relatively low sensitivity and specificity of MRI. Unenhanced MRI is the diagnostic modality of choice, since the use of contrast agents (e.g. gadolinium-based) does not improve the diagnostic properties significantly. Therefore, histological verification of tissue obtained by transrectal biopsy remains the gold standard in prostate cancer diagnosis. The existence of a selective CA could enable the localization of small intra-tumour areas with specific characteristics, and their treatment. This approach to diagnostic imaging is also of interest for the visualization of the invasive properties of a tumor. Recent molecular imaging studies using MRI alone or in combination with MR spectroscopy (MRI, MRSI, PET) [49] allow a certain optimism that applications of imaging modalities with improved, specific in vivo histology characteristics may be a matter of the near future. New approaches to visualize, image and treat these crucial areas of actively invading tumor have the potential to improve treatment outcome. Finally, coupling diagnostic molecules with therapeutic agents (cytostatic, immuno-modulatory or radioactive) and ligating radio- and chemo-sensitizing molecules may significantly improve chemotherapeutic approaches, broadening the spectrum of cancer treatment options.
Modifications of high-pT di-hadron correlations for identified triggers

Angular correlations with high transverse momentum hadrons have become an established tool for studying properties of the medium created in ultra-relativistic heavy ion collisions. When investigated in two dimensions, relative azimuth and relative pseudorapidity, the small-angle part of such correlations is commonly attributed to QCD jets, while long-range correlations, specifically the ridge observed at low and intermediate momentum, are often described as manifestations of hydrodynamic expansion of the system, e.g. flow. Comparative analysis of both features, presented in this work for correlations with identified leading hadrons, explores the particle-type dependence of the jet structures and challenges the flow-only hypotheses behind the ridge.

Introduction

The energy densities achieved in nuclear collisions at the Relativistic Heavy Ion Collider (RHIC) and, more recently, at the Large Hadron Collider (LHC) have provided an experimental environment in which to study a new form of QCD matter. An increasing body of evidence supports the QCD prediction of partonic deconfinement in the limits of high temperature and/or energy density. The creation of the strongly interacting Quark Gluon Plasma (sQGP) is evident, for example, in the collective behavior of the bulk particles in the soft sector (transverse momentum p_T < 1.5 GeV/c). The production rates of identified hadrons (including strange and multi-strange species), the mass-dependent modifications of the spectral distributions, and the strength of azimuthal correlations with respect to the reaction plane (elliptic flow, v_2) are all consistently described by a thermalized, hydrodynamically expanding medium with properties resembling a "perfect fluid" [1]. More recently, new theoretical developments have led to recognition of the importance of initial-state non-uniformity due to density/geometry fluctuations. These fluctuations result in significant anisotropies of higher orders (v_3 and above), while originally it was expected that these terms would be negligible or canceled by symmetry. Investigation of the particle-type dependence of the v_n anisotropies is one of the goals of this work.

It has now been experimentally established that azimuthal anisotropies extend far above soft momenta, although different physics mechanisms may be responsible for the effect in different kinematic regimes (see the recent review in [2]). The hydrodynamic expansion under pressure gradients dominating the long-range correlations of soft hadrons gives way to path-dependent energy loss, or jet quenching, at high p_T. The jet quenching effect in the hard sector (p_T > 6 GeV/c for hadrons, E_T > 30 GeV for jets) is also established through the suppression of hadron and jet production rates at both RHIC and LHC. While little modification of the surviving jets in the high-p_T regime of heavy ion events has been observed compared to the reference measurements from pp collisions [3,4], a number of novel phenomena have been discovered in the intermediate momentum range (1.5 < p_T < 6 GeV/c), which is the focus of this study. Of particular interest is the prominent excess of near-side (small relative azimuth, ∆φ) yields that extends to large relative pseudorapidities (∆η): the ridge. This long-range ridge was discovered in two-dimensional (∆η, ∆φ) correlations, along with shape modifications of the away-side (∆φ ∼ π), in associated hadron distributions for high-p_T trigger hadrons [5].
A number of physics mechanisms were initially proposed to describe the ridge; however, the most widely accepted interpretation now attributes the phenomenon to the higher-order flow terms, the final-state imprint of the inhomogeneous initial conditions [6]. Another important discovery in the intermediate momentum range is the observation of constituent quark scaling behavior in measurements of elliptic flow for baryons and mesons, as well as in the different trends of the nuclear modification factors for baryons and mesons [7]. The most natural explanation for the observed trends was put forward by recombination/coalescence models of hadron production [8,9], in which constituent quark number scaling arises naturally. The recombination approach also readily provides an explanation for the so-called baryon/meson puzzle: it has been observed that the hadron composition in central Au+Au collisions at intermediate p_T is very different from that in pp events, with a large relative baryon-over-meson enhancement reported across all measured hadron flavors.

For di-hadron correlation measurements in the intermediate momentum range, the dominance of hadronization through recombination mechanisms is expected to lead to a "trigger dilution" of the jet-like near-side per-trigger yields measured in Au+Au events compared to the pp reference. That is, the correlations measured in 200 GeV pp collisions for leading hadrons above 4 GeV/c are dominated by QCD fragmentation contributions and capture the back-to-back di-jet structures left by fragmentation of the hard-scattered partons. In the case of recombination of thermal quarks from the medium created in Au+Au collisions at the same energy, no near-side jet-like peak would accompany the resulting hadron, thus effectively reducing (diluting) the per-trigger associated yields. Moreover, this dilution effect would be stronger for baryon than for meson triggers. An earlier measurement of associated near-side yields for protons and pions [10] shows an increasing difference in correlation strength for more central events for proton, but not pion, triggers; in that work, however, no separation of the short- and long-range contributions was performed, complicating the interpretation.

For the long-range correlations associated with baryons and mesons, if these are indeed dominated by the higher-order flow terms from hydrodynamic expansion of the fluctuating initial state, one would expect the v_3 (and higher) harmonics to exhibit the same scaling behavior as elliptic flow (v_2), i.e. stronger correlations for baryons than for mesons. Systematic studies of two-dimensional di-hadron correlations with various identified leading hadrons are needed to provide further differential tests for establishing model interpretations. The STAR detector has a great advantage for these types of studies, relying on its full azimuthal and extended pseudorapidity coverage with nearly uniform acceptance. One of STAR's main detectors, the Time Projection Chamber (TPC), provides multiple means of particle-type identification for the recorded charged hadrons. The momentum information, combined with measurements of ionization energy loss in the detector material, allows identification of "common" charged hadrons, π±, K± and (anti)protons, in the soft sector and, more recently, at the higher momenta of the relativistic rise region [11].
Together with invariant mass calculations, this information also allows STAR to topologically identify neutral hadrons decaying weakly into charged particles, particularly Λ and K0s, as well as to statistically identify multiple resonances. High-purity samples of Λ and K0s from the intermediate momentum range have been used as leading hadrons to construct 2D di-hadron correlations in a first attempt in STAR to quantify the particle-type dependence of the correlation strength [12]. In this early work, the measured correlations were decomposed into a jet-like component (small ∆φ, small ∆η) and a ridge component (small ∆φ, large ∆η). The v_2-modulated background was subtracted from both parts of the correlations for further analysis. The extracted yields were then studied as a function of event centrality (expressed via the number of participants, N_part, estimated by a Glauber model) for Au+Au and Cu+Cu collisions at 200 GeV. It was reported that while little or no dependence on N_part could be seen in the jet-like yields associated with different types of hadronic triggers (Fig. 1, right), the ridge yield increases approximately linearly for all correlations studied (Fig. 1, left; the band in the figure, which is taken from [12], shows an estimate of the systematic uncertainties due to the subtraction of elliptic flow contributions). For our discussion, the interest is in the comparative analysis of baryon- vs. meson-related yields. Unfortunately, the uncertainties of these preliminary measurements prevent decisive conclusions. However, we note that the jet-like yields associated with lambdas seem lower than those for kaons, while the reverse tendency is present in the ridge measurements.

The goal of this work is to revisit the question of the particle-type dependence of the correlation structures for leading baryons and mesons, taking advantage of the new high-luminosity data samples recorded by STAR in the years 2008 and 2010 for 200 GeV d+Au and Au+Au collisions, respectively. We focus on the Au+Au collisions from the top 10% centrality selection, where medium effects are expected to be maximal. The new d+Au dataset is used as a reference establishing a baseline in the absence of hot nuclear matter. The statistical separation of charged pion triggers from (anti)proton and charged kaon triggers, performed in this work for correlation studies, provides new constraints for theoretical descriptions of the data.

Analysis of Di-hadron Correlations with Identified Leading Hadrons

The highest-p_T charged particle in an event is first selected as the trigger hadron. We require the trigger momentum for all correlations to be between 4 and 5 GeV/c. Statistical hadron identification techniques employing measurements of ionization energy loss in the TPC material in the relativistic rise region are used, following established techniques [11]. The separation between the typical energy deposition (dE/dx) of pions and non-pions in the STAR TPC in the kinematic range of our trigger selection allows a "pure pion" sample to be obtained trivially with a single cut on the dE/dx-related variable. For the pion triggers presented in this work, the purity of the sample is estimated to be 98%, and the sample contains approximately 50% of all charged pions recorded within the imposed p_T range. The remaining charged hadrons between 4 and 5 GeV/c that fall below our pion selection cut are then kept as a "pion-depleted" trigger set, which contains the majority of the (anti)protons and charged kaons, and the remaining 50% of the charged pions.
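Anticipating the statistical subtraction described just below: each per-trigger correlation of the pion-depleted sample is a mixture of the pure-pion correlation (with a known pion contamination fraction) and the sought non-pion correlation, so the latter can be unfolded in one line. The sketch below is an illustrative stand-in with hypothetical inputs, not the analysis code.

```python
import numpy as np

def nonpion_correlation(corr_depleted, corr_pion, pion_fraction):
    """Unfold the per-trigger correlation of non-pion (p+K) triggers from
    the pion-depleted sample by removing its known pion contamination.

    corr_depleted : per-trigger correlation of the pion-depleted sample
    corr_pion     : per-trigger correlation of the pure-pion sample
    pion_fraction : fraction of triggers in the depleted sample that are pions
    """
    f = pion_fraction
    return (np.asarray(corr_depleted) - f * np.asarray(corr_pion)) / (1.0 - f)
```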
These two trigger samples are used to construct the corresponding 2D di-hadron correlations, following procedures and corrections established in previous di-hadron works [13]. The charged tracks used to construct the correlations with respect to the trigger hadrons are required to have transverse momentum between 1.5 and 4 GeV/c, excluding potential overlap with the leading hadron pool. Figure 2 presents the di-hadron correlations obtained after the efficiency, acceptance and track splitting/merging effects are taken into account. The left panel shows a baseline correlation made for all leading charged hadrons with 4 < p_T^trig < 5 GeV/c and associated tracks with 1.5 < p_T^assoc < 4 GeV/c, without any identification. The middle panel shows the correlation obtained with identical kinematic selection for the sample of pure-pion triggers. Even in these raw correlations (without subtraction of the combinatorial background) the emerging differences are evident: the jet-like peak appears larger for the pion triggers than for inclusive charged hadrons, while a decrease in the strength of the long-range ridge can also be spotted. Once the 2D correlation for the pion triggers has been measured, this contribution can be directly subtracted from the correlation for the pion-depleted sample, which, after proper renormalization to account for the number of non-pion triggers left, provides the correlation measurement for the proton and kaon mix of leading hadrons ("p+K" or "non-pions" in the following). The relative composition of the non-pion trigger sample is 60% (anti)protons vs. 40% charged kaons. Although further separation of the non-pion sample into pure protons and pure kaons is not performed in this work, due to the significant overlap of the ionization energy loss distributions for these species, comparative analysis of high-precision pure-pion and non-pion triggered correlations allows one to infer the differences between baryons and mesons, assuming recombination is the dominant mechanism behind the constituent quark scaling behavior. No additional treatment or data modeling is needed to note significant differences in both the short- and long-range correlation components between the pion and non-pion trigger samples. As seen in the right panel of Fig. 2, the non-pion triggered correlation has a significantly smaller jet-like peak, while the ridge amplitude is much higher than for pion triggers (as expected from the comparison of the pion-triggered correlation to that with inclusive charged hadron triggers).

Jet-like peak

We separate the short- and long-range components of each 2D correlation by direct subtraction of the ridge-dominated part (0.9 < |∆η| < 1.5) from the small-angle (|∆η| < 0.9) region (after proper normalization per unit ∆η). The ridge/jet separation cut at 0.9 ensures that over 98% of the jet-like peak is contained within the inner selection. Such a subtraction assumes that contributions other than the jet-like peak are ∆η-independent; multiple fitting tests performed justify this assumption. The direct subtraction of the ridge region removes the contributions from the combinatoric background modulated by hydrodynamic flow, minimizing the uncertainties on the extracted jet-like yields (modulo the assumption of rapidity independence of the flow harmonics in the kinematic region covered). We observe significantly larger jet-like yields associated with pion triggers than with the non-pion measurement.
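A minimal Python sketch of the ∆η-band subtraction just described, assuming a per-trigger correlation histogram binned in (∆η, ∆φ) with known bin edges (all names, shapes and band limits here are illustrative stand-ins, not the STAR analysis code):

```python
import numpy as np

def jet_like_yield(corr2d, deta_edges, jet_band=0.9, ridge_band=(0.9, 1.5)):
    """Extract the jet-like yield vs. d-phi by subtracting the per-unit-d-eta
    ridge estimate, taken from the large-|d-eta| band, from the small-angle
    region.

    corr2d     : 2D array of per-trigger pair counts, shape (n_deta, n_dphi)
    deta_edges : bin edges along the d-eta axis, length n_deta + 1
    """
    centers = 0.5 * (deta_edges[:-1] + deta_edges[1:])
    widths = np.diff(deta_edges)
    inner = np.abs(centers) < jet_band
    outer = (np.abs(centers) >= ridge_band[0]) & (np.abs(centers) < ridge_band[1])
    # ridge density per unit d-eta, as a function of d-phi
    ridge_per_deta = corr2d[outer].sum(axis=0) / widths[outer].sum()
    # subtract the scaled ridge from the integrated inner band
    jet_vs_dphi = corr2d[inner].sum(axis=0) - ridge_per_deta * widths[inner].sum()
    return jet_vs_dphi  # integrate over d-phi for the total jet-like yield
```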
In theoretical calculations, the leading pions and non-pions (specifically, protons) have different contributions from the fragmentation of quarks and gluons. Thus, it is interesting to test whether color-charge effects in jet-medium interactions can be seen in the correlation data with identified triggers. To differentiate between cold and hot nuclear matter effects, we compare the jet-like yields for pion and non-pion triggers from central Au+Au collisions with the reference measurements obtained in an identical way from the d+Au data. We notice no sizable changes in the peak shapes, in either the ∆η or the ∆φ dimension, between the Au+Au and d+Au data for the same trigger types. The integrated jet-like yield for the pion triggers from central Au+Au data is significantly higher than for pion triggers from d+Au collisions, while the non-pion triggers show little to no change. The relative enhancement of the pion-triggered yield is of the order of 30% and could be attributed to jet energy loss resulting in additional softer hadrons along the direction of the jet and/or to a modification of the jet fragmentation pattern due to the presence of the medium. Qualitatively, this result is consistent with the findings of the jet-track correlation analysis [14]. The absence of a change in the jet-like yield for non-pions is unlikely to be due to color-charge effects, as a larger energy loss for leading gluon vs. quark jets would result in the opposite effect, increasing the associated yields even more. An alternative mechanism that would lead to a modification of the relative non-pion to pion-triggered yields is readily provided by the recombination model. The larger contribution from the recombination of thermal quarks to proton production, compared to that of pions, which are expected to be largely formed through fragmentation by 4 or 5 GeV/c, would result in the trigger dilution effect mentioned earlier. Since the systematic uncertainty on the integrated jet-like yields is dominated by the overall uncertainty on the tracking efficiency, which is uncorrelated between the d+Au and Au+Au data samples but fully correlated for pions and non-pions in the same data, we construct a double ratio of the yields to eliminate this uncertainty source. We find that this double ratio of non-pion to pion triggered jet-like yields from Au+Au data over the same ratio in d+Au is 0.7 ± 0.1 (stat.), pointing to an additional decrease of the associated yields for non-pion triggers in central Au+Au collisions with respect to the reference measurement from d+Au.

Ridge

The dependence of the ridge and away-side yields on the trigger identity is another focus of this study. Recently, the ridge and away-side structures have commonly been described in terms of higher-order Fourier harmonics and have been modeled rather successfully hydrodynamically. Correlations with identified triggers allow scaling features (such as mass or quark number) to be studied to support or challenge this picture. To characterize the trigger-type dependence of the long-range components of the measured correlations, the corresponding jet-like near-side peak discussed in the previous section is first subtracted from the full 2D measurement. We observe no rapidity-dependent residuals in the obtained correlations, and thus can continue the study with a 1D projection onto relative azimuth. The left panel of Fig. 3 overlays the resulting projections for pion and non-pion triggers. Significantly larger ridge and away-side yields are clearly seen for p+K triggers compared to pion triggers.
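The cancellation in the double ratio quoted in the jet-like peak discussion above can be made explicit with a small sketch: an efficiency factor common to the two yields within one dataset divides out of each inner ratio, so only the uncorrelated statistical uncertainties propagate. All numbers below are hypothetical.

```python
import numpy as np

def double_ratio(y_nonpi_AuAu, y_pi_AuAu, y_nonpi_dAu, y_pi_dAu):
    """Double ratio of non-pion to pion jet-like yields, Au+Au over d+Au.

    Each argument is a (value, statistical_uncertainty) pair. A common
    tracking-efficiency factor multiplies both yields within one dataset,
    so it cancels in each inner ratio and drops out of the double ratio.
    """
    (a, da), (b, db), (c, dc), (d, dd) = (y_nonpi_AuAu, y_pi_AuAu,
                                          y_nonpi_dAu, y_pi_dAu)
    r = (a / b) / (c / d)
    # relative statistical uncertainties add in quadrature
    rel = np.sqrt((da / a) ** 2 + (db / b) ** 2 + (dc / c) ** 2 + (dd / d) ** 2)
    return r, r * rel

value, err = double_ratio((0.28, 0.02), (0.40, 0.02), (0.35, 0.02), (0.35, 0.02))
print(f"{value:.2f} +/- {err:.2f}")  # of the order of the 0.7 +/- 0.1 quoted above
```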
Fourier fits including harmonics up to the fifth order are performed on each of the projections and are also shown in the figure. The extracted Fourier coefficients, V_n, each containing the contributions from the corresponding n-th order term for the trigger and associated hadrons, are plotted in the middle panel of Fig. 3. For comparison, the Fourier coefficients for the correlation in the same kinematic range with unidentified charged triggers are also presented. We find very small V_5 terms for all correlations, with higher harmonics vanishing completely. For the coefficients V_1 through V_4, the values from the non-pion-triggered correlation fit are consistently above those from the pion-triggered correlation (with the unidentified-trigger results placed in between, as expected).

Fig. 3. (Left) The lines illustrate results of the Fourier fit described in the text. (Middle) Coefficients of the Fourier fit for long-range correlations associated with pion, non-pion and inclusive charged trigger particles from the same data. (Right) The ratio of the V_3/V_2 coefficients of the Fourier expansion for correlations with pion and non-pion triggers (symbols), and the extrapolated estimate of this ratio for proton triggers (box). The extrapolation assumptions are detailed in the text. The straight line through the pion-related point is intended to guide the eye.

In the hydrodynamic picture, the Fourier coefficients extracted from the data fits contain combined information about the flow of the trigger and of the associated hadrons. For the separation of these contributions, the v_n factorization is widely accepted, i.e. V_n = <v_n^trig><v_n^assoc>. For all correlations studied in this work, the kinematic selection for associated hadrons is identical; therefore one can assume that the associated-hadron contributions cancel in ratios of the extracted coefficients. Moreover, under the factorization assumption, the constituent quark scaling behavior, if it held for the third harmonic as well, should lead to identical ratios of the V_3/V_2 harmonics for each set of identified triggers, as the ratio would simply reflect the per-quark v_3/v_2. The ratio of the V_3/V_2 harmonics extracted from the Fourier decomposition of the pion- and non-pion-triggered correlations is presented in the right panel of Fig. 3. A significantly larger ratio is observed for the p+K triggers, despite their consisting of a mix of baryons and mesons, indicating a breaking of the scaling for v_3. For illustration, we extrapolate the measured ratio to baryons only, assuming (under the constituent quark scaling hypothesis) that the kaon contribution is identical to that of pions, which naturally leads to an even bigger discrepancy. We conclude that the triangular harmonic does not follow the same scaling pattern as the elliptic flow. In the context of di-hadron correlation studies, this indicates that the interpretation of the rapidity-independent terms through hydrodynamic flow alone is incomplete.

Summary

In summary, new measurements of two-dimensional di-hadron correlations with identified leading hadrons from 200 GeV central Au+Au collisions recorded by the STAR detector have been presented. For particle identification purposes, the relativistic rise of the ionization energy loss in the STAR TPC for charged hadrons between 4 and 5 GeV/c was utilized to separate charged pion triggers. Statistical separation methods were used to extract the correlation measurements for a trigger sample without pions, consisting of about 60% (anti)proton and 40% charged kaon triggers.
To study the medium-induced effects on the jet-like part of the correlation, a reference measurement from minimum bias d+Au events at the same energy was used. We find that for the pion-led correlation, the associated hadron yield in the jet-like peak is enhanced significantly in central Au+Au collisions compared to d+Au data for the charged hadrons with momenta between 1.5 and 4 GeV/c studied in this work. At the same time, the non-pion triggers show no such trend in the jet-like peak associated yields. It is hard to expect such differences between the two trigger sets to come from a color-charge dependence of the energy loss; however, the relative decrease of the non-pion to pion associated yields in the jet cone in Au+Au events vs. d+Au events has a natural explanation in recombination models. For the long-range component of the correlation functions, we observe significantly higher ridge and away-side amplitudes associated with the non-pion triggers. Fourier expansion of the long-range correlations yields non-zero terms up to the fifth order and allows a comparison of the constituent quark scaling behavior for the second and higher harmonics. We find that the ratio of the V_3/V_2 Fourier coefficients from the fits to the pion-triggered data is significantly lower than that from the non-pion correlations, indicating that the constituent quark scaling pattern is not preserved for v_3. This new observation strongly suggests that the interpretation of the long-range component of two-dimensional di-hadron correlations in terms of hydrodynamic flow alone is incomplete.
Tunable Magnetic Properties of Heterogeneous Nanobrush: From Nanowire to Nanofilm

Using a bottom-up assembly technology, heterogeneous magnetic nanobrushes, consisting of Co nanowire arrays and a ferromagnetic Fe70Co30 nanofilm, have been fabricated by an anodic aluminum oxide template method combined with sputtering technology. Magnetic measurements suggest that the magnetic anisotropy of the nanobrush depends on the thickness of the Fe70Co30 layer, and that its total anisotropy originates from the competition between the shape anisotropy of the nanowire arrays and that of the nanofilm. Micromagnetic simulation indicates that the switching field of the nanobrush is 1900 Oe, while that of the nanowire array is 2700 Oe. These results suggest that the film of the nanobrush can promote the magnetization reversal of the nanowire arrays.

Introduction

With the development of nanotechnology, magnetic materials with different shapes have recently been studied widely for their importance to fundamental research and their potential technological applications [1-4]. Nanomaterials, including magnetic materials, are classified as zero-dimensional nanoparticles, one-dimensional nanowires and two-dimensional nanofilms. Their special low-dimensional structures give rise to unique physical and chemical properties, especially magnetic properties, compared with the corresponding bulk materials [5-8]. Among these nanomaterials, magnetic nanowire arrays have attracted much attention owing to their significance for the magnetization reversal mechanism, high-density magnetic recording media and sensors [9-13]. The properties and applications of magnetic nanowire arrays are mainly determined by their magnetic anisotropy. Generally, magnetic nanowire arrays have uniaxial anisotropy along the long axis of the wire because of their shape anisotropy, so it is hard to change the total anisotropy of a given magnetic nanowire array with fixed length and diameter. Fortunately, hexagonally close-packed (HCP) Co nanowire arrays have a magnetocrystalline anisotropy constant that is comparable to their shape anisotropy constant; this gives us a chance to adjust the total anisotropy of a magnetic nanowire array by changing the preferred growth orientation of the HCP Co nanowires. Up to now, many groups have attempted to adjust the magnetic anisotropy of Co nanowire arrays by tailoring their microstructure or by using multilayer nanowire arrays [10,14,15]. In these cases, the total anisotropy of the nanowire arrays increases if the direction of the magnetocrystalline anisotropy is parallel to that of the shape anisotropy, and decreases if they are perpendicular to each other [15]. However, it is not easy to obtain HCP Co nanowire arrays with the expected microstructure or crystalline texture, and it is still a challenge to change the easy magnetization direction of arrays of other materials with lower magnetocrystalline anisotropy constants, for example, Ni, Fe or other magnetic alloys [16-18]. A nanobrush can be regarded as a combination of a nanofilm and nanowire arrays, and it has been studied as a high-efficiency functional nanodevice made of nanowires or other nanostructures [19-22]. In this paper, magnetic nanobrushes made of Co nanowire arrays and an Fe70Co30 nanofilm have been fabricated for the first time to study their tunable magnetic anisotropy. Magnetic measurements indicate that the magnetic anisotropy of the nanobrush can be adjusted by the thickness of the Fe70Co30 layer.
Micromagnetic simulation was also used to study the magnetization reversal process of the nanobrush in comparison with a nanowire array.

Experimental Section

Magnetic nanobrushes, which consist of Co nanowire arrays and an Fe70Co30 nanofilm, have been fabricated via AC voltage electrodeposition combined with sputtering technology. The AAO template was prepared by anodic oxidation of a 99.999% pure Al sheet subjected to a two-step anodizing process in oxalic acid solution [23]. The process of fabricating the AAO template is similar to our previous study [10]. Specifically, the Al foils were anodized in 25.6 g l−1 H2C2O4 solution under a constant DC voltage of 40 V for 3 h in the second anodizing step. Second, the cobalt nanowire arrays were electrodeposited into the pores of the AAO template. Using a standard double-electrode bath, the Al with the AAO template was used as one electrode and graphite as the other. The electrolyte consisted of 0.3 M CoSO4 and 45 g l−1 boric acid at pH = 3 [14]. AC electrodeposition was conducted at 200 Hz and 12 V for 5 min. Third, the surface of the template was smoothed with dilute nitric acid solution. Then, a layer of Fe70Co30 was sputtered onto one side of the Co nanowire array using a sputtering technique. After that, the Al substrate and the other cobalt nanowire arrays were removed with HgCl2 solution. Finally, a nanobrush containing Co nanowire arrays and an Fe70Co30 layer was nominally obtained. The process of preparing the nanobrush is described in Fig. 1.

Scanning electron microscopy (SEM, Hitachi S4800, Japan) was used to investigate the morphology of the AAO template and the nanobrush. The magnetic properties were measured using a vibrating sample magnetometer (VSM, Lakeshore 7304, USA) at room temperature. Micromagnetic simulations were performed with the three-dimensional (3D) object-oriented micromagnetic framework (OOMMF) method [10]. We simulated a nanobrush consisting of an array of sixteen nanowires, each with a diameter of 20 nm and a length of 400 nm, and an FCC Co nanofilm with a thickness of 12 nm. The unit cell size is 2.5 × 2.5 × 2.5 nm³, which is approximately the same as the exchange length, l_ex ∝ √(2A/(µ0·M_s²)).

Results and Discussion

The morphologies of the AAO templates and the nanobrush are shown in Fig. 2. Figure 2a, b show the typical top view and side view of the AAO template, respectively. The straight nanoholes, with an average diameter of around 50 nm, are evenly spaced, which is favorable for obtaining regular nanowire arrays. It can be seen in Fig. 2c, d that nanowires fill the pores of the AAO template and that parts of the nanowires are exposed after etching the AAO template with NaOH solution. After sputtering the Fe70Co30 layer onto one side of the nanowire array membrane, a magnetic nanobrush with the AAO template can be obtained nominally, as in Fig. 1. It is worth noting that the nanowire arrays are about 3 µm long with a diameter of 50 nm, and the thickness of the nanofilm is below 60 nm; thus, the Co nanowire array is still the main body of the nanobrush. Figure 3a shows the normalized loops of the Co nanowires, of nanobrushes with different Fe70Co30 layer thicknesses, and of the Fe70Co30 nanofilm, with the applied field perpendicular to the plane of the membrane. The coercivity and squareness Sq (M_r/M_s) as a function of the nanofilm thickness are described in Fig. 3b.
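As a reference for the quantities just introduced, the sketch below evaluates the exchange-length formula used to set the simulation cell size, and reads coercivity and squareness off a single hysteresis-loop branch. The material constants for Co and the array layout are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability (T*m/A)

def exchange_length(A, Ms):
    """l_ex = sqrt(2A / (mu0 * Ms^2)) in meters; A in J/m, Ms in A/m."""
    return np.sqrt(2 * A / (MU0 * Ms ** 2))

def loop_metrics(H, M):
    """Coercivity and squareness Sq = Mr/Ms from one ascending branch of a
    hysteresis loop. Assumes H is sorted ascending and M is monotonic on
    this branch (so np.interp is valid in both directions)."""
    Ms = np.max(np.abs(M))
    Mr = np.interp(0.0, H, M)  # remanence: M at H = 0
    Hc = np.interp(0.0, M, H)  # coercivity: H where M crosses zero
    return abs(Hc), abs(Mr) / Ms

# Exchange length with values of the order commonly quoted for Co (assumed):
print(exchange_length(A=1.3e-11, Ms=1.4e6) * 1e9, "nm")  # roughly 3 nm
```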
It is found that the coercivity and Sq are largest for the cobalt nanowire arrays, decrease steadily with increasing Fe70Co30 layer thickness, and are smallest when the thickness reaches 60 nm. The coercivity and Sq of an Fe70Co30 nanofilm with a thickness of 60 nm were also plotted; the film has lower coercivity and Sq than the nanobrushes. This indicates that the static magnetic properties of the nanobrush can be controlled by changing the thickness of a ferromagnetic layer such as the Fe70Co30 layer. On the other hand, Fig. 4a shows the normalized loops of the samples mentioned above with the applied field parallel to the plane of the membrane. The coercivity and Sq as a function of the Fe70Co30 layer thickness are plotted in Fig. 4b. The coercivity of the nanobrushes decreases steadily with increasing Fe70Co30 layer thickness, similar to the results with the applied field perpendicular to the surface of the membrane. Furthermore, the Sq increases steadily with increasing Fe70Co30 layer thickness, and the coercivity and Sq of the nanobrush approach those of the Fe70Co30 film with a thickness of 60 nm. This indicates that the magnetic properties of the nanobrush can be seen as a transition from nanowire to nanofilm.

As is well known, the magnetic moments of an ideal magnetic nanowire lie almost entirely along the long axis of the wire because of its strong shape anisotropy; thus, its easy magnetization direction is parallel to the long axis of the nanowire and its hard magnetization direction is perpendicular to it. For a nanofilm, the magnetic moments lie in the surface of the membrane, which results in an in-plane easy magnetization direction and an out-of-plane hard magnetization direction. To investigate the relationship between the magnetic anisotropy and the Fe70Co30 layer thickness of the nanobrush, the effective anisotropy fields of the nanobrushes were calculated [24] and are listed in Table 1. Table 1 indicates that the nanowire array and the nanobrush with a 20 nm thick Fe70Co30 layer show easy-axis anisotropy, with the easy magnetization direction along the long axis of the nanowires, whereas nanobrushes with thicker Fe70Co30 layers, like the nanofilm, show easy-plane anisotropy and are magnetized easily in the film plane. The transition from the easy-axis type to the easy-plane type occurs as the thickness of the Fe70Co30 layer increases. We also find that the effective easy-axis anisotropy field decreases from the nanowire arrays to Sample A, while the effective easy-plane anisotropy field increases as the Fe70Co30 layer thickness increases. Therefore, adjusting the thickness of the magnetic film is an effective method to control the magnetic anisotropy of a magnetic nanobrush.

To corroborate this result, micromagnetic simulation was applied to study the magnetization reversal processes of the magnetic nanobrush and of the Co nanowire arrays. Figure 5 shows the normalized hysteresis loops of the nanobrush and of the Co nanowire arrays with the applied field parallel to the long axis of the wires; their magnetic moment distributions at an applied field of 2700 Oe, obtained via micromagnetic simulation, are also shown in Fig. 5. The result indicates that the magnetic properties of the nanobrush are determined by the competition in magnetic anisotropy between the magnetic film and the wires. First, the magnetic moments of the nanobrush where the film joins the wires are out of plane and not parallel to the long axis of the wires.
These magnetic moments act as natural nucleation sites in the magnetization reversal process of the nanobrush, which makes the reversal process easier. To clarify the magnetization reversal processes of the magnetic nanobrush, we also simulated the hysteresis loop of the Co nanowire arrays. It is found that the coercivity and Sq of the nanobrush are lower than those of the Co nanowire arrays, which agrees well with the experimental results. As is well known, the magnetic moments of a nanofilm lie in the plane because of its shape anisotropy, and the magnetic moments of a nanowire lie along the long axis of the wire for the same reason. The OOMMF simulation result shows that the direction of the magnetic moment in the film depends on the magnetic moment at the end of the wire that is close to the film: it inclines toward the +Z direction if the magnetic moment of the end part of the wire is parallel to +Z, and toward the −Z direction if it is parallel to −Z. Thus, for two wires with antiparallel magnetic moments linked by the film, the magnetic moments form a continuous U-shaped semicircle, as shown in Fig. 5, which increases the interaction between neighboring wires. Furthermore, we chose point A and point B, corresponding to the magnetic moment configurations of the nanowire array and the nanobrush at an applied field of −2700 Oe in Fig. 5. For the nanowire array, the magnetic moments of all sixteen nanowires align along the +Z direction at point A, whereas in the nanobrush the magnetic moments of eleven nanowires align along the +Z direction and the other five along the −Z direction, as shown at point B. This means that the magnetic moments of five wires reversed in the nanobrush, while none reversed in the nanowire array, at an applied field of −2700 Oe. Figure 5 also demonstrates that the switching fields of the nanobrush and of the nanowire array are 1900 Oe and 2700 Oe, respectively. Thus, the magnetic moments of the nanowire arrays in the nanobrush can be reversed more easily than those of ordinary nanowire arrays. These results also agree with the magnetic measurements, which show that a magnetic layer sputtered onto the nanowire array membrane promotes the magnetization reversal of the nanowire arrays.

Conclusions

Nanobrushes have been synthesized via a bottom-up assembly process. The magnetic hysteresis loops of the nanobrushes show that their magnetic properties depend on the thickness of the Fe70Co30 layer. The magnetic anisotropy of the nanobrush is similar to that of nanowire arrays for a thinner Fe70Co30 layer, and similar to that of a nanofilm for a thicker Fe70Co30 layer. Micromagnetic simulation also proves that the presence of the magnetic nanofilm assists the magnetization reversal of the nanowire arrays. However, the Fe70Co30 nanofilms have no preferred in-plane growth orientation; thus, the magnetic moment of the Fe70Co30 nanofilms is isotropic within the surface of the membrane. We believe that the controllability of the anisotropy of the nanobrush will be enhanced if the magnetic layer has uniaxial in-plane anisotropy; magnetic nanobrushes may then be used as functional devices with tunable magnetic anisotropy.
A branching process model for flow cytometry and budding index measurements in cell synchrony experiments

We present a flexible branching process model for cell population dynamics in synchrony/time-series experiments used to study important cellular processes. Its formulation is constructive, based on an accounting of the unique cohorts in the population as they arise and evolve over time, allowing it to be written in closed form. The model can attribute effects to subsets of the population, providing flexibility not available using the models historically applied to these populations. It provides a tool for in silico synchronization of the population and can be used to deconvolve population-level experimental measurements, such as temporal expression profiles. It also allows for the direct comparison of assay measurements made from multiple experiments. The model can be fit either to budding index or DNA content measurements, or both, and is easily adaptable to new forms of data. The ability to use DNA content data makes the model applicable to almost any organism. We describe the model and illustrate its utility and flexibility in a study of cell cycle progression in the yeast Saccharomyces cerevisiae.

1. Introduction. In this paper we describe a novel branching process model that characterizes the temporal evolution of population heterogeneity in cell synchrony experiments. These experiments are designed to measure the dynamics of fundamental biological processes related to the cell's progression through the cell division cycle. Careful characterization of these dynamic processes requires experiments where quantitative measurements are made over time. In many cases, accurate measurements cannot be made on single cells because the quantitative methods lack the sensitivity to detect small numbers of biomolecules. For example, accurate quantitative measurements of genome-wide transcript levels by microarray require more mRNA than is physically available within a single cell. Thus, researchers are forced to work with populations of cells that have been synchronized to a discrete cell cycle state.

Two distinct problems arise in these synchrony/time-series experiments. First, synchronized populations are never completely synchronous to begin with, and tend to lose synchrony over time. The lack of perfect synchrony at any given time leads to a convolution of the measurements that reflects the distribution of cells over different cell cycle states. Second, multiple synchrony experiments are often needed to measure different aspects of a process, and it is often desirable to compare the temporal dynamics of these aspects. However, synchrony/time-series experiments, even in the best of experimental circumstances, exhibit considerable variability, which makes time-point-to-time-point, cross-experiment comparisons imprecise. Thus, a mechanism is required to accurately align the data collected from each of the synchrony/time-series experiments. The model we describe addresses both of these problems.

Most of the numerous models designed to measure cell population dynamics in synchrony/time-series experiments fall into two related classes: population balance (PB) and branching process (BP) models.
PB models are usually formulated as partial integro-differential equations and are often very difficult to work with except under special conditions [Liou, Srienc and Fredrickson (1997), Sidoli, Mantalaris and Asprey (2004)]. BP models are stochastic models for population dynamics that have been used to study both the asymptotic [Alexandersson (2001)] and short-term behaviors [Larsson et al. (2008), Orlando et al. (2007)] of populations; certain BP models have PB analogues [Arino and Kimmel (1993)]. Several models that do not explicitly account for reproduction, and hence are neither PB nor BP models, have also been used to model data from synchrony experiments [Bar-Joseph et al. (2004), Lu et al. (2004)]. The most critical distinction between models, however, is in the sources of synchrony loss the model includes. Most describe synchrony loss as the result of a single parameter, equivalent to a distribution over division times [Bar-Joseph et al. (2004), Chiorino et al. (2001), Larsson et al. (2008)]. In contrast, the model we describe here (the CLOCCS model, in reference to its ability to Characterize Loss of Cell Cycle Synchrony [Orlando et al. (2007)]) is the only model to account for variability in cell-division time, initial asynchrony in the starting population, and variability due to asymmetric cell division [Chiorino et al. (2001)], all of which we will show to be important.

The CLOCCS model is based on a novel branching process construction and can be written in closed form. Its formulation is constructive, based on an accounting of the unique cohorts in the population at any given time. Hence, the model can attribute one-time effects to specific subsets of the population, demonstrating flexibility not available using the PB and BP models historically applied to these populations. Further, the model's construction allows full Bayesian inference without the use of approximations to the likelihood. The Bayesian approach to inference has the additional advantage that it sidesteps many of the difficulties encountered by frequentist inference for BP models [Guttorp (2001)].

In this paper we present a model which can utilize two forms of data that provide information regarding the cell cycle position of Saccharomyces cerevisiae, baker's yeast: DNA content data and budding index data. An overview of the yeast cell cycle and these data types can be found in Section 2. While applied here to yeast, the ability to fit DNA content data, described in Section 3, is a critical advance that allows the CLOCCS model to be applied to an array of more complex organisms that do not undergo the kinds of morphological changes that yeast do (e.g., budding) during the cell division cycle. In Section 4 we apply the model to fit budding index and DNA content data from a synchrony/time-series experiment in yeast. Using these data, we compare the model to a collection of nested alternative parameterizations with subsets of the novel asynchrony sources removed. We conclude with a discussion of the model and of the results of this analysis in Section 5.

Fig. 1. Over the course of its life, the cell repeatedly traverses the cell cycle, which is divided by landmark events associated with asexual reproduction into the G1, S and G2/M phases. In the figure, this corresponds to the cell in light gray traveling around the circle in A or from left to right in B. At each completion of G2/M it spawns a daughter cell.
This process begins with the development of a bud (dark gray) and the start of DNA replication (denoted by the appearance of a second red bar) and is completed when the daughter cell (dark gray) separates from the mother cell at the end of G2/M with a full complement of DNA.

2. Yeast cell cycle. One organism commonly studied using synchrony/time-series experiments is the common baker's yeast, S. cerevisiae, because many features of its cell cycle are well characterized. Figure 1A depicts the landmark events that can be used to determine the cell cycle state of individual cells [Gordon and Elliott (1977), Hartwell (1974)]. The first, bud emergence, is a distinct morphological landmark easily detected by simple light microscopy. It first appears near the time that a cell transitions from G1 into S phase. Cells become unbudded after the completion of mitosis (M), when the cell and its bud separate. We refer to the progenitor cell as the "mother" and what had been the bud as the "daughter." In S. cerevisiae, this division is often asymmetric: the mother cell is often larger and progresses more quickly through the cell cycle than the daughter [Hartwell and Unger (1977)].

Cell cycle position can also be determined by measuring the genomic DNA content of the cell, which increases as cells progress through the S phase of the cell cycle [Haase and Reed (2002)]. Haploid yeast cells begin the cell cycle with one copy of genomic DNA (red bar in Figure 1). During the S phase, DNA is replicated such that, at the completion of the S phase, the cell has two copies of genomic DNA. Counts of budded cells and cell-level DNA content are typically measured in independent samples, drawn at regular time points after the population's release from synchrony. The resulting time series of budded cell counts is referred to as a budding index. DNA content is measured by flow cytometry. Budding index and DNA content data can be used to fit accurate models of the underlying cell cycle position distributions.

3. Model. The model we describe comprises two components: an underlying model for the population dynamics of the cells in a synchrony/time-series experiment, and independent sampling models for the budding index and DNA content measurements made on samples drawn from the population. We refer to the population dynamics model component as CLOCCS. CLOCCS is a branching process model for the position, P_t, of a randomly sampled cell in a linearized version of the cell cycle (Figure 1B), which we refer to as a cell cycle lifeline, given the experimental time, t, at which the cell was sampled. The sampling models for the budding index and DNA content measurements are conditioned on the distribution of lifeline position and time. In what follows, we describe the model's components in greater detail.

3.1. Model for position given time. The CLOCCS model specifies the distribution of cell positions over an abstract cell cycle lifeline as a function of time. We define λ to be the amount of time, in minutes, required by a typical mother cell to undergo one full cell cycle. We divide the lifeline into λ units; thus, the average cell will move one lifeline unit per minute. The advantage of using a lifeline characterization is that it allows for the introduction of one-time effects, such as the recovery period following release from synchrony or the delay in cell cycle progression of new daughter cells.
We model position as having three independent sources of variability: the velocity with which the cell traverses the cell cycle, the time it spends recovering from the synchronization procedure, and the additional time spent by a daughter cell as it traverses its first cell cycle [Hartwell and Unger (1977)]. It is well known that cells in synchrony experiments progress through the cell cycle with varying speeds. We assume that each cell moves at a constant velocity along the lifeline, and that this velocity is random, following a normal distribution. While this is technically inappropriate, as velocities must be positive, in practice it is reasonable: fitted distributions give almost no mass to the negative half line. We measure velocity, $V$, in lifeline units per minute; by definition, the mean cell velocity is 1.0. The velocity distribution's variance, $\sigma_v^2$, is unknown. When released from synchrony, cells spend more time in their first G1 phase than they spend in G1 during subsequent cell cycles. The added time reflects a period of recovery from the synchronization process, whose length varies from cell to cell. We term this recovery period Gr as if it were a distinct cell cycle phase. We model this effect as a random offset, $P_0$, in the starting position on the lifeline. While this offset should be strictly positive, we let $P_0$ be distributed $\mathrm{N}(\mu_0, \sigma_0^2)$ for convenience. Later, we comment further on this choice. Daughter cells tend to be smaller and require additional time in G1 before they begin to divide. We term this daughter-specific period of growth Gd and model it by introducing a fixed offset, $\delta$, to the cell's lifeline position. With each wave of division, the population expands in size. If cells in the culture remained synchronous, the population would branch and double in size every $\lambda$ minutes after an initial delay of $\mu_0$ minutes. Because they do not, the dynamics of this expansion is more complex: at any point in time, the population may represent a number of distinct cohorts, each defined by its lineage. Cohorts are determined by $g$, their "generation" (the number of daughter stages in their lineage), and $r$, their "reproductive instance" (the wave of division that gave rise to the cohort). Figure 2A depicts the branching dynamics of this process and a snapshot in time projected onto a common lifeline (Figure 2B). In A, four distinct time periods are color coded, with each cohort distribution labeled with its $\{g, r\}$ index. At time zero there is a single cohort, $\{0, 0\}$, depicted in black, whose position distribution is located in Gr and centered at $-\mu_0$. As time passes (red), this cohort enters its second cell cycle and spawns a daughter cohort, labeled $\{1, 1\}$, which begins on its own lifeline in Gd. Later (blue), cohort $\{0, 0\}$ gives rise at its second reproductive instance to another first generation cohort, $\{1, 2\}$. At the same time, cohort $\{1, 1\}$ cells are progressing through G2/M. At the last depicted time point (green), the population comprises four distinct cohorts, representing three generations of cells arising at three distinct reproductive instances. Figure 2B is a plot of the population at this time point on the common lifeline. The CLOCCS model is a distribution over position along this common lifeline as a function of time. In what follows we use a description of the behavior of individual cells as a device for deriving population-level cohort position distributions.
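To make the position model concrete, the following is a minimal simulation sketch of lifeline positions for the progenitor cohort {0,0}: each cell starts at a random offset -P0 in Gr and moves at its own constant, normally distributed velocity. The parameter values used here (mu0 = 30, sigma0 = 10, sigma_v = 0.09) are illustrative placeholders, not estimates reported in the paper.

```python
# Illustrative simulation of lifeline positions for the progenitor
# cohort {0,0}: each cell starts at a random offset -P0 in Gr and
# moves at its own constant velocity V (mean 1 lifeline unit/minute).
import numpy as np

rng = np.random.default_rng(0)

def simulate_progenitor_positions(t, n_cells=10_000,
                                  mu0=30.0, sigma0=10.0, sigma_v=0.09):
    """Sample positions P_t = V*t - P0 for cohort {0,0} at time t (minutes).

    mu0, sigma0 and sigma_v are illustrative values, not fitted estimates.
    """
    p0 = rng.normal(mu0, sigma0, n_cells)   # recovery offset (Gr)
    v = rng.normal(1.0, sigma_v, n_cells)   # per-cell velocity
    return v * t - p0

positions = simulate_progenitor_positions(t=120.0)
# Empirical mean ~ t - mu0; empirical sd ~ sqrt(sigma0^2 + t^2 * sigma_v^2).
print(positions.mean(), positions.std())
```

The empirical mean and standard deviation printed by this sketch match the normal position distribution derived for cohort {0,0} below.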
Each such distribution is normal, with parameters that depend on the starting position and velocity distributions, time $t$ and the cohort's indices $g$ and $r$. Since cells in the $\{0, 0\}$ cohort are unaffected by the daughter-specific delay, $\delta$, their positions, $P_t$, at time $t$ are determined only by their starting positions, $P_0$, and their velocity, $V$. For these cells,
$$P_t = Vt - P_0 \sim \mathrm{N}\!\left(-\mu_0 + t,\ \sigma_0^2 + t^2\sigma_v^2\right).$$
In contrast, cells in cohorts at generations greater than zero have their position distributions truncated at the beginning of Gd, $-\delta$ on the lifeline, and are set back by $g$ daughter offsets of length $\delta$ and $r$ cell cycle offsets of length $\lambda$. The remaining contributions to such a cell's position are the velocity-by-time contributions of each of its ancestors and the initial position of its ancestor in cohort $\{0, 0\}$. For simplicity, we assume that daughter cells inherit their mother cell's velocity. With this, the velocity-by-time contribution to position simplifies to $Vt$, where $V$ is the common velocity and $t$ is the total time since population release. For these cells,
$$P_t = Vt - P_0 - r\lambda - g\delta, \qquad P_t \ge -\delta.$$
Thus, we write the model for position, $P_t$, given time, $t$, in closed form by enumerating the population's cohorts using the latent variables $r$ and $g$. In particular,
$$p(P_t \mid \Theta, t) = \sum_{\{g,r\} \in C} p(g, r \mid \Theta, t)\, p(P_t \mid \Theta, g, r, t), \tag{1}$$
where $\Theta = (\mu_0, \sigma_0^2, \sigma_v^2, \lambda, \delta)$ and where the sum is over possible cohorts, $C = \{\{g, r\} : (g = 0 \wedge r = 0) \vee (0 < g \le r \le R)\}$. While the number of cohorts represented in the population could theoretically be large, in practice their number is limited by the number of cell cycles that cohort $\{0, 0\}$ is able to undergo during the experimental period. In most cases, synchrony experiments are terminated after 2 or 3 cycles, so choosing $R = 4$, 5 or 6 is usually sufficient. For notational clarity, we let $C$ depend implicitly on the sufficient number of cell cycles examined. The marginal probability of drawing a representative of cohort $\{g, r\}$ from the population at time $t$ is $p(g, r \mid \Theta, t)$. For example, in the scenario depicted in Figure 2B, $p(1, 1 \mid \Theta, t)$ is the ratio of the mass under the cohort $\{1, 1\}$ density to the total mass under all of the cohort densities present on the lifeline. The mass under the cohort $\{1, 1\}$ density is the probability that a randomly drawn member of the $\{0, 0\}$ cohort has completed its first cell cycle and contributed a daughter cell to cohort $\{1, 1\}$; this is the probability that the position of a $\{0, 0\}$ cell exceeds $\lambda$. The mass under the cohort $\{2, 2\}$ density is the probability that a randomly drawn member of the $\{1, 1\}$ cohort has finished its first cell cycle; this, in turn, is the probability that a randomly chosen member of the $\{0, 0\}$ cohort has traveled $\delta$ units into its third cell cycle. The $\delta$ appears because the $\{1, 1\}$ cohort's progress through its first cell cycle is $\delta$ units longer than the $\{0, 0\}$ cohort's progress through its second cell cycle. In this way, the relative contribution of any cohort in the population can be determined by calculating the probability that the position of a randomly drawn member of the $\{0, 0\}$ cohort is past a threshold position that is a function of $g$ and $r$. Let $M_\Theta(g, r, t)$ denote the mass under cohort $\{g, r\}$'s position distribution at time $t$; for $0 < g \le r$,
$$M_\Theta(g, r, t) = \binom{r-1}{g-1}\, \Phi\!\left(\frac{-\mu_0 + t - r\lambda - (g-1)\delta}{\sqrt{\sigma_0^2 + t^2\sigma_v^2}}\right),$$
where $\Phi(\cdot)$ denotes the standard normal CDF. The combinatoric term arises from the fact that, for $r \ge g \ge 1$, multiple lineages may contribute members to a given cohort.
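The cohort-mass calculation can be written directly in code. The sketch below assumes the threshold form $r\lambda + (g-1)\delta$ derived from the {1,1} and {2,2} examples above; the parameter values passed at the bottom are illustrative placeholders.

```python
# Sketch of the cohort-mass calculation: the mass of cohort {g,r} is the
# probability that a cell from cohort {0,0} has passed the threshold
# r*lam + (g-1)*delta, weighted by the number of contributing lineages.
import math
from scipy.stats import norm

def cohort_mass(g, r, t, mu0, sigma0, sigma_v, lam, delta):
    if g == 0 and r == 0:
        return 1.0                       # the progenitor cohort is always present
    mean = -mu0 + t                      # position mean of a {0,0} cell at time t
    sd = math.sqrt(sigma0**2 + (t * sigma_v)**2)
    threshold = r * lam + (g - 1) * delta
    lineages = math.comb(r - 1, g - 1)   # number of lineages feeding cohort {g,r}
    return lineages * norm.sf(threshold, loc=mean, scale=sd)

def cohort_weights(t, theta, R=5):
    """Marginal cohort probabilities p(g,r | Theta, t) = M/Q."""
    mu0, sigma0, sigma_v, lam, delta = theta
    cohorts = [(0, 0)] + [(g, r) for r in range(1, R + 1) for g in range(1, r + 1)]
    masses = {c: cohort_mass(*c, t, mu0, sigma0, sigma_v, lam, delta) for c in cohorts}
    total = sum(masses.values())         # Q_Theta(t)
    return {c: m / total for c, m in masses.items()}

print(cohort_weights(t=150.0, theta=(30.0, 10.0, 0.09, 78.2, 25.0)))
```

The normalization by the total mass implements the ratio described in the next paragraph.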
For example, cohort $\{1, 1\}$ will contribute to cohort $\{2, 3\}$ as its members pass the point $2\lambda$ on its lifeline (rightmost point on the third branch from top in Figure 2A), while cohort $\{1, 2\}$ will contribute to the same cohort, $\{2, 3\}$, as its members pass the point $\lambda$ on its lifeline (rightmost point on the second branch from top in Figure 2A). Finally, let $Q_\Theta(t)$ denote the mass under all cohort distributions in the population at that time,
$$Q_\Theta(t) = \sum_{\{g,r\} \in C} M_\Theta(g, r, t).$$
In general, $p(g, r \mid \Theta, t) = M_\Theta(g, r, t)/Q_\Theta(t)$.

3.2. Sampling models. To utilize the CLOCCS model, it is necessary to relate distributions over the artificial cell cycle lifeline to observable cell features. In the next two sections we present two sampling models which allow CLOCCS to utilize commonly collected landmark data, namely budding index and DNA content data. While time series of budding index and DNA content data are each sufficient to estimate the CLOCCS parameters, $\Theta$, they provide complementary information on the cell cycle timing of distinct landmark events. Timing of these events is of independent interest, and estimates of these quantities may improve the utility of the model as a tool for deconvolution of transcription data and other types of downstream analysis.

3.3. Sampling model for budding index data. Presence or absence of a bud is an easily measured landmark tied to a cell's progression through the cell cycle (see Figure 1). Buds emerge and become detectable near the transition between G1 and S phases, at a fraction $\beta$ of the way through the normal cell cycle, and split off as daughter cells at cell cycle completion (Figure 3, dashed line). Assume that budding index samples are drawn at $T$ time points, $t_i$, $i = 1, \ldots, T$, and that $n_i$ cells are counted at time $t_i$. Let $b_{ji} = 1$ if the $j$th cell at time $t_i$ is budded and $b_{ji} = 0$ otherwise. The event that $b_{ji} = 1$ implies that the position of the $j$th cell at time $t_i$, $P_{ji}$, falls into the lifeline interval $((c + \beta)\lambda, (c + 1)\lambda]$ for some cell cycle $c \ge 0$; the probability of this is dictated by the CLOCCS model. Following the development of Section 3.1, we calculate $p(b_{ji} = 1 \mid \beta, \Theta, t_i)$ by introducing cohorts and marginalizing over them. In particular, let
$$p(b_{ji} = 1 \mid \beta, \Theta, t_i) = \sum_{\{g,r\} \in C} p(g, r \mid \Theta, t_i)\, p(b_{ji} = 1 \mid \beta, \Theta, g, r, t_i),$$
where $p(b_{ji} = 1 \mid \beta, \Theta, g, r, t_i)$ is the probability that a cell randomly sampled from cohort $\{g, r\}$ is budded at time $t_i$. For the progenitor cohort, $\{0, 0\}$, this is the probability, under the normal position distribution of Section 3.1, that the cell's position falls in one of the budded intervals $((c + \beta)\lambda, (c + 1)\lambda]$; for subsequent cohorts, $0 < g \le r$, the analogous probability is computed under the cohort's truncated, offset position distribution. We model bud presence as a Bernoulli random variable with success probability $p(b_{ji} = 1 \mid \beta, \Theta, t_i)$ and assume that samples drawn at the various time periods are independent conditional on the CLOCCS model.

Fig. 3. Plot of expected flow cytometry channel for a cell given its lifeline position in units of $\lambda$ (black curve, left vertical axis). An indicator function for the cell's budding status is also plotted (grey dashed curve, right vertical axis).

3.4. Sampling model for DNA content data. DNA content data measured by flow cytometry provides an ordinal measurement of the DNA content of each cell in a sample: each cell appears in one of 1024 ordered channels on the basis of its fluorescence, which is proportional to its DNA content [Pierrez and Ronot (1992)]. In practice, channel number is often $\log_2$ transformed and treated as a continuous measurement. Adapting the CLOCCS model to DNA content data requires that we annotate the lifeline with the positions, measured as fractions of cell cycle length, at which S phase begins and ends. We denote these locations $\gamma_1$ and $\gamma_2$, respectively.
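Returning to the budding model of Section 3.3, the marginal budded probability can be sketched as below. For simplicity the sketch treats every cohort's position as an untruncated normal (the paper truncates cohorts with $g > 0$ at $-\delta$, which matters little here since the budded intervals lie well above that point), and it reuses cohort_weights from the sketch above.

```python
# Sketch of the budding-index sampling model (Section 3.3): the probability
# that a sampled cell is budded, marginalized over cohorts. This simplified
# version uses untruncated normal position distributions for all cohorts.
import math
from scipy.stats import norm

def budded_prob_cohort(g, r, t, beta, theta, n_cycles=8):
    mu0, sigma0, sigma_v, lam, delta = theta
    mean = -mu0 + t - r * lam - g * delta
    sd = math.sqrt(sigma0**2 + (t * sigma_v)**2)
    # P(position in ((c + beta)*lam, (c + 1)*lam] for some cycle c >= 0)
    return sum(norm.cdf((c + 1) * lam, mean, sd) - norm.cdf((c + beta) * lam, mean, sd)
               for c in range(n_cycles))

def budded_prob(t, beta, theta, R=5):
    weights = cohort_weights(t, theta, R)   # from the cohort-mass sketch above
    return sum(w * budded_prob_cohort(g, r, t, beta, theta)
               for (g, r), w in weights.items())

print(budded_prob(t=120.0, beta=0.12, theta=(30.0, 10.0, 0.09, 78.2, 25.0)))
```

Evaluating budded_prob over a grid of times traces out a fitted budding index curve of the kind shown later in Figure 4A.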
As the population loses synchrony, the distribution of cells over channels will typically be bimodal, with one mode corresponding to cells in G1 (centered at $\alpha_1$) and another corresponding to G2/M (centered at $\alpha_1 + \alpha_2$). Cells transiting the S phase will fall between these points in expectation. Further, we assume that DNA content increases linearly over the course of the S phase. In particular, the expected DNA content of a cell in cycle $c$ at lifeline position $P_t$ is
$$\mathrm{E}[f \mid P_t] = \begin{cases} \alpha_{1t}, & c\lambda \le P_t < (c + \gamma_1)\lambda,\\ \omega_{0t}(c) + \omega_{1t} P_t, & (c + \gamma_1)\lambda \le P_t < (c + \gamma_2)\lambda,\\ \alpha_{1t} + \alpha_{2t}, & (c + \gamma_2)\lambda \le P_t < (c + 1)\lambda, \end{cases} \tag{2}$$
where $\omega_{1t} = \frac{\alpha_{2t}}{\lambda(\gamma_2 - \gamma_1)}$ and $\omega_{0t}(c) = \frac{\alpha_{1t}(\gamma_2 - \gamma_1) - \alpha_{2t}(\gamma_1 + c)}{\gamma_2 - \gamma_1}$. The black line in Figure 3 is a plot of this curve. Measurement of DNA content by flow cytometry is imprecise. Machine noise, variation in the cell's orientation to the laser beam and variation in the performance of the fluorescent stain each contribute to measurement error [Pierrez and Ronot (1992)]. Hence, a flow cytometry measurement made on a sample of cells drawn at a particular time point will be a sample from the convolution of a noise distribution and the CLOCCS position distribution. In particular,
$$p(f_{ji} \mid \Psi, \Theta, t_i) = \int p(f_{ji} \mid P_t, \Psi, \Theta, t_i)\, p(P_t \mid \Theta, t_i)\, dP_t, \tag{3}$$
where $f_{ji}$ denotes the log fluorescence intensity of cell $j$ at time $t_i$ and where $\Psi$ denotes the vector of parameters in the model for $f_{ji}$ not in $\Theta$. From the above it follows that $p(f_{ji} \mid P_t, \Psi, \Theta, t)$ can be modeled as a normal with mean given in equation (2) and variance $\tau_t^2$. The log normal distribution is a common choice in this setting [Gray and Dean (1980)]. Additionally, the noise characteristics of the flow cytometer typically vary from one sample to the next, causing the locations of the G1 and G2/M modes, as well as the level of machine noise ($\tau$), to vary. Hence, we allow the parameters of the DNA content sampling distribution, $p(f_{ji} \mid P_t, \Psi, \Theta, t)$, to vary across time periods. Note that equation (3) can be written as a sum over cohorts and cell cycles of terms, each of which is a convolution of two normals, one of which is truncated. Let $l_{gr}$ denote the left limit of the support of cohort $\{g, r\}$'s position distribution, where $l_{gr} = -\infty$ if $g = r = 0$ and $l_{gr} = -\delta$ otherwise. Further, let $G_{grt}(x)$ denote the normal cumulative distribution function with mean $-\mu_0 + t - r\lambda - g\delta$ and variance $\sigma_0^2 + t^2\sigma_v^2$ evaluated at $x$, and let $S_{cgrt}(x)$ denote a second normal cumulative distribution function, with mean and variance determined by $c$, $g$, $r$, $t$ and the S-phase transformation in equation (2), with $\phi(\cdot)$ the standard normal density function. In the resulting expression, terms of the form $G_{grt}((c + \gamma_1)\lambda) - G_{grt}(c\lambda)$ give the mass of cohort $\{g, r\}$ lying in cycle $c$'s G1 interval; the first set of terms corresponds to cells in G1, the second to cells in G2 or M, and the third to cells in S. We assume that cell-level DNA content measurements are conditionally independent within and between samples drawn at the various time periods conditional on the CLOCCS model, $\Psi$ and the sampling times. DNA content and budding index measurements are made on separate samples drawn from a population's culture, sometimes at the same points in time, sometimes not. Because they are distinct samples, we model the DNA content and budding index data as conditionally independent given the CLOCCS parameters $\Theta$, the budding parameter $\beta$, the DNA content parameters $\Psi$ and the sampling times.
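A short sketch of the piecewise expected-fluorescence curve of equation (2) follows; the alpha values are illustrative placeholders for a single time point, not fitted estimates.

```python
# Sketch of the expected log-fluorescence curve (equation (2)): flat at the
# G1 level, rising linearly across the S phase, flat at the G2/M level.
def expected_fluorescence(p, lam=78.2, gamma1=0.10, gamma2=0.35,
                          alpha1=9.0, alpha2=1.0):
    """Expected log2 channel for a cell at lifeline position p >= 0."""
    c, frac = divmod(p / lam, 1.0)       # cell cycle index and fraction within it
    if frac < gamma1:                    # G1: one copy of genomic DNA
        return alpha1
    if frac < gamma2:                    # S: linear accumulation of DNA
        return alpha1 + alpha2 * (frac - gamma1) / (gamma2 - gamma1)
    return alpha1 + alpha2               # G2/M: two copies

for p in [5.0, 12.0, 20.0, 60.0, 90.0]:
    print(p, expected_fluorescence(p))
```

Algebraically this is equivalent to the form with intercept $\omega_{0t}(c)$ and slope $\omega_{1t}$ given above, written in terms of the within-cycle fraction $p/\lambda - c$.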
3.5. Prior distribution. What follows is a description of, and justification for, the prior choices used in our analysis. Columns 2 and 3 of Table 1 tabulate prior expected values and 95% equal-tailed intervals for each parameter as implied by these choices. Lord and Wheals (1983) estimate S. cerevisiae cell cycle length in culture at 30°C (the temperature employed by our lab) to be 78.2 minutes with a standard deviation of 9.1 minutes. To allow for differences in experimental protocol, we place a normal prior with mean 78.2 and standard deviation 18.2 on cell cycle length, $\lambda$. In S. cerevisiae, the duration of the S phase, $(\gamma_2 - \gamma_1)\lambda$, is about one quarter of the cell cycle; it begins a short time before buds can be visually detected and continues until mother and daughter cells separate [Vanoni, Vai and Frascotti (1984)]. Based on an analysis of 30 DNA content measurements made on an asynchronous population, conducted using the same protocol as used in the synchrony experiment described in the next section, we estimate that $\gamma_1$ is approximately 0.1 and that $\beta$ is approximately 0.12. Hence, we expect $\gamma_1 < \beta < \gamma_2$. With this in mind, we let $\gamma_1 \sim \mathrm{Beta}(2, 18)$, $\beta \sim \mathrm{Beta}(2.4, 17.6)$ and $\gamma_2 \sim \mathrm{Beta}(7, 13)$, constrained as above. Bar-Joseph et al. (2004) estimate the standard deviation of the velocity distribution in S. cerevisiae to be 0.09 and observe a range of values from 0.07 to 0.11 across 3 experiments. For this reason, we place an independent inverse-gamma(12, 1) prior distribution on $\sigma_v$. Aspects of experimental protocol, most notably the method used to synchronize the population, have a strong influence on the parameters of the starting position distribution and on the duration of the daughter-specific offset, $\delta$. Centrifugal elutriation, the method used in the experiment we describe in the next section, selects for small unbudded cells, while other methods, such as α-factor arrest, do not. Because of their size, elutriated cells tend to spend more time in Gr, and their daughters spend more time in Gd, than their counterparts in α-factor experiments [Hartwell and Unger (1977)]. We have chosen to specify our prior distributions on these parameters to accommodate, rather than condition on, this source of protocol-dependent uncertainty. In particular, we place an inverse-gamma distribution with shape parameter 2 and mean 78.2/3 on $\sigma_0$ and the minimally informative exponential, mean 78.2, prior distribution on $\mu_0$. The former reflects our belief that almost all cells will be in Gr at release; the latter places highest prior likelihood on a short Gr, as is expected in an α-factor experiment, but allows for the longer Gr that is expected in elutriation experiments. Similar reasoning was behind our choice of an exponential, mean 55, prior distribution on $\delta$: in α-factor experiments, $\delta$ can be very brief, while in elutriation experiments it can exceed 40% of the length of a typical cell cycle [Hartwell and Unger (1977), Lord and Wheals (1983)].
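These priors can be sampled directly. The sketch below uses the stated hyperparameters and enforces the ordering constraint gamma1 < beta < gamma2 by simple rejection; note that scipy's inverse-gamma is parameterized so that invgamma(a, scale=s) has mean s/(a-1), which reproduces the stated prior means.

```python
# Sketch of a joint draw from the Section 3.5 priors.
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(1)

def draw_prior():
    lam = rng.normal(78.2, 18.2)              # cell cycle length (minutes)
    sigma_v = invgamma(12, scale=1.0).rvs(random_state=rng)     # mean 1/11 ~ 0.09
    sigma_0 = invgamma(2, scale=78.2 / 3).rvs(random_state=rng) # mean 78.2/3
    mu_0 = rng.exponential(78.2)              # recovery-offset location (Gr)
    delta = rng.exponential(55.0)             # daughter-specific offset (Gd)
    while True:                               # enforce gamma1 < beta < gamma2
        gamma1 = rng.beta(2, 18)
        beta = rng.beta(2.4, 17.6)
        gamma2 = rng.beta(7, 13)
        if gamma1 < beta < gamma2:
            return dict(lam=lam, sigma_v=sigma_v, sigma_0=sigma_0,
                        mu_0=mu_0, delta=delta,
                        gamma1=gamma1, beta=beta, gamma2=gamma2)

print(draw_prior())
```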
4. Analysis. In what follows, we utilize the model to analyze budding index and DNA content data from a cell cycle synchrony experiment in S. cerevisiae using cells synchronized by centrifugal elutriation and cultured at 30°C. Details of the strain and growth conditions used can be found in Orlando et al. (2007). After synchronization, 32 samples were collected at 8-minute intervals starting 30 minutes after release. Two aliquots were taken from each sample, one for each type of measurement. Budding index was measured by microscopically assessing at least 200 cells for the presence of a bud and recording the number of budded and unbudded cells observed. The relative DNA content of 10,000 cells in each sample was measured by flow cytometry as described previously [Haase and Reed (2002)]. The observed fluorescence values for each measured cell in each sample were $\log_2$ transformed prior to analysis. The DNA content measurement of the 38 minute sample was not available due to a technical problem encountered during preparation of that sample. We compare parameter estimates given both the budding index and DNA content data, given the DNA content data alone and given the budding index data alone. In addition, using only the budding index data, we estimate Bayes factors comparing the full CLOCCS model to submodels obtained by systematically removing each novel source of asynchrony, $\delta$, $\mu_0$ and $\sigma_0$, separately and in combination.

4.1. Estimates given the experimental data. We use a random walk Metropolis [Gilks, Richardson and Spiegelhalter (1996), Metropolis et al. (1953)] algorithm for each model fit. In each case, the algorithm was tuned to mix well and the chain was given a lengthy burn-in period. Subsequent to this, we ran the chain for 400,000 iterations and saved every fourth for inference. Plots of sampled values appear stationary, and the Raftery and Lewis diagnostic [Raftery and Lewis (1996)], implemented in the R package CODA, indicates that the sample is sufficient to estimate the 0.025 quantile of any marginal posterior to within 0.01 with probability 0.95. All coefficients and associated interval estimates are based on summary statistics of marginal sample distributions. We tested our implementation of the model and the Markov chain Monte Carlo sampler by analyzing simulated data sets. Parameter estimates derived from these analyses were consistent with their true values. Table 1 provides marginal summaries of the prior (columns 1 and 2) and of the posterior distributions after fitting the model to both the DNA content and budding index data (columns 3 and 4), to the DNA content data only (columns 5 and 6) and to the budding index data only (columns 7 and 8). Note that point and interval estimates of common parameters derived using both the budding index and DNA content data are very close to their counterparts fit only to the DNA content data. This is not surprising given the information-rich nature of the DNA content data: at each time period approximately 10,000 cells are assayed for DNA content, while only approximately 200 are assayed for presence of a bud. On average, point estimates of the common parameters differ by less than 1% and the associated posterior interval estimates are only about 2% narrower when the budding index data is added. The parameter $\beta$ can only be estimated with budding index data, but it is estimated more accurately when DNA content data is included, owing to the fact that it is constrained by $\gamma_1$ and $\gamma_2$. Figure 4A is a plot of the observed budding index curve (black) overlaid with 95% pointwise interval estimates from the analysis of only the budding index data (green) and of both the budding index and DNA content data (red). The latter analysis estimates the recovery period (Gr) to be slightly shorter and more variable, and estimates cell cycle length to be longer and less variable, than estimated with the budding index data alone. This is evident in the red confidence bands positioned to the left of the green between 70 and 100 minutes and to the right of the green between 190 and 225 minutes experimental time. Note that both curves increase more smoothly and sooner than the observed budding index following recovery from synchronization. This is likely due to our choice of the normal distribution to characterize time spent in Gr. It appears that a left-skewed distribution may give a better fit to this feature in the data.
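The sampler referred to at the start of this section is a standard random-walk Metropolis; a minimal sketch follows. Here log_posterior is a stand-in: in a real fit it would combine the Section 3.5 priors with the budding and/or DNA content likelihoods.

```python
# Minimal random-walk Metropolis sketch of the kind described above.
import numpy as np

def metropolis(log_posterior, theta0, step_sd, n_iter=400_000, thin=4, seed=2):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta)
    samples = []
    for i in range(n_iter):
        proposal = theta + rng.normal(0.0, step_sd, size=theta.shape)
        lp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = proposal, lp_prop
        if i % thin == 0:                          # save every fourth draw
            samples.append(theta.copy())
    return np.array(samples)
```

Tuning amounts to choosing step_sd so that the chain mixes well, as described above; burn-in draws would be discarded before summarizing the saved samples.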
Fig. 4. Estimated budding and DNA content curves accurately reflect complex, biologically relevant patterns in the data. A: plot of observed budding index curve (black) and 95% pointwise interval estimates from the budding index only analysis (green) and the budding index/DNA content analysis (red). B-F: DNA content densities (gray) and their posterior mean estimates (red) at five points in time, highlighting the population's transition from G1 (B) through the S phase to G2/M (C and D) and the effect of its growing asynchrony (E and F). The corresponding time points are labeled above the budding index curve. In all cases, the G1 and G2/M modes are accurately scaled and located, as is the shape of the distributions between the modes.

The fitted DNA content densities in Figure 4B-F were obtained by evaluating the posterior mean sampling distributions and transforming them, via the change of variables formula, to the $\log_2$ scale. The estimates are extremely good: in all cases, the G1 and G2/M modes are accurately scaled and located and capture the shape of the distributions between the modes, suggesting that the model is accurately accounting for the cells transiting the S phase.

4.2. Model evaluation. In what follows, we estimate Bayes factors (BFs) [Kass and Raftery (1995)] for a series of pairs of models nested under the fully parametrized CLOCCS model using importance sampling. These quantities allow us to measure the weight of evidence in the budding index data in favor of alternate parametrizations of the model, including variants that drop the daughter offset and/or one or both parameters of the starting position distribution. The hierarchy of models we examine is not complete but accounts for all reasonable alternatives to the full model. The simplest model, where we set $\mu_0 = 0$, $\sigma_0^2 = 0$ and $\delta = 0$, corresponds to a branching process version of the Bar-Joseph et al. (2004) model. We employed a separate sampler to estimate each marginal likelihood and used 100 degrees-of-freedom multivariate t densities as the importance densities, each with mean and covariance matrix matching those estimated from a Markov chain Monte Carlo analysis of the associated model. For purposes of this calculation, we used only the budding index data to inform the model and drew 10,000 importance samples for each calculation. The variance of the normalized weights was less than 1.45 in all cases. Hence, the effective sample size [Liu (2001)] for estimating the marginal likelihood was never smaller than 4000. Table 2 reports estimates of $\log_e$ Bayes factors (lBFs) for various nested model comparisons given the budding index data. In these tables, the model indexed by an entry's column is the larger of the models and is represented in the numerator of the lBFs in that column; the model indexed by an entry's row is the smaller of the two. As a guide to interpreting these numbers, Kass and Raftery (1995) classify lBFs between 0 and 1 as "not worth more than a bare mention," those from 1 to 3 "positive," those from 3 to 5 "strong" and those greater than 5 "very strong." Using this scale as a guide, the full CLOCCS model is very strongly preferred to all alternatives, including the model of Bar-Joseph et al. (2004). The worst alternative sets only $\mu_0 = 0$. When $\mu_0$ is constrained to be zero, better fits to the data are achieved by setting one or the other, or preferably both, of $\delta$ and $\sigma_0^2$ to zero.
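The marginal-likelihood estimates underlying these Bayes factors can be sketched as follows; an lBF is then the difference of two such log marginal likelihoods. The multivariate-t proposal mirrors the description above, and log_joint (log prior plus log likelihood) is a stand-in for the model being evaluated.

```python
# Sketch of the importance-sampling marginal likelihood estimate, with the
# effective sample size of the normalized weights as a quality check.
import numpy as np
from scipy.stats import multivariate_t

def log_marginal_likelihood(log_joint, mcmc_samples, df=100, n_draws=10_000, seed=3):
    mean = mcmc_samples.mean(axis=0)
    cov = np.cov(mcmc_samples, rowvar=False)
    proposal = multivariate_t(loc=mean, shape=cov, df=df)
    draws = proposal.rvs(size=n_draws, random_state=seed)
    log_w = np.array([log_joint(th) for th in draws]) - proposal.logpdf(draws)
    log_w_max = log_w.max()                      # stabilize the log-sum-exp
    w = np.exp(log_w - log_w_max)
    ess = w.sum() ** 2 / (w ** 2).sum()          # effective sample size
    return log_w_max + np.log(w.mean()), ess
```

Because the t proposal has heavier tails than the posterior it approximates, the weights stay bounded in practice, which is what keeps the effective sample size large in the calculations reported above.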
Figure 5 depicts posterior mean fits to the budding index data under each of the competing models. We estimated the posterior means using the MCMC output that was used to determine the importance distributions. Each MCMC analysis followed the same procedure, described in Section 4.1, used for the primary analyses. Note that the fits achieved by all model variants that set $\mu_0 = 0$ are visually indistinguishable and markedly inferior to any variant that allows $\mu_0 > 0$. The last two rows of Table 2 provide estimates of the root mean squared error (RMSE) of the fits to the budding index data achieved by each model's posterior mean curve. These estimates reinforce what is evident from the marginal likelihood and graphical analyses, namely, that models that do not allow for a nonzero location in the distribution of initial cell position are markedly inferior to those that do, and that accounting for a mother/daughter offset is particularly important, at least in the case where the cell population was arrested using centrifugal elutriation. Finally, these results demonstrate that the extremely good fits depicted in Figure 4 are the result of a parsimoniously parametrized model and not due to over-fitting.

Table 2. Estimates of log Bayes factors (lBFs) for various nested model comparisons given the budding index data alone. The model indexed by an entry's column is the larger of the models and is represented in the numerator of the lBFs in that column; the model indexed by an entry's row is the smaller of the two. The last two rows of the table provide the average and standard deviation of the RMSE of the model's fitted values to the observed budding index data over a sample of 1000 draws from the posterior.

Fig. 5. Plot of observed budding index curve (black) and posterior mean fitted curves under each of the competing models for the budding index data. The full model is plotted in red; the competing models are obtained by constraining the parameter(s) indicated in the figure legend to be zero. Quantitative summaries of these fits can be found in Table 2.

5. Discussion. Synchrony/time-series experiments on populations of cells are essential for understanding the dynamic processes associated with the cell cycle. In this paper we have described the CLOCCS model, sampling models for fitting this model to both budding index and DNA content data, and a detailed model evaluation. We have demonstrated that accurate model fits can be obtained using budding index data, DNA content data or both. While previous models only account for one source of asynchrony, namely, variation in cell cycle length [Bar-Joseph et al. (2004), Chiorino et al. (2001), Larsson et al. (2008), Liou, Srienc and Fredrickson (1997)], the CLOCCS model adds two novel sources of asynchrony. These are variation in initial synchrony and variation due to asymmetric cell division. In Section 4.2 we showed that the CLOCCS model is very strongly preferred to all nested alternatives, including a branching process version of the model of Bar-Joseph et al. (2004). The more accurate description of population dynamics achieved by the CLOCCS model will allow more accurate deconvolution of dynamic measurements such as transcript abundance. Additionally, because the model maps time-series data onto a common cell cycle lifeline, different data types (e.g., mRNA levels, protein levels, protein localization, etc.) from multiple synchrony/time-series experiments can be aligned such that the dynamics of multiple events can be temporally compared. Furthermore, DNA content measurements are commonly used to measure cell cycle position in organisms from yeast to mammals.
Thus, the model permits the alignment and comparison of the dynamics of cell cycle events across species, potentially providing an accurate view of evolutionary changes in cell cycle progression and regulation. The model's parameter estimates are also interpretable in terms of biological quantities associated with the cell cycle, so their estimates are of independent interest. For example, the measure of initial synchrony, $\sigma_0$, can be used to tune synchrony protocols for optimal results. When using budding index data, $\lambda$ and $\beta$ allow researchers to map temporal events to pre- or post-G1 cell cycle phases. When DNA content data is used, this resolution is increased and events can be placed accurately into the G1, S or G2/M phases of the cell cycle. The CLOCCS model is unique, to our knowledge, in providing a closed-form expression for the likelihood function in a complex branching process. This expression is written by enumerating and then marginalizing over the distinct cohorts present in the population at a given time. The explicit accounting of cohorts allows for extensions of the model that introduce cohort-dependent effects, such as one-time events, and effects, such as the mother-daughter offset, that may diminish with generation. The approach we describe is very general and has the potential to provide a flexible and efficient alternative in a range of problems where population balance or branching process models are used to describe the short-term dynamics of a branching population. While CLOCCS is better than its nested alternatives, the model can be improved to better fit experimental data and to better reflect biological reality. First, our data suggest that a left-skewed distribution with finite support may be a more realistic choice for the initial position. Second, while our data do not contradict a linear accumulation of DNA during the S phase, others have suggested alternative parametrizations [Larsson et al. (2008), Niemistö et al. (2007)]. We are currently exploring a flexibly parametrized S phase function that will allow inference on its functional form and, by doing so, address a question of fundamental interest to the greater biological community. Third, we plan to generalize the model to allow for an unspecified correlation between mother and daughter cell velocities; this parameter is currently set to one. Finally, we assume that the delay due to asymmetric cell division ($\delta$) is constant over time. Evidence exists, however, that the magnitude of this effect may change as the experiment progresses. This issue can be addressed with a suitably parametrized cohort-specific delay term, although the duration of a typical time-course experiment may limit the power to detect this effect. The strength of the CLOCCS modeling framework lies in its flexibility. It is adaptable to new experimental measurements and, given its ability to use DNA content data, is already applicable to virtually all biological systems where synchronized populations are studied, most notably human cell-culture systems. Further integration of the model with deconvolution and alignment algorithms will provide researchers with a powerful new tool to aid in the study of dynamic processes during the cell division cycle. Software implementing the CLOCCS model can be found at http://www.cs.duke.edu/~amink/software/cloccs.
An Upstream G-Quadruplex DNA Structure Can Stimulate Gene Transcription

Four-stranded G-quadruplexes (G4s) are DNA secondary structures that can form in the human genome. G4 structures have been detected in gene promoters and are associated with transcriptionally active chromatin and the recruitment of transcription factors and chromatin remodelers. We adopted a controlled, synthetic biology approach to understand how G4s can influence transcription. We stably integrated G4-forming sequences into the promoter of a synthetic reporter gene and inserted these into the genome of human cells. The integrated G4 sequences were shown to fold into a G4 structure within a cellular genomic context. We demonstrate that G4 structure formation within a gene promoter stimulates transcription compared to the corresponding G4-negative control promoter in a way that is not dependent on primary sequence or inherent G-richness. Systematic variation in the stability of folded G4s showed that, in this system, transcriptional levels increased with higher stability of the G4 structure. By creating and manipulating a chromosomally integrated synthetic promoter, we have shown that G4 structure formation in a defined gene promoter can cause gene transcription to increase, which aligns with earlier observational correlations reported in the literature linking G4s to active transcription.

■ INTRODUCTION

Certain guanine-rich DNA sequences can fold into intramolecular fold-back four-stranded secondary structures called G-quadruplexes (G4s). 1 G4s comprise stacked, Hoogsteen hydrogen-bonding G-tetrads that can be stabilized by a centrally coordinated cation (Figure 1a), with connecting loops of varying lengths and sequences that can be arranged in different orientations. Intramolecular G4s can be thermally stable under conditions of physiological salt, pH, and temperature. Computational prediction 2,3 and a G4-sensitive DNA sequencing approach 4 have suggested the potential for folded G4 structures to form at hundreds of thousands of sites in the human genome. Methods developed to detect G4 structures in cellular chromatin have observed only thousands of G4 structures in human cells, 5−9 suggesting that chromatin context suppresses the folding of most G4 structures. Computational prediction, sequencing experiments, and chromatin mapping approaches all show the enrichment of G4s immediately before the transcription start site (TSS) of many protein-coding genes in gene regulatory regions called promoters. 3,10,11 Multiple lines of evidence highlight promoter G4s as having biological importance for transcription. In vitro, G4s forming at a transcribed gene can stall RNA polymerase transit. 12,13−18 Transcription of VEGF and KRAS oncogenes is also stimulated by promoter G4 folding arising from oxidative damage to DNA bases in promoters. 19,20−26 Furthermore, the differentiation of human stem cells into defined lineages revealed that the dynamic alteration of where folded G4 structures were retained, gained, or lost had a positive correlation with transcriptional activity, active histone marks, and open chromatin. 24−30 Collectively, such studies lend support for the formation of promoter G4s as positive regulators of transcription.
There are limitations in the existing data that link G4s to transcription. Interpretations rely largely on correlations. Many studies did not include the actual detection of folded G4s and rely on inference from the G4 sequence motifs. Plasmid-based studies do not position the G4 structure within the native locus in a natural chromatin context. Studies from our own laboratory and other laboratories have shown that G4-targeting small molecules can lower transcription at genes with G4 motifs in promoters. A possible interpretation of such studies is that stabilization of G4s inhibits transcription, and therefore, G4s are negative regulators of transcription. There are several reasons to be cautious about interpreting such experiments. Small molecules can bind to thousands of folded G4 targets in the cellular genome with many potential downstream effects. Small molecules can compete off G4-bound proteins (e.g., transcription factors), 28 and so a small molecule-G4 complex is not just a more stable G4. Last, many small molecules that target G4s, for example, pyridostatin, cause proximal DNA strand cleavage and a downstream cellular DNA damage response, 22 which confounds a simple interpretation of transcriptional changes in a single gene. Thus, small molecule experiments cause complex effects that can preclude a clear and reliable interpretation. There is a need for studies that can more clearly and directly resolve the relationship between a promoter G4 and transcription. Herein, we describe the design and assembly of a synthetic promoter-reporter gene regulatory system that was exploited to study the consequential effects of a promoter G4 on transcription. The observations show that the formation of G4 structures in this context positively regulates gene transcription, with transcription levels being related to the intrinsic thermal stability of the G4 structure.

■ RESULTS AND DISCUSSION

Experimental Design. We developed a synthetic gene regulatory construct to directly evaluate the relationship between G4s in the promoter and transcription (Figure 1b,c). Using the Flp-In-293 human cell line, we placed G4 sequences in front of a minimal core promoter (minP) construct that drives the expression of a green fluorescent protein (eGFP) reporter, and inserted these constructs into human chromatin (see Supplementary Methods). The G4 sequences were always positioned on the template strand. Cell lines were generated by integrating each construct into the same genomic position by site-directed recombination (Supplementary Methods) and confirmed to have a single copy of the correct sequence by Sanger DNA sequencing (Supplementary Methods, Supporting Information). G4 folding of test sequences was detected and quantified using a folded-G4 structure-specific antibody, BG4, 25 to affinity-capture folded G4 structures from isolated chromatin, 5 followed by quantitative polymerase chain reaction (G4 ChIP-qPCR) using primers adjacent to the G4 site. Fluorescence microscopy or flow cytometry of live cells (see Supplementary Methods), detecting eGFP fluorescence, was used to estimate expression levels. Accurate quantification of transcription was achieved from isolated RNA by quantitative reverse transcription qPCR (RT-qPCR) with gene-specific primers for eGFP, using actin expression as a reference.
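The paper does not spell out its RT-qPCR quantification arithmetic; the following sketch assumes the common Livak 2^(-ddCt) relative-quantification approach, with eGFP normalized to the actin reference, and all Ct values below are made up for illustration.

```python
# Illustrative relative quantification for RT-qPCR data of the kind
# described above; this assumes the standard Livak 2^(-ddCt) method.
def fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of target (eGFP) in a test line vs a control line,
    each normalized to a reference gene (actin). All inputs are Ct values."""
    d_ct_test = ct_target_test - ct_ref_test    # normalize the test sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl    # normalize the control sample
    return 2.0 ** -(d_ct_test - d_ct_ctrl)

# Hypothetical Ct values: G4-containing promoter line vs minP-only line.
print(fold_change(ct_target_test=22.0, ct_ref_test=17.0,
                  ct_target_ctrl=27.3, ct_ref_ctrl=17.0))  # ~39-fold
```

Because Ct scales are logarithmic (base 2 per amplification cycle), a difference of a few cycles in the normalized values corresponds to the large fold-changes reported in the next section.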
Selection of Natural Promoter G4s for Insertion. Experimental mapping of folded G4s in human cell lines 31 shows that promoter G4s are enriched in front of the transcription start sites of actively transcribing genes (see Supplementary Methods and Figure S1). Glutathione peroxidase 4 (GPX4) and chloride intracellular channel 4 (CLIC4) G4s were selected as representative single G4s present in the proximal promoters of genes undergoing high transcription (see Supplementary Methods and Figure S2). Folded G4 formation was disrupted either when tetrad-forming G bases were mutated or when folded in 100 mM LiCl. UV thermal melting spectroscopy 37,38 indicated that GPX4 and CLIC4 G4s have high thermal stability, with melting temperatures (Tm) of 73.5 and 81.0 °C, respectively (Figure S3b,d).

Promoter G4s Stimulate Transcription. We assessed whether the insertion of GPX4 or CLIC4 G4 sequences, or their mutated variants, causes folded G4 structures in our cell lines, as determined by G4 ChIP-qPCR. The RBBP4 G4, situated elsewhere in the genome, was used as a positive reference for G4 folding, and background correction was performed by subtracting the signal detected from cells carrying only the minimal promoter without the G4 insertion. Cells with the GPX4 or CLIC4 G4 sequence exhibited a detectable, folded G4 structure, which was lost upon mutation of critical G bases (Figure 1d). We have therefore created detectable folded G4 structures at specific sites by inserting G4 sequences into the cellular human genome. We then investigated whether G4 formation was linked to active transcription. For this, we quantified eGFP expression driven from GPX4 or CLIC4 G4-containing promoters compared to mutated promoters unable to form a G4. Fluorescence microscopy of live cells revealed that cells with either a GPX4 or CLIC4 G4 promoter had easily detectable eGFP fluorescence compared to those lacking a G4 or carrying a mutated G4 (Figure 1e). With expression analysis by reverse transcription followed by quantitative polymerase chain reaction of cDNA (RT-qPCR), cells with a GPX4 or CLIC4 G4 showed more than a 40-fold increase in eGFP mRNA compared to cells with only the minimal promoter reporter lacking a G4 (Figure 1f). A similar G4-driven increase (70-fold) in expression was observed when we measured the mean fluorescence intensity (MFI) for eGFP by flow cytometry (Figure S4). Therefore, in our synthetic regulatory system, the addition of a folded G4 structure in the proximal promoter stimulates transcription. When we generated cells carrying a CLIC4 G4 but lacking the minimal core promoter, eGFP reporter expression was ∼90% lower than when both components were present (Figure S5). The combination of a minimal core promoter and G4 is thus required for maximal transcriptional output. To rule out whether the activation of gene expression in the G4-positive cell lines is due to intrinsic duplex G-richness, we shuffled the CLIC4 G4 sequence to break up the G4 motif but preserve the overall nucleotide composition. We confirmed that the oligonucleotide for the shuffled G4 did not fold into a G4 structure, as judged by CD spectroscopy (Figure S6a,b). Insertion of the shuffled sequence into the proximal promoter in cells led to a loss of eGFP reporter expression, by over 97%, to basal levels compared to CLIC4 G4-containing cells (Figure S6c−e). These findings demonstrate that it is the folded G4 structure rather than intrinsic G-richness that stimulates transcription.
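The G4 ChIP-qPCR normalization used above combines background subtraction against the minP-only line with scaling to the RBBP4 positive control. The paper does not give the exact arithmetic; the sketch below shows one common percent-of-input scheme under that assumption, and the function names and Ct values are illustrative.

```python
# Sketch of a G4 ChIP-qPCR normalization of the kind described above:
# percent-of-input recovery at the inserted G4 site, background-corrected
# using the minP-only line and expressed relative to the RBBP4 control.
import math

def percent_input(ct_input, ct_ip, input_fraction=0.01):
    """Recovery of a locus in the IP relative to (diluted) input chromatin."""
    adjusted_input = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input - ct_ip)

def normalized_g4_signal(pi_site, pi_minp_only, pi_rbbp4):
    """Background-subtracted signal at the insert, relative to RBBP4."""
    return (pi_site - pi_minp_only) / pi_rbbp4

site = percent_input(ct_input=24.0, ct_ip=29.0)        # test G4 insert
background = percent_input(ct_input=24.1, ct_ip=31.5)  # minP-only line
rbbp4 = percent_input(ct_input=23.5, ct_ip=27.5)       # positive control
print(normalized_g4_signal(site, background, rbbp4))
```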
G4s Stimulate Transcription Independent of the SP1 Consensus Motif. Many G4 sequence motifs overlap with the consensus duplex DNA binding site for the transcription factor SP1, which makes it challenging to distinguish the potential effects of each motif. 39 We noted that the CLIC4 G4 and GPX4 G4 sequences, described earlier, also contain a SP1 consensus target sequence, which may confound our explanation. We therefore designed a second minimal core promoter system in which a fixed, separate SP1 consensus sequence was incorporated, with an unnatural G4 sequence in front, to rule out effects due to simultaneously changing both the G4 and the SP1 binding site (Figure 2a). We designed a simple G4, (G3T2)3GGG (termed G3T2), that excludes motifs for canonical SP1 duplex binding. Affinity pull-down of the SP1 transcription factor from Flp-In-293 cell lysates using G3T2 G4 and duplex G3T2 (dsG3T2) oligonucleotides, analyzed by western blotting, showed that SP1 binds strongly to the folded, but not the duplex, G4 sequence. As a control, SP1 was shown to bind to G4 structures (G4Myc) and the SP1 duplex consensus (Figure S7). The G3T2 oligonucleotide folds into a stable G4 structure as assessed by CD spectroscopy and UV thermal melting spectroscopy (Tm = 72.5 °C) (Figure 2a−c). G4 folding was also abolished in 100 mM LiCl or on mutation of the central tetrad Gs to T ((GTGT2)3GTG). We next measured G4 folding and eGFP expression in cell lines in which either the G3T2 G4 or the mutated G4 was inserted into the promoter construct (Figure 2a). G4 ChIP-qPCR showed that cells carrying a G3T2 G4 exhibit a stronger signal for folded G4 compared to cells carrying a mutated G4 motif (Figure 2d). This increase in detectable folded G4 was also accompanied by a ∼3-fold increase in eGFP RNA/protein expression (Figure 2e,f and Figure S4). These findings confirm that a folded G4 can stimulate transcription independent of the canonical SP1 binding site.

To probe the role of G4 stability, we also designed unnatural G4 variants whose loop lengths of one, two, or four thymines (G3T1, G3T2, and G3T4) confer progressively lower thermal stability (Figure 3a,b). We constructed cell lines with each unnatural G4 sequence inserted in front of the minimal promoter of our assay system, with a constant SP1 consensus binding sequence. We then quantified G4 folding by G4 ChIP-qPCR. The level of folded G4 detected at this site in cells varied systematically with loop length and thermal stability (Figure 3c). The greatest G4 ChIP-qPCR signal was seen in cells with G3T1, the signal was reduced in cells with G3T2, and no G4 folding was discernible for cells with G3T4. In the context of our synthetic cellular system, the extent of folded G4 in cells is therefore directly related to the thermal stability of the G4.

Increased G4 Stability Leads to Increased Transcription. We then assessed transcriptional output for the G3T1, G3T2, and G3T4 cellular insertions by RT-qPCR of eGFP mRNA. The level of eGFP reporter transcription varied systematically with loop length and thermal stability, with G3T1 cells exhibiting the highest transcription, followed by G3T2 and G3T4 cells giving ∼1.6- and 9.7-fold lower transcription, respectively (Figure 3d,e). Similar results were obtained using flow cytometry measurement of eGFP fluorescence (Figure S4; ∼1.5- and ∼12.7-fold reduction in MFI for the 2- and 4-loop G4s compared to the 1-loop G4). Increased transcription therefore correlates with increased thermal stability of the G4 and the level of folded G4 formation in cells.
■ CONCLUSIONS

We have presented a systematic study of G4 folding in the context of a synthetic gene promoter that drives a reporter gene in the chromatin of human cells. The insertion of natural and unnatural G4 sequence motifs led to a measurable folded G4 structure at the insertion site, with a concomitant increase in transcription of the proximal gene. Systematic variation in the thermal stability of folded G4s, by controlling loop lengths, showed that higher stability gives rise to an increased level of folded G4 signal in cells and a higher level of transcription of the reporter gene. The main outcomes support the finding that the presence of folded G4 structures in front of transcribed genes can have a positive effect on transcription, probably through the recruitment of proteins such as transcription factors and chromatin remodelers.

Supporting Information: detailed description of experimental procedures, materials, and additional figures as mentioned in the text (PDF).

Figure 1. GPX4 and CLIC4 promoter sequences fold into G4 structures in cells and promote eGFP transcription. (a) G-tetrad stabilized by Hoogsteen base pairing and a monovalent cation. (b) Architecture of the G4 promoter eGFP reporter system. A G4 is placed at the proximal promoter region on the template strand. A minP sequence containing a TATA box and Inr acts as a core promoter to mediate transcription initiation and is placed between the G4 and eGFP protein coding sequences. The sequences of the TATA box and Inr are in bold. Panel (b) was created with BioRender.com. (c) Sequences of GPX4 G4/mutated GPX4 G4 and CLIC4 G4/mutated CLIC4 G4. Point mutations for mutated G4s are indicated by lowercase letters. (d) G4-ChIP-qPCR analysis of G4 formation at the CLIC4 and GPX4 G4 promoters, normalized to a positive control G4 in the host genome (RBBP4) and after background signal subtraction. (e) Representative fluorescence microscopy images of Flp-In expression cell lines showing that promoter GPX4 and CLIC4 G4s drive eGFP protein expression. (f) Quantification of eGFP RNA expression by RT-qPCR. G4-negative promoter: the Flp-In-293 expression cell line in which G4 is absent from the eGFP promoter. Flp-In-293: the parental Flp-In-293 cell line without the integration of the eGFP reporter (mean ± s.d.; two-tailed unpaired t test).
Figure 2. A G4 structure lacking SP1 consensus sequences still promotes enhanced eGFP expression. (a) Architecture of the eGFP reporter system. A synthetic (G3T2)3GGG G4 or a mutated version (termed G3T2 or mutG3T2, respectively) is placed in the proximal promoter. A SP1 binding site is also placed between the proximal and core promoters. (b, c) Biophysical characterization of the G3T2 G4 structure. Left, circular dichroism spectroscopy for G3T2 or mutG3T2 G4 oligonucleotides showing spectra characteristic of a G4 structure for G3T2 but not mutG3T2 (buffer conditions: 10 mM Tris−HCl (pH 7.4) with 100 mM KCl or LiCl; oligonucleotide concentration: 10.0 μM). Right, UV-melting curve for the G3T2 G4 oligonucleotide. Tm is indicated on the graph (buffer conditions: 10 mM Tris−HCl (pH 7.4) with 100 mM KCl; oligonucleotide concentration: 5.0 μM). (d) G4-ChIP-qPCR quantification of G4 formation for G3T2, as described in Figure 1, compared to mutG3T2 cells. (e) Representative fluorescence microscopy images showing that the G3T2 G4 has elevated eGFP protein expression compared to mutG3T2, G4-negative, and host cell line expression. (f) Quantification of the increase in eGFP transcription for G3T2 cells compared to mutG3T2 cells by RT-qPCR, as described in Figure 1 (mean ± s.d.; two-tailed unpaired t test).

Figure 3. Increased transcriptional activity correlates with increased G4 structural stability in promoters. (a) Sequences of the synthetic G4s G3T1, G3T2, and G3T4 and the corresponding decrease in thermal stability as assessed by oligonucleotide UV melting. (b) Circular dichroism spectra confirming G4 folding for G3T1 and G3T4 in 100 mM KCl or LiCl, as per Figure 2 (data for G3T2 is in Figure 2b). (c) Confirmation that increasing G4 stability leads to increased G4 formation in cells, as assessed by G4-ChIP-qPCR. (d) Representative fluorescence microscopy images of the synthetic G4 cell lines showing that eGFP protein expression correlates with G4 stability (data for G3T2 is from Figure 2e). (e) Quantification of the eGFP expression for synthetic G4 cell lines by RT-qPCR showing that expression levels correlate with G4 stability (mean ± s.d.; two-tailed unpaired t test).
The penny’s dropped: Renegotiating the contemporary coin deposit

This article examines the status of coins as contemporary deposits in the British Isles. With a focus on both historical and contemporary sites, from the Neolithic long barrow of Wayland’s Smithy, Oxfordshire, to the plethora of wishing-wells and coin-trees distributed across the British Isles, it demonstrates the popularity of coins as ritual deposits. The author considers how they are perceived and treated by site custodians, and concludes with a case study of an archaeological excavation, the 2013 Ardmaddy Wishing-Tree Project, which recovered a large amount of contemporary coin deposits. This article does not aim to locate itself within the debates of site custodianship and accessibility, nor does it propose to address the broader dilemmas of a site’s ritual continuity or resurgence. Instead, its aim is to encourage archaeologists to consider the contemporary deposit as an integral part of the ritual narrative of a site, rather than as disposable ‘ritual litter’.

Introduction

Ritual deposition is not an activity that many people in the Western world would consider themselves frequent, or even infrequent, participants of. However, many of us are. For many of us have peered into the coin-gorged depths of a wishing-well or fountain and felt the inclination to fish in our pockets or purses for some loose change and drop it into the water amidst the growing accumulation. The motivations behind such behaviour vary (to make a wish, for luck, pandering to a child), and whether or not there is any actual belief behind such actions will be dependent upon the individual participant (see Houlbrook, 2014a). However, a (necessarily brief) consideration of various definitions of ‘ritual’ reveals that such an act can indeed be classed as ritual deposition. Sociologist Robert Bocock (1974: 36) defines ‘ritual’ as ‘bodily action in relation to symbols’ (emphases in original); anthropologist Barbara Myerhoff (1997: 199) as ‘an act or actions intentionally conducted by a group of people employing one or more symbols in a repetitive, formal, precise, highly stylized fashion’; while Susanna Rostas (1998: 92) identifies ‘corporeal performativity’ as a necessary aspect. Certain features of these definitions are clearly evident in the action outlined above: bodily action, symbolism (of the coin and the place of deposition), intentionality, repetition, performativity. The dropping of a coin into a wishing-well can therefore constitute ritual deposition; and the coins, consequently, ritual deposits, especially when adhering to archaeologist Ralph Merrifield’s definition. In his seminal work on The Archaeology of Ritual and Magic (1987), Merrifield defines the ritual deposit as an object ‘deliberately deposited for no obviously practical purpose, but rather to the detriment of the depositor, who relinquishes something that is often at least serviceable and perhaps valuable for no apparent reason’ (p. 22). The classification of a coin (something that is often at least serviceable and perhaps valuable, consciously dropped into a fountain, deliberately deposited for no obviously practical purpose) as a ritual deposit is therefore relatively untenuous. Taking this viewpoint, this article aims to consider how such coins, as contemporary ritual deposits, are perceived and treated. This is not the first piece of research to make such a consideration.
In 1997, Christine Finn examined how Chaco Canyon, a prehistoric complex in the Southwest US, had become a focus for New Age ceremony and deposition. Considering the contemporary objects deposited there, which ranged from crystals and shells to wooden imitations of Native American ritual objects, Finn (1997: 169) questioned whether these deposits should be considered ‘"junk" or archaeological objects of meaning and value’. LoPiccolo, curator of the site, viewed them as the latter, claiming that these modern-day deposits ‘were of value as signifiers of continued use of the Chaco Canyon site’. Believing it to be his responsibility to collect these objects for the future archaeological record, rather than simply disposing of them, LoPiccolo catalogued them, entering their details into a database. Finn clearly approves of LoPiccolo’s actions, and proposes that others should follow his lead. ‘What should be classified as "junk"’, she writes, ‘and how we deal with it at a time of broader acceptance of "other" practices are issues that archaeologists and those involved in heritage management should, I suggest, be considering’ (p. 178). Nearly 10 years later, Jenny Blain and Robert Wallis (2006a), examining Neo-Pagan uses of prehistoric sites in Britain, also advocate greater academic attention given to contemporary ritual deposits: ‘Whatever form this material culture takes, it is clearly worthy of serious study, not only for issues of site conservation, but also in terms of the construction and performance of identity’ (p. 103). However, despite the recognition that more attention needs to be given to contemporary ritual deposits, they still do not appear to have established themselves as a significant feature on the archaeological agenda. Indeed, in many cases whereby modern-day objects are deposited at sites of historical or religious importance, these objects are viewed as intrusive or damaging, and are subsequently removed and disposed of. This article does not aim to locate itself within the debates of site custodianship and accessibility, nor does it propose to address the broader dilemmas of a site’s ritual continuity or resurgence. Such issues are far too complex and convoluted for its scope. Instead, its aim is more specific: to examine how contemporary ritual deposits are perceived and treated by site custodians nearly two decades after Finn’s advocacy. Adopting a necessarily narrow focus, this article will consider the treatment of the coin as a ritual deposit at both historical and contemporary sites in the British Isles, and will conclude with a case study of an archaeological excavation, the 2013 Ardmaddy Wishing-Tree Project, which recovered a large amount of contemporary coin deposits.

The history of the coin as ritual deposit

St. Mary’s Well at Culloden was visited on the first Sunday of May; about a dozen years ago or so it was calculated that about two thousand persons made the pilgrimage. Its waters were held to have the power of granting under certain conditions the wish of the devotee … A visitor some years ago wrote regarding the ritual: ‘… The procedure to be gone through is this: A draught of the water is taken, the drinker at the same time registering a wish or desire for success in some form or another throughout the coming year. To facilitate the wish a coin of small value is usually dropped into the water … How small a price to pay for so great a boon …’ (Henderson, 1911: 322-323)

Coins are one of history’s most popular ritual deposits.
They have been a highly common votive object in Britain since the Roman period, with caches containing hundreds, some even thousands, of coins discovered at numerous sites throughout Roman Britain (see Dowden, 2000: 176; Leins, 2007, 2011; Lewis, 1966: 47; Priest et al., 2003; Score, 2006, 2011; Williams, 2003; Woodward, 1992: 66). Some of the most notable examples include the caches at Lydney, Gloucestershire; Hallaton, southeast Leicestershire; and the sacred spring at Bath. It was in 1911, writing of the late 1890s, that Henderson recorded this custom in relation to St Mary’s Well in the Highlands of Scotland. However, the description of dropping a coin into a well in exchange for a wish or good luck is not dissimilar to scenes we witness today. The British Isles are teeming with wishing-wells and fountains, collection boxes and coin-trees (Houlbrook, 2014a), all of which evince the coin’s prevalence as a ritual deposit in contemporary Britain. In the last year alone, the author has come across a plethora of modern-day coin-deposit accumulations, from the 200+ coin-trees catalogued as part of her doctoral thesis (Houlbrook, 2014b; see Figure 1), to the numerous bodies of water, incidentally encountered, containing deposited coins (Figure 2). Examples include fountains at Lyme Park, Cheshire; Tatton Park, Cheshire; the Trafford Centre, Manchester; St Anne’s Square, Manchester; Queen’s Park, Lancashire; the Mall at Cribbs Causeway, Bristol; and the Wales Millennium Centre, Cardiff. As outlined above, these coins, dropped into fountains or hammered into coin-trees, constitute ritual deposits by definition. Despite this, however, contemporarily deposited coins are rarely given the same status as historically deposited objects, and tend to be classified under the pejorative category of ‘ritual litter’.

The coin as ‘ritual litter’

Robert Wallis and Jenny Blain (2003: 310) employ the terms ‘ritual litter’ and ‘sacred litter’ (Blain and Wallis, 2006a: 100) to encompass objects deposited by Neo-Pagans at historical sites and structures, objects which include ‘flowers and other offerings, candlewax and tea-light holders … the insertion of crystals, coins and other materials into cracks’ (Wallis and Blain, 2003: 310). Phillip Lucas (2007: 49-50), in his work on contemporary ‘nature spirituality’ at megalithic sites in Western Europe, similarly lists coins amidst the offerings he terms ‘ritual litter that can become piles of trash over time’. Kathryn Rountree (2006: 100), however, notes the derogatory connotations of the term ‘ritual litter’, opining that those who tend to apply it to contemporary deposits are ‘those inclined to disapprove of their deposition’. It is an unambiguously belittling term, ‘litter’ being defined in the Oxford English Dictionary (2014) as ‘rubbish’ and ‘a disorderly accumulation of things lying about’. As Rountree (2006: 100) also points out, a candle or a written prayer may be deposited in a church without being designated ‘ritual litter’; a contrast she attributes to the sanctioned status of churches as ‘sacred places’ and to the sanctioned status of officially allocated receptacles for such deposits, such as prayer boxes, collection boxes, and votive candle stands. It is not, therefore, only the contemporaneity of the modern-day coin as deposit which appears to belittle it, but also the unsanctioned nature of the deposit.
Removing deposits

It is not, however, only how contemporary coin-deposits are termed which evinces the dismissive attitude expressed towards them, but also how they are treated. As Wallis and Blain (2003: 309) note: 'So-called "ritual litter" is an increasing problem at many sacred sites.' This 'problem' stems from the perceived negative effects of deposits on the physical and aesthetic nature of the sites. Certain offerings, such as flowers and liquid libations, are viewed as less 'intrusive' because they are biodegradable or transient. Diuturnal material deposits, however, such as coins, are more controversial because they can often prove detrimental to the physical preservation of the site (Blain and Wallis, 2006a: 103). In many cases, therefore, coin deposits are removed from sites with religious or historical significance, often due to the physical damage they can cause. Wayland's Smithy, Oxfordshire, for example, a Neolithic chambered long barrow, has been subject to coin deposition for the last 50 years at least (Grinsell, 1979: 68). Coins are lodged into the rocks of this monument by modern visitors, a custom which is believed to stem from a much earlier tradition, recounted in a letter by the wife of a local clergyman in 1738 and reproduced by Ellis Davidson (1958: 147):

At this place lived formerly an invisible Smith, and if a traveller's Horse had lost a Shoe upon the road, he had no more to do than to bring the Horse to this place with a piece of money, and leaving both there for some little time, he might come again and find the money gone, but the Horse new shod.

Modern-day visitors may similarly deposit coins at Wayland's Smithy only to 'come again and find the money gone'; however, they were not taken by an invisible Smith, but by the National Trust rangers who are tasked with the removal of deposits. Andy Foley, the on-site ranger, regularly checks and removes coins, and informs me that English Heritage have recently altered the site's interpretation panel, deliberately excluding information about the traditional custom of coin deposition (Foley, 2013).

Discouraging deposition

In other cases, coins are not only removed from a site but measures are implemented to actively discourage their deposition. For example, at the site of St Colmcille's birthplace in Gartan, Co. Donegal, accompanying a modern cross is a flagstone, originally part of a prehistoric burial mound. St Colmcille is believed to have been born on this particular flagstone in the 6th century, and it is said to cure loneliness (Ó Muirghease, 1963: 153). Since the early 2000s, pilgrims who visited the site would deposit a coin in the cupmarks of the flagstone. However, the coins were perceived as negative additions to the site; as Martin Egan of the Colmcille Heritage Trust, based in nearby Gartan, explained: 'when it rained they discoloured the stone as well as making it unsightly. The Gartan Development Association and the Colmcille Heritage Trust decided to discourage this practice and have cleaned up the stone' (Egan, 2014). The coins were therefore removed and a sign was erected, requesting: 'PLEASE DO NOT LEAVE COINS ON STONE AS IT IS DAMAGING THE STONE' (Figure 3). A similar attempt to discourage coin deposition is ongoing on the island of Gougane Barra, Co. Cork. Gougane Barra, a site dedicated to St Finbarr, is a popular pilgrimage destination, and has been for at least the past 200 years.
In the 18th and 19th centuries the island's remote location, in Gougane Lake, made it a prominent site for rituals which combined Christianity with pagan practices (McCarthy, 2006: 21). On 23 June, several hundred pilgrims flocked annually to the island for the Eve of St John's feast, to bathe in the island's holy well in the hope of cures to certain ailments. This pilgrimage is described by Thomas Crofton Croker (1968 [1824]: 277ff), who partook in the celebrations there in 1813. Croker describes the popular custom of attaching votive rags and bandages to a wooden pole standing in the centre of the Pilgrim's Terrace, which was apparently all that remained of a large cross. These rags and bandages were 'intended as acknowledgments of their cure' (p. 276). These 'pagan rituals' were banned in 1818 by the Catholic Bishop of Cork, John Murphy (McCarthy, 2006: 21). However, this does not appear to have deterred pilgrims from attaching their offerings to the wooden post in the Pilgrim's Terrace, and then to the replacement wooden cross which was commissioned by Fr. Patrick Hurley, the parish priest, in the early 1900s (McCarthy, 2011a). By this time, the rags and bandages seem to have been replaced by coins, which were embedded into the cross (McCarthy, 2011b). It was not, however, only the cross which was subject to this custom. The trees of Gougane Barra, like the original wooden pole from the 19th century, were also affixed with rags, and Kieran McCarthy - a local resident and historian - remembers this custom of rag-trees surviving until the 1990s. From the early 20th century, however, the trees also began to be embedded with coins, the custom having spread from the wooden cross (McCarthy, 2011b). Local resident and custodian of Gougane Barra, Finbarr Lucey, describes a 'magnificent ash tree' in the main cells enclosure, which was embedded with so many coins that it eventually died. It stood beside the cross already described as being similarly encrusted with coins, but it fell in a storm in 1973 (Lucey, 2011). Both the remains of the coin-tree and the cross have since been removed, and in the early 2000s the present cross was erected in the Pilgrim's Terrace, but no coins appear to have been inserted into it. The custom of coin insertion has been discouraged by the custodians of the island who, considering the fate of the original coin-tree, have been attempting to protect other trees from similar copper poisoning (Lucey, 2012). McCarthy informs me that this decision to discourage the custom was made by the local church committee, who 'wished to clean up the site's appearance' (McCarthy, 2011b); they subsequently attached a sign to the current primary coin-tree, stating: 'I AM A TREE; PLEASE DO NOT PUT COINS INTO ME'. The site managers are similarly hoping to discourage the deposition of coins into the holy well of St Finbarr, situated at the causeway to the island. Above the holy well is a sign requesting: 'NO MONEY IN HOLY WELL PLEASE. BOX IN PILLAR FOR SAME' (Figure 4). Visitors are thus referred to a donation box in a stone pillar a few feet away, and are encouraged to deposit their coins into that instead. A similar strategy has been implemented at the Roman Baths and Pump Room in Bath, Somerset. According to Verity Anthony, Collections Assistant, visitors to the site have been depositing coins into the spring there since the 1970s.
However, she informs me that deposition in the spring is discouraged: 'In order to preserve the site, we request that people deposit coins in a designated bath, the circular bath, as this is a manageable space which can be monitored and coins removed from it with relative ease' (Anthony, 2013). She went on to say that they

… regularly remove coins from areas of the site where we don't actively encourage deposition, in order to dissuade people from following suit, but we do find on the whole that the use of a designated place to deposit coins works very well (and is easy to remove coins from).

The Glastonbury Thorn

Another historical site within the British Isles has been subject to similar levels of contemporary deposition: the Glastonbury Thorn. This is a hawthorn (Crataegus) growing atop Wearyall Hill, Somerset, which is believed to be the offspring of the original Holy Thorn. The original Thorn is said to have sprung from St Joseph of Arimathea's staff, which he thrust into the ground on his visit to Britain in the 1st century AD. Together with its offspring, this tree purportedly blossomed annually at Christmas in commemoration of Christ's nativity (Walsham, 2011: 492). According to Milner (1992: 141), it is England's 'most celebrated sacred tree'. There are currently several 'Holy Thorn' offshoots within the town. One, however, is most widely associated with the original because it is said to stand where St Joseph thrust his staff into the ground. This tree (known hereafter as the Glastonbury Thorn) was planted in 1951 by members of Glastonbury Town Council but was vandalised in 2010, with unknown vandals cutting down its branches. New shoots began to grow and tourists continued to visit it, but its popularity is believed to put this fragile tree at risk; I first became aware of the site following an article on BBC News (Jenkins, 2012), which describes how visitors threaten the vandalised tree's recovery by inserting coins into its bark. On my visit to the site, John Coles, former mayor of Glastonbury, accompanied me to Wearyall Hill where the current, vandalised Glastonbury Thorn stands, together with a young sapling, also said to be the offspring of the original Thorn. Both are protected within metal enclosures. Although there were no coins inserted into the Glastonbury Thorn on the day of my visit, there were numerous ribbons, some adorned with names or personal messages, affixed to the railings of the protective fence (Figure 5). Several of these messages refer to the 'solstice', indicating that their depositors were at the site during the summer and winter solstices (one at least in 2012, according to the message), which is a particularly popular time for Neo-Pagan pilgrimage to the site, according to John Coles. John Coles explains that the ribbons, when densely clustered, prevent sunlight from reaching the trees, and so he visits Wearyall Hill at least once a month in order to remove them. He also comes equipped with a knife to dislodge any coins he finds inserted, asserting that the copper will kill the trees. There have been other deposits which he has felt inclined to remove: pieces of paper with what he terms 'pagan or atheist obscenities' written on them, as well as a number of rather obscene items, such as condoms. He estimates that this custom of depositing objects at the Glastonbury Thorn began in the early 2000s. It is unclear who has been depositing the coins - and why - for no participants were present on the day of my visit.
However, John Coles views this as a pagan custom also and perceives it as a negative, destructive practice, hoping to prevent damage to the tree by removing coins whenever he sees them.

Following removal

This article is not intended to criticise or question the removal of deposited coins, especially where material deposits threaten the physical preservation of a site. However, it does aim to question what happens to these deposits following their removal. Once the coins are taken from a site, what is done with them? In most cases, they are put to philanthropic use. The coins removed from Wayland's Smithy are donated to local charities (Foley, 2013), whilst at Bath, they are donated to projects related to the conservation of the site, such as the Bath Archaeological Trust (Anthony, 2013). Likewise at St Colmcille's flagstone, Gartan, any removed coins are put towards the maintenance of the site (Egan, 2014). This is very much in keeping with the uses of coins removed from contemporary sites: the wishing-wells and fountains encountered in parks, shopping centres and tourist attractions. In fact, wishing-wells and fountains are sometimes installed with the express purpose of encouraging coin deposits. In 1961, Edward Block patented the 'Wishing-Well Type Coin Collector', which he describes as:

… a device representing a 'wishing-well,' the 'wishing-well' bearing a religious, or other inscription thereon which creates interest in the aspect of the simulated well and the inscription thereon whereby the observer will have a distinct mental inclination toward the doing, obtaining, attaining of something, or an expression of a wish, often one of a kindly or courteous nature, and to obtain the same the observer will drop a coin, or the like, into the simulated well, the observer knowing the coin will be used for charity, or other almsgiving or public relief of unfortunate or needy persons, the observer leaving the well with a feeling of benevolence. (p. 1)

This 'Wishing-Well Type Coin Collector' was intended to be installed in public places, and folklorist Alan Dundes, writing a year later, attests to the success of this type of structure: 'Despite the supposed present-day scientific mindedness, the fact that some charity fund raisers have constructed wishing wells in order to collect contributions attests to the extraordinary appeal of the custom' (Dundes, 1962: 28). This practice of utilising wishing-wells or fountains to collect contributions is widespread today. The Trafford Centre, a shopping centre in Greater Manchester, for example, established the 'intu Trafford Centre Fountain Fund' in 1999, donating all money deposited by shoppers in the centre's fountains to charities in the North West of England (Reid, 2013). Likewise, The Mall at Cribbs Causeway, Bristol, established the Fountain Charity Fund in March 2003. Using the coins deposited in The Mall's fountain - which they estimate can total around £10,000 a year - the Fountain Fund provides grants to local charitable organisations. This process is described on The Mall's website:

Once a month, on a Sunday evening when The Mall has closed, the Mall team set to work collecting the thousands of coins from the fountain. After draining the water from the fountain, they use heavy-duty wet vacuum cleaners to suck up the coins and transfer them into big black wheelie bins. The team then lay out the coins on large dust sheets to dry for up to a week before they are counted up into money bags ready for banking.
For the first five years of The Mall's life, local scout groups helped Mall staff empty the coins from the fountain, drying, bagging and then banking the proceeds to help pay for the new equipment they needed. It was hard work and they earned every penny!

Similarly, at the National Trust estate of Lyme Park, Cheshire, coins have been deposited into the park's three fountains for over 30 years, and as Jeanette Connolly, Business Support Co-ordinator of the park, explains: 'Any money received we treat as a donation and goes towards restoration of the House and Gardens' (Connolly, 2013).

Renegotiating the coin deposit

When coins are removed from their places of deposition by site custodians, they are often used for philanthropic purposes: as donations to local charities or contributions towards the preservation of the site, and this article is certainly not criticising such uses. However, it is notable that none of the organisations examined so far - from the National Trust at Wayland's Smithy to Bristol's Mall - catalogue the deposits before donating them. This does not reflect negatively on the organisations; they are not archaeologists or anthropologists, and many have neither the time nor the resources to record the large volumes of coins they process. It does, however, reflect negatively on us. Any researcher interested in material culture and ritual deposition should take responsibility for the cataloguing of any removed objects; not just for the benefit of future archaeologists and ethnographers, but in attaining a greater understanding of the social relations of the sites today. Understandably, some pragmatic decisions may need to be made regarding the use of resources in the recording of these deposits. Where not enough time is available for the cataloguing of all coins, quantities should still be recorded, and certain notable deposits could be given greater attention: those of high denomination; those which evince signs of percussion; and foreign currency. All deposited material, whether old or new, contributes to the ritual narrative of a site. Andy Foley, National Trust ranger at Wayland's Smithy, recognises this: the collection of the deposited coins 'forms a large part of the backbone of interpretation over what Wayland's actually is and what is myth/legend' (Foley, 2013). Contemporary deposits are integral to the contextualisation of a site, and it is our responsibility to ensure that whatever can be done to catalogue these deposits before they are donated or disposed of, is done. LoPiccolo, curator of the Chaco Canyon site (discussed above), ensured that the objects deposited at the site by modern-day visitors were not simply disposed of. Seeing it as his responsibility to collect them for the future archaeological record, LoPiccolo catalogued them, entering their details into a database and demonstrating that such endeavours are feasible (Finn, 1997). Another example of a project which has involved the gathering and cataloguing of contemporary ritual deposits is the 2013 Ardmaddy Excavation.

The Ardmaddy Excavation

The Ardmaddy Wishing-Tree Project involved a small-scale archaeological excavation at the site of the Ardmaddy 'wishing-tree' in Argyll, Scotland (Figure 6). This tree is a dead hawthorn (Crataegus monogyna) located half a mile south of Ardmaddy Castle, in a pass known as Bealach na Gaoithe: the 'pass of the winds'. It is uprooted and lies prone within a wooden enclosure, 1.2 m east of a rough track.
The enclosure was erected during the 1990s, following the tree's fall, and is designed to deter livestock rather than people; on the enclosure's eastern side, there is a stile providing access. Rodger et al.'s Heritage Trees of Scotland (2003) describes the wishing-tree as follows:

This lone, wind-blasted hawthorn (Crataegus monogyna) growing in the wilds of Argyll is one of the few known 'wishing trees' in Scotland. It is encrusted with coins that have been pressed into the thin bark by generations of superstitious travellers over the centuries, each coin representing a wish. Every available space on the main trunk bristles with money, even the smaller branches and exposed roots. This magical tree provides a living connection with the ancient folklore and customs of Scotland … (p. 25)

Despite claiming that coins have been deposited at this site 'by generations of superstitious travellers over the centuries' (p. 25, emphases added), Rodger et al. reference no sources, providing no insight into how they came to the conclusion that the site is 'centuries' old. Mairi MacDonald's 1983 hiker's guide, Walking in South Lorn, makes a similarly vague reference to the coin-tree's antiquity, stating that it is 'of considerable age' (p. 9). Likewise, MacDonald offers no further information on how she has determined its maturity and, despite both claims that the Ardmaddy coin-tree is of significant age, MacDonald is the earliest identified literary source which refers to the site. MacDonald's description of the coin-tree and the 'traditional' practice of coin-insertion suggests that this custom was well established at the time she was writing in the 1980s. Another source proves that it was earlier: an Ordnance Survey map from the 1970s pinpoints the coin-tree's location and labels it 'Wishing Tree', while the coin-tree's custodian, Charles Struthers of Ardmaddy Estate, believes that the custom dates to the 1920s/30s: 'When I was a boy here in the 50's [sic] the tree was prolific and could well have been 20-30 years old then' (Struthers, 2011). Regardless of how old the wishing-tree is, it will likely not last much longer. Since its fall in the 1990s, it has become heavily decayed and fragmented, a process no doubt accelerated by the number of coins hammered into its bark. It is estimated that within 10 years there will be little remaining of the tree. The coins, once embedded in its bark, will scatter; visitors to the site may take some and the rest will become buried over time. As little evidence for the wishing-tree lies beyond its material culture, it was agreed by Ardmaddy Estate and the Heritage Lottery Fund that a salvage operation was needed. However, the tree itself could not be conserved without removing it from its natural environment, which would prevent the continuation of the custom. It was recognised, therefore, that the practice should be conserved in a different way: by conducting an excavation of the site in order to uncover and catalogue as many coins as possible, subsequently using the material culture of the deposits to produce a ritual narrative of the site. In September 2013, the author and a team of archaeologists from the University of Manchester conducted a small-scale excavation at the Ardmaddy Wishing Tree. The methods employed were relatively simple: without interfering with the tree itself, six small test pits were opened and examined in 10 cm spits. Following five days of excavation, 703 small-finds were recovered; of these, 691 were coins.
Each find was assigned a small-finds number in the field using a paper record, which was later transferred to a digital Excel spreadsheet. All artefacts were stored appropriately according to their type and condition, and then transported to the University of Manchester, where they were cleaned, weighed, measured and photographed to provide a visual record. The details of the artefacts were later added to the spreadsheet: their denominations, years of issue and their conditions, which included noting whether they showed signs of damage through percussion and assigning them a corrosion level from 1 to 4. The largest denomination group was the decimal 1 penny (36%), closely followed by the decimal 2p (35%); there were significantly fewer high-denomination coins. Although the majority were British, there were 14 examples of foreign currency, suggesting that foreign tourists have been participating in the custom also; 33 per cent of the coins exhibited signs of damage through percussion - crooked forms or abraded edges - suggesting that at least one third of them had been hammered into the tree. The earliest datable coin was a 1 penny issued in 1914; 16 more coins were datable as pre-decimal (pre-1971), ranging from 1921 to 1970, whilst a further 7 were identified as pre-decimal based on their size and design. The vast majority of the coins, however, were decimal. The decade that produced the highest quantity of deposited coins was the 1990s. A large volume was also issued in the 2000s, demonstrating that the practice did not cease with the fall of the tree, whilst the presence of coins from the 2010s reveals that the custom is still active today. These data were then collated and presented in an appendix for the author's doctoral thesis (Houlbrook, 2014b), and excavation reports were produced and distributed to Archaeology Scotland and the West of Scotland Archaeology Service.
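Once a catalogue of this kind exists as a spreadsheet, very simple scripting can reproduce summary figures like those given above. The following is an illustrative sketch only, not the project's actual code; the file name and column names ('denomination', 'year_of_issue', 'percussion_damage') are hypothetical placeholders for whatever fields a given record sheet uses.

```python
# Illustrative sketch: summarising a small-finds spreadsheet of coin deposits.
# File and column names are hypothetical placeholders.
import pandas as pd

coins = pd.read_csv("ardmaddy_small_finds.csv")  # e.g. exported from the Excel record

# Share of each denomination, e.g. the 1p and 2p groups reported above
denomination_pct = (coins["denomination"]
                    .value_counts(normalize=True)
                    .mul(100).round(1))

# Deposition time-frame, approximated by decade of issue
by_decade = ((coins["year_of_issue"].dropna() // 10 * 10)
             .astype(int).value_counts().sort_index())

# Proportion showing percussion damage (crooked forms, abraded edges)
percussion_pct = coins["percussion_damage"].mean() * 100

print(denomination_pct, by_decade, f"{percussion_pct:.0f}% percussed", sep="\n")
```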
All coins were then returned to Ardmaddy Estate, where they are currently being stored in Ardmaddy Castle, and questions still surround their next destination: some are earmarked for local museums, whilst the majority may be donated to charity, displayed at the castle, or returned to their original place of deposition: the site of the wishing-tree. Whilst the next stage in the biographies of these coins is certainly significant, the purpose of this case study was to demonstrate the value of examining and recording contemporary coin deposits. The catalogue produced of the coin deposits from the Ardmaddy Wishing Tree, available through Archaeology Scotland and the West of Scotland Archaeology Service, will not only be valuable to any future researcher attempting to contextualise the site, but also to the present researcher, providing opportunity for a much deeper insight into the ritual narrative of this site. There are a variety of questions that these data can address: What coins did the depositors choose to deposit? Were there significant denomination ratios and, if so, what do they suggest? How did the depositors choose to deposit their coins? In what time-frame has deposition been occurring? Such data also have the capacity to address broader questions. For example, in what ways do contemporary deposits compare to deposits from earlier periods, and what might be inferred about the consistency and/or malleability of the practice over time? What can such contemporary data reveal about ritual practices and - perhaps more importantly - about archaeological assumptions concerning ritual practices, i.e. what can they tell us about the relationship (or lack thereof) between physical adherence to a ritual and notions of belief (see Houlbrook, 2014b)? Detailed answers to these questions are not the purpose of this article. Its purpose has been to demonstrate how little is currently being done with the wealth of information contemporary deposits can proffer; to illustrate how much can be gleaned about a site's ritual narrative by considering them; and to spark a renegotiation of how modern-day coin deposits are perceived and treated. It is hoped that this article has gone even a little way in addressing the comments of Finn (1997) and Blain and Wallis (2006a, 2006b), and in demonstrating that contemporary deposits can - and, in the author's opinion, should - be viewed as 'archaeological objects of meaning and value' (Finn, 1997: 169).
The INESC-ID Multi-Modal System for the ADReSS 2020 Challenge

This paper describes a multi-modal approach for the automatic detection of Alzheimer's disease proposed in the context of the INESC-ID Human Language Technology Laboratory participation in the ADReSS 2020 challenge. Our classification framework takes advantage of both acoustic and textual feature embeddings, which are extracted independently and later combined. Speech signals are encoded into acoustic features using DNN speaker embeddings extracted from pre-trained models. For textual input, contextual embedding vectors are first extracted using an English BERT model and then used either to directly compute sentence embeddings or to feed bidirectional LSTM-RNNs with attention. Finally, an SVM classifier with a linear kernel is used for the individual evaluation of the three systems. Our best system, based on the combination of linguistic and acoustic information, attained a classification accuracy of 81.25%. The results show the importance of linguistic features in the classification of Alzheimer's disease, which outperform the acoustic ones in terms of accuracy. Early-stage feature fusion did not provide additional improvements, confirming that the discriminant ability conveyed by speech is, in this case, smoothed out by the linguistic data.

Introduction

Alzheimer's Disease (AD), the most common cause of dementia [1], is a neurodegenerative disorder characterized by loss of neurons and synapses in the cerebral cortex. Its prevalence increases with age: a study on the U.S. census reported that 3% of people aged 65-74, 17% of people aged 75-84, and 32% of people aged 85 and older have AD [2]. As most countries are experiencing a general increase in average lifespan, a rapid escalation of AD cases worldwide is expected over the next thirty years [3]. Pharmacological treatments may temporarily improve the symptoms of the disease, but they cannot stop or reverse its progression. For these reasons, there is an increasing need for additional, noninvasive, and cost-effective tools allowing a preliminary identification of AD in its early clinical stages. Currently, AD is diagnosed through an analysis of patient clinical history and disability, neuropsychological tests, brain imaging and cerebrospinal fluid exams. Although the prominent symptoms of the disease are alterations of memory and of spatial-temporal orientation, language impairments are also an important factor confirmed by the current literature [4,5]. Some of the most well-known language impairments found in AD speech include naming [4], word-finding difficulties [6], repetitions [7], an overuse of indefinite and vague terms [8], and inappropriate use of pronouns [9]. Over the last years, there has been an increased interest from the research community in the automatic identification of AD through the analysis of speech and language abilities. Some studies have focused on syntactic or semantic features [10,11], some targeted plain acoustic approaches [12,13], while other works have investigated a combination of temporal speech parameters and lexical measures [14,15]. Most of these approaches use handcrafted features and traditional classification algorithms.

(This work has been partially supported by national funds through Fundação para a Ciência e a Tecnologia (FCT) with reference UIDB/50021/2020 and by European Union funds through the Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 766287.)
Very recent works have investigated the use of automatically learned representations from deep neural networks [16][17][18][19]. Regardless of the approach used, the studies existing in the literature are difficult to analyze and compare due to the different datasets used. In this scenario, the Alzheimer's Dementia Recognition through Spontaneous Speech (ADReSS) challenge has been proposed, with the aim of providing researchers with a common, statistically balanced and acoustically enhanced dataset on which to test their approaches [20]. In this work, we present the multi-modal system proposed by the Human Language Technology Laboratory of INESC-ID for the ADReSS 2020 challenge. Our framework is designed to solve the task of automatically distinguishing AD patients from healthy individuals. In our previous approaches to this topic [21,22] we exploited lexical, syntactic, and semantic features with measures of local, global, and topic coherence, in order to provide a more comprehensive characterization of language abilities in AD and thus a more reliable identification. In this work, we take up the challenge of using automatically learned representations instead of the traditional and consolidated handcrafted features, which have already proven to achieve good classification results. Inspired by recent studies, we push the limits of deep neural models to work under extreme conditions, such as those in the health domain, in which data scarcity is ordinary. Additionally, we combine both acoustic and linguistic information to obtain a complete picture of a patient's disabilities, similar to the type of information that clinicians receive during their interactions with patients. The rest of this work is organized as follows: Section 2 introduces the relevant state of the art on the automatic identification of AD. Then, in Sections 3 and 4, we present the dataset used in this study and a description of our methodology. Finally, classification results are reported in Section 5, while conclusions are summarized in Section 6.

Related work

The computational analysis of speech and language impairments in AD has gained growing attention in recent years. Initially, existing studies explored engineered temporal and acoustic parameters of speech, linguistic features, or a combination of both. König et al. [12] computed several temporal speech features on a dataset composed of 26 AD and 15 healthy subjects performing different tasks of isolated and continuous speech. By considering different features according to the task, the authors achieved an accuracy of 87% in the automatic identification of AD. Fraser et al. [11] used more than 350 features to capture lexical, syntactic, grammatical, and semantic phenomena from the transcriptions of a picture description task. With a selection of 35 features, the authors achieved a classification accuracy of 81.92% in distinguishing individuals with AD from healthy controls. Pompili et al. [21] exploited lexical, syntactic, semantic and pragmatic features from descriptions of the Cookie Theft picture [23], attaining an accuracy of 85.5% in the task of classifying AD patients. Gosztolya et al. [14] collected a dataset composed of 75 Hungarian speakers (25 AD, 25 MCI, and 25 healthy subjects) performing two tasks eliciting continuous speech. The set of features used considered demographic attributes, acoustic and linguistic features. Using only acoustic or only linguistic information, the authors achieved an accuracy of 82% in distinguishing AD patients from healthy subjects.
When the two types of features were combined, the accuracy increased to 86%. More recently, researchers have been shifting their focus towards more complex architectures capable of overcoming the limitations of traditional approaches. Warnita et al. [18] proposed an approach relying only on acoustic data computed from continuous speech and a gated Convolutional Neural Network (GCNN). Using majority voting per speaker and the Paralinguistic Challenge (IS2010) feature set, the authors achieved an accuracy of 73.6%. Karlekar et al. [19], on the other hand, investigated linguistic impairments using CNNs, LSTM-RNNs, and a combination of both. In this way, they obtained an accuracy of 91.1% in classifying AD patients. Chen et al. [16] went further, proposing a network based on an attention mechanism and composed of a CNN and a GRU module. In this way, the architecture should be able to analyze both local speech patterns and global macrolinguistic functions. The accuracy achieved in distinguishing AD patients was 97.42%. Finally, Zargarbashi et al. [17] designed a multi-modal feature embedding approach based on N-grams, i-vectors, and x-vectors. Classification accuracy results achieved with each of these models were 78.2%, 75.9%, and 75.1%, respectively. The joint fusion of the three models reached an accuracy of 83.6%. Our work differs from previous studies for several reasons. First, to process the text data, we use contextual embedding vectors as input to two different systems: one based on the training of Global Maximum pooling and bidirectional LSTM-RNN architectures, and one based on the statistical computation of sentence embeddings. The latter presents the advantage of being a simple approach, which does not require the training of deep, data-demanding architectures. Second, for the audio recordings, we use DNN speaker embeddings extracted from pre-trained models. These learned, speaker-representative vectors have recently shown their potential in the discrimination of neurodegenerative disorders [24]. To the best of our knowledge, this is the first work that jointly uses automatically learned representations from neural models, instead of engineered features, for both audio signals and textual data. In fact, although existing studies have shown that linguistic impairments in AD appear to be more important than acoustic ones, the traditional literature provides convincing evidence that using both sources of information will improve the accuracy of automatic diagnosis methods.

Corpus

The ADReSS dataset contains the speech recordings and corresponding annotated transcriptions of 156 subjects: 78 AD patients and 78 healthy controls matched for age and gender. The data were divided into two partitions, training and test sets, composed of 108 and 48 subjects, respectively. Recorded participants were required to provide descriptions of the Cookie Theft picture from the Boston Diagnostic Aphasia Examination [23]. Speech recordings were segmented using Voice Activity Detection (VAD) and later normalised [20]. The dataset contained both the full enhanced audio and normalised audio chunks. In our approach, we have used both the full enhanced audio and the transcriptions. The latter were annotated with disfluencies, filled pauses, repetitions, and other more complex events.
However, to build an automated system requiring a minimal annotation effort, we removed all the annotations not corresponding to the plain textual representation of words, thus better resembling the output that can be generated by an Automatic Speech Recognition (ASR) system. Overall, the whole set of transcriptions contained 17,127 words, of which 1,009 were unique. More detailed information about the duration and size of the ADReSS dataset is reported in Table 1.

Proposed methods

As shown in Figure 1 (summary of the embedding-based approaches), our multi-modal framework is based on the independent generation of acoustic and textual feature embeddings. Then, we perform an early fusion of the output of the two systems to obtain a single feature vector containing a compact representation of both speech and language characteristics. Finally, classification is performed with an SVM classifier with a linear kernel. The two systems are described in the remainder of this section.

Acoustic system

The acoustic system is strongly based on two models borrowed from the speaker verification field, i-vectors [25] and x-vectors [26]. i-vectors are statistical speaker representation vectors that have recently been used for the classification of Parkinson's Disease and for the automatic prediction of dysarthric speech metrics [27,28]. X-vectors are discriminative deep neural network-based speaker embeddings that have outperformed i-vectors in speaker and language recognition tasks [26,29,30] and have been successfully applied to AD, obstructive sleep apnea and pathological speech detection [24,31]. Both models allow a fixed-size feature vector to be extracted from a variable-length audio signal. Taking into consideration the small size of the ADReSS dataset, we preferred to exploit existing pre-trained models to produce our acoustic feature embeddings, rather than training them using in-domain challenge data. To this end, for the x-vector framework we use both the SRE and the VoxCeleb models. The first was trained mainly on telephone and microphone speech using data from the Switchboard corpus, Mixer 6, and NIST SREs [29]. The latter was trained on the augmented VoxCeleb 1 and VoxCeleb 2 datasets, which contain speech from speakers spanning a wide range of different ethnicities, accents, professions and ages [29,32]. This dataset was also used to build the i-vector pre-trained model used in this work. The inputs to the pre-trained SRE and VoxCeleb models consisted of 23- and 30-dimensional MFCC vectors, respectively, extracted with Kaldi [33] from the full recordings, using default values for window size and shift. Non-speech frames were removed using energy-based VAD. For the x-vector model, the last layers of the pre-trained model, before the softmax output layer, can be used to compute the embeddings. In this work, we extracted a 512-dimensional x-vector at layer segment6 of the network. The i-vector model is based on a GMM-UBM. The universal background model (UBM) is used to capture statistics about intra-domain and inter-domain variabilities, and a projection matrix is used to compute i-vectors. We extracted 400-dimensional i-vectors.
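To make the extraction step concrete, the sketch below shows how a fixed-size speaker embedding can be obtained from a variable-length recording with an off-the-shelf pre-trained model. It is a minimal illustration rather than our pipeline: the paper used Kaldi SRE/VoxCeleb models, whereas the sketch assumes SpeechBrain's publicly released VoxCeleb x-vector model and a hypothetical file name.

```python
# Minimal sketch: a 512-d x-vector from one recording, using a pre-trained model.
# Assumes SpeechBrain's open VoxCeleb x-vector recipe as a stand-in for the
# Kaldi models used in the paper; the wav file name is a placeholder.
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier

extractor = EncoderClassifier.from_hparams(source="speechbrain/spkrec-xvect-voxceleb")

signal, sr = torchaudio.load("participant_001.wav")          # hypothetical recording
if sr != 16000:                                               # model expects 16 kHz audio
    signal = torchaudio.functional.resample(signal, sr, 16000)

with torch.no_grad():
    xvector = extractor.encode_batch(signal)                  # shape: (1, 1, 512)
xvector = xvector.squeeze().numpy()                           # fixed-size feature vector
```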
Linguistic system

We followed two different approaches to obtain textual feature embeddings. First, we investigated the feasibility of training deep architectures with a corpus of reduced dimension like the one used in this challenge. This method is then compared with a less data-demanding one, based on the statistical computation of sentence embeddings using a pre-trained model. Both strategies rely on contextual word embeddings as input, but they provide different types of learned representations as output. In fact, to combine the information from the linguistic and the acoustic systems, the trained architectures are used only to extract linguistic features from the last layer of the models, before the final classification. In this way, we obtain a single 768-dimensional feature vector for an entire description. The sentence embedding approach, on the other hand, provides a single 768-dimensional vector for each sentence of a description. These features are then used to classify between AD patients and healthy subjects. For both approaches, the first step of the pipeline deals with the normalization of the data provided in the ADReSS dataset. In fact, we recall that besides the plain transcription of the descriptions, these also contain additional annotations and information that were removed. Then, we encode each word of the clean transcriptions into a 768-dimensional contextual embedding vector using a frozen English BERT model pre-trained with 12 layers and a hidden size of 768. This representation is fed to our two linguistic systems, described hereafter. The first system is derived from the ComParE 2020 Elderly Challenge baseline [34], and was obtained by adapting the original code to deal with the classification of AD. With this approach, three different neural models are trained on top of the contextual word embeddings: (i) a Global Maximum pooling model, (ii) bidirectional LSTM-RNNs provided with an attention module, and (iii) the second model augmented with part-of-speech (POS) embeddings. During training, the loss is evaluated on the development set. The second system provides the advantage of not requiring an additional phase of model training. Similarly to the approach followed with the acoustic system, we use automatically learned representations extracted from a pre-trained model to directly characterize linguistic deficits in AD. The contextual word embeddings obtained for each word of the clean transcriptions are now used to compute an embedding vector of fixed size for each sentence of a description. Sentence embeddings have been successfully employed in tasks of humor detection and, more generally, sentiment analysis [35,36] and information retrieval [36]. In our approach, sentence embeddings are computed by averaging the second to twelfth hidden layers for each word.

Results and discussion

The ADReSS dataset contains only training and test partitions, and for the latter the ground truth is not provided. Thus, in order to test our approaches, we retain 20% of the data from the training set and use it as a development set. In this way, we are left with 86 subjects for training, 22 for development, and 48 for testing. While creating the additional partition, we kept the dataset gender-balanced. As briefly mentioned, our evaluation method relies on an SVM [37] with a linear kernel, based on a liblinear implementation. The complexity parameter C was optimised during the development phase. The results reported in Tables 2 and 3 are obtained using the best complexity configuration. Features were normalized to have zero mean and unit variance. In the remainder of this section we first describe our results on the development set for each system independently and then for their final fusion. Finally, for the best systems, we report the results obtained on the test set.
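As an illustration of this evaluation setup, the following minimal sketch (with random placeholder arrays standing in for the actual embeddings and labels; not the exact code used) fits a liblinear-backed linear SVM on z-score-normalized features and selects the complexity parameter C on the development split.

```python
# Sketch of the classification setup: zero-mean/unit-variance scaling, a linear
# SVM (liblinear via scikit-learn's LinearSVC), and C tuned on the dev split.
# Array shapes mirror the 86/22 train/dev split; the data here are placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(86, 512)), rng.integers(0, 2, 86)
X_dev, y_dev = rng.normal(size=(22, 512)), rng.integers(0, 2, 22)

scaler = StandardScaler().fit(X_train)              # zero mean, unit variance
X_train_s, X_dev_s = scaler.transform(X_train), scaler.transform(X_dev)

best = max(
    ((C, accuracy_score(y_dev, LinearSVC(C=C).fit(X_train_s, y_train).predict(X_dev_s)))
     for C in [1e-3, 1e-2, 1e-1, 1, 10, 100]),      # grid for the complexity parameter
    key=lambda t: t[1],
)
print(f"best C = {best[0]}, dev accuracy = {best[1]:.3f}")
```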
Acoustic system

Results using the different automatically learned acoustic feature embeddings are summarized in Table 2. Also in this case, we explored different independent models and then performed an early fusion of the best acoustic results attained. From Table 2 it is possible to observe that the x-vector VoxCeleb model usually achieves a lower classification accuracy. However, when we combine both i-vectors and x-vectors extracted from this model, the accuracy resulting from their fusion is comparable to that of x-vectors using the SRE model, which is currently our best result on the development set. These outcomes are slightly lower than those found in the literature review for similar works. In fact, we recall that Warnita et al. [18] and Zargarbashi et al. [17] obtained accuracies of 73.6%, 75.9%, and 75.1%, using, respectively, a gated CNN with the IS10 acoustic feature set and the i-vector/x-vector paradigms. Our approach, however, differs from those of these authors since we are using a smaller dataset and do not rely on DNN training. Nevertheless, since we are interested in corroborating these results on the test set, we select the acoustic feature embeddings extracted from the pre-trained x-vector SRE model for the evaluation. The use of pre-trained acoustic embedding extractors was motivated by the reduced size of the ADReSS dataset, which we considered to be insufficient for data-hungry deep learning approaches. To confirm this, we also trained an end-to-end LSTM model for AD classification. The architecture consisted of one dense and two LSTM layers with a softmax activation function. The network took as input chunks of 500 voiced frames using 23-dimensional MFCCs with delta and delta-delta. Majority voting was performed over all the chunks from the same speaker to generate a single prediction per speaker. This end-to-end approach performed very poorly, with an accuracy around chance level on the development set, confirming our expectation that the ADReSS dataset is not suited for training a deep learning end-to-end system.

Linguistic system

Results obtained with our different linguistic systems are summarized in Table 3. This table reports the performance for the features trained with the three neural models, their fusion, and finally for the sentence embeddings approach. For the latter, we present only results achieved using majority voting over the entire description. Our best classification result attained an accuracy of 90.91% on the development set using the fusion of the linguistic feature sets generated by the three neural models. Comparing this result with the one obtained by sentence embeddings, we acknowledge that neural models outperform simpler strategies even with constrained training data. This was somewhat surprising and in contradiction with the similar experiments performed with the acoustic system. We hypothesize that the large amount of contextual information provided by the BERT model is helpful in overcoming the limited size of the ADReSS dataset. Nevertheless, we suspect that the high accuracy attained with the neural models may be too optimistic, because the development set was used both for testing and for evaluating the model's loss. Thus, in spite of their lower outcome, the sentence embeddings approach is selected as one of the systems to be evaluated on the test set. On the one hand, we think that they may represent a more reliable system, since they do not require additional training. On the other hand, we also observe that they achieve higher classification scores when compared with a similar approach based on GloVe embeddings [38], thus corroborating our decision.
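As a sketch of how such sentence embeddings can be derived, the snippet below averages hidden layers 2-12 of a frozen BERT encoder for each token and then pools over tokens. It assumes the 12-layer, 768-dimensional bert-base-uncased checkpoint from the Hugging Face transformers library as a stand-in for the frozen English BERT used here.

```python
# Sketch: a 768-d sentence embedding from a frozen BERT, averaging the second
# to twelfth encoder layers per token and then mean-pooling over tokens.
# "bert-base-uncased" is an assumed stand-in for the checkpoint actually used.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def sentence_embedding(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # out.hidden_states: 13 tensors (embedding output + 12 encoder layers), each (1, T, 768)
    layers = torch.stack(out.hidden_states[2:13])   # encoder layers 2..12
    token_vecs = layers.mean(dim=0).squeeze(0)      # average over layers -> (T, 768)
    return token_vecs.mean(dim=0)                   # average over tokens -> (768,)

emb = sentence_embedding("the boy is taking a cookie from the jar")
```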
Fusion of systems

To provide a comprehensive evaluation of speech and language impairments in AD, the best results obtained with the acoustic and the linguistic systems were combined in an early fusion fashion. We merged the x-vector feature set obtained with the SRE model with the combination of linguistic feature sets (GMax/LSTM-RNNs/LSTM-RNNs-POS) generated by the three neural models. Unfortunately, results on the development set using this extended set of features did not provide any further improvement with respect to using the linguistic system alone. We believe that, in this case, the predictive ability of the linguistic features completely overrides that of the acoustic ones. Nevertheless, we select the combination of these two systems as our main system for the evaluation.

Results on the test set

Overall, we submitted three systems for the evaluation: (i) the fusion of the best results achieved by the linguistic and acoustic systems, (ii) sentence embeddings, and (iii) the best acoustic system. A summary of these results is reported in Table 4. In general, we found a consistent drop in the performance of our methods when evaluated on the test set, even for those systems based on features that do not require a training phase. The first system submitted achieved the best result, with an accuracy of 81.25%, showing that deep architectures with contextual word embeddings are actually capable of overcoming the limitations of a constrained dataset. The worst result was achieved by the acoustic system alone, with an average accuracy of 54.17%. This outcome is lower than the ADReSS baseline (62.50%) [20], indicating that there is still room for improving our acoustic approach. We relied on pre-trained models to overcome the lack of data, but we ended up with a similar problem. It is likely that an adaptation of these models to the characteristics of elderly speech would allow for better performance.

Conclusions

In this work we presented a multi-modal approach to the classification of AD based on automatically learned feature representations. For both the acoustic and linguistic systems, we investigated feature embedding vectors extracted from pre-trained models, as well as the feasibility of training deep neural architectures. Using a combination of both approaches, we attained accuracies of 90.91% and 81.25% on the development and test sets, respectively. Our results showed that acoustic systems, in comparison to linguistic ones, require more data in order to improve the predictive ability of neural models and obtain fine-tuned feature representations. Nonetheless, it is worth noting that the linguistic systems used manually generated transcriptions. In the presence of potential ASR errors - which are commonly exacerbated in the case of atypical speech, such as AD speech - acoustic systems may play a more relevant role. The impact of these errors could be an interesting analysis for future work, as well as the investigation of robust acoustic methods and models specially tailored to elderly and AD speech characteristics.
Specific Sialoforms Required for the Immune Suppressive Activity of Human Soluble CD52

Human CD52 is a small glycopeptide (12 amino acid residues) with one N-linked glycosylation site at asparagine 3 (Asn3) and several potential O-glycosylation serine/threonine sites. Soluble CD52 is released from the surface of activated T cells and mediates immune suppression via its glycan moiety. In suppressing activated T cells, it first sequesters the pro-inflammatory high-mobility group box 1 (HMGB1) protein, which facilitates its binding to the inhibitory sialic acid-binding immunoglobulin-like lectin-10 (Siglec-10) receptor. We aimed to identify the features of the CD52 glycan that underlie its bioactivity. Analysis of native CD52 purified from human spleen revealed extensive heterogeneity in N-glycosylation and multi-antennary sialylated N-glycans with abundant polyLacNAc extensions, together with mainly di-sialylated O-glycosylation type structures. Glycomic (porous graphitized carbon-ESI-MS/MS) and glycopeptide (C8-LC-ESI-MS) analyses of recombinant soluble human CD52-immunoglobulin Fc fusion proteins revealed that CD52 bioactivity was correlated with a high abundance of tetra-antennary α-2,3/6 sialylated N-glycans. Removal of α-2,3 sialylation abolished bioactivity, which was restored by re-sialylation with α-2,3 sialyltransferases. When glycoforms of CD52-Fc were fractionated by anion exchange MonoQ-GL chromatography, bioactive fractions displayed mainly tetra-antennary, α-2,3 sialylated N-glycan structures and a lower relative abundance of bisecting GlcNAc structures compared to non-bioactive fractions. In addition, O-glycan core type-2 di-sialylated structures at Ser12 were more abundant in bioactive CD52 fractions. Understanding the structural features of the CD52 glycan required for its bioactivity will aid its development as an immunotherapeutic agent.

INTRODUCTION

CD52 is a glycoprotein composed of only 12 amino acids extensively modified by both N-linked and possible O-linked glycosylation, anchored by glycosylphosphatidylinositol (GPI) to the surface of leukocytic and male reproductive cells (1,2). The conserved CD52 peptide backbone probably functions only as a scaffold for presentation of the large N-linked glycan, which masks the small GPI-anchored peptide and acts as the prime feature of the CD52 antigen with respect to cell-cell contacts (1,2). This notion is supported by the recent discovery of the immune suppressive role of soluble CD52 in vitro and in vivo (3-5). Activated human T cells with high expression of CD52 were found to exhibit immune suppressive activity via phospholipase C-mediated release of soluble CD52, which was shown to bind to the inhibitory sialic acid-binding immunoglobulin (Ig)-like lectin-10 (Siglec-10) receptor on neighboring T cell populations (3). This sialic acid interaction was subsequently shown to require initial binding of the soluble CD52 glycan to the damage-associated molecular pattern (DAMP) protein, high-mobility group box 1 (HMGB1). Complexing of soluble CD52 with HMGB1 promoted binding of the CD52 N-glycan, preferentially in α-2,3 sialic acid linkage, to Siglec-10 (4). In the only previous mass spectrometric analysis, the N-glycans on human leukocyte CD52 exhibited extensive heterogeneity, with multi-antennary complexes containing core α-1,6 fucosylation, abundant polyLacNAc extensions, and variable sialylation (6).
With recent insights into the function of soluble CD52, and its potential as an immunotherapeutic agent, the glycan structure-function determinants of CD52 warrant more detailed investigation. In particular, although the CD52 N-glycan is known to be required for bioactivity (3,4), its structure is not fully elucidated and the glycoforms required for bioactivity have not been identified. In addition, even with a total of six potential serine or threonine attachment sites, the O-glycosylation of CD52 has not been analyzed. We aimed therefore to identify the structural features of the CD52 glycan required for its bioactivity using both purified native human CD52 and recombinant soluble CD52 expressed as a fusion protein with immunoglobulin Fc.

MATERIALS AND METHODS

Human Blood and Spleen Donors

Cells were isolated from human blood buffy coats (Australian Red Cross Blood Service, Melbourne, VIC, Australia) or blood of de-identified healthy volunteers with informed consent through the Volunteer Blood Donor Registry of The Walter and Eliza Hall Institute of Medical Research (WEHI), following approval by the WEHI and Melbourne Health Human Ethics Committees. Peripheral blood mononuclear cells (PBMCs) were isolated from fresh human blood on Ficoll/Hypaque (Amersham Pharmacia, Uppsala, Sweden), washed in phosphate-buffered saline (PBS) and re-suspended in Iscove's Modified Dulbecco's Medium (IMDM) containing 5% pooled, heat-inactivated human serum (PHS; Australian Red Cross, Melbourne, Australia), 100 mM non-essential amino acids, 2 mM glutamine, and 50 µM 2-mercaptoethanol (IP5 medium). A cadaveric spleen was obtained via the Australian Islet Transplant Consortium and experienced coordinators of Donate Life from a heart-beating, brain-dead, previously healthy donor, with informed written consent of next of kin. All studies were approved by the WEHI Human Research Ethics Committee (Project 05/12).

Purification of Native CD52 From Human Spleen

Frozen human spleen tissue (10 mg) was homogenized with three volumes of water as described in Xia et al. (1). In brief, the homogenate was mixed with methanol and chloroform at 11:5.4 volumes, respectively. Samples were left to stir for 30 min and allowed to stand for 1 h. The upper (aqueous) phase was collected, evaporated, dialyzed, and freeze-dried. NHS-activated Sepharose 4 Fast Flow resin was incubated with 1 mg of purified anti-CD52 antibody in 0.5 mL of PBS for 3 h at RT. The mixture was incubated overnight at 4°C and quenched with 1 M ethanolamine. A Bio-Rad 10-mL Poly-Prep column was used for packing, and the resin was washed sequentially with 5 mL of PBS, 5 mL of pH 11.5 diethylamine, and 5 mL of PBS/0.02% sodium azide. The column was stored at 4°C in 5 mL of PBS/0.02% sodium azide before use. Spleen extracts were solubilized with 2 mL of 2% sodium deoxycholate in PBS, then added to the packed column, which was washed with 5 mL of PBS containing 0.5% sodium deoxycholate. The sample was eluted with six 500 µL volumes of elution buffer (50 mM diethylamine, 500 mM NaCl, pH 11.5) containing 0.5% sodium deoxycholate. The eluate was collected, neutralized with 50 µL of HCl (0.1 M) and dialyzed against PBS and water.

CD52 Recombinant Proteins

Human CD52-Fc recombinant proteins, CD52-Fc I (Expi293), CD52-Fc II (FreeStyle HEK293F), and CD52-Fc III (Expi293), were produced as described (3).
The signal peptide sequences joined to human IgG1 Fc were constructed by polymerase chain reaction (PCR), then digested and ligated into an FTGW lentivirus vector or pCAGGS vector for the transfection of HEK293F and Expi293 cells. The construct included a flexible GGSGG linker, a Strep-tag II sequence for purification (7), and a cleavage site for Factor Xa protease between the signal peptide and the Fc molecule. The recombinant proteins were purified from the medium by affinity chromatography on Strep-Tactin resin and eluted with 2.5 mM desthiobiotin (3). ³H-Thymidine Incorporation Assay PBMCs are primary cells and cannot be cultured for more than one passage under normal conditions. PBMCs (2 × 10⁵ cells/well) in IP5 medium were incubated for up to 3 d at 37°C in 5% CO2 in 96-well round-bottomed plates with or without the activating antigen, tetanus toxoid (10 Lyons flocculating units per ml), and various concentrations of CD52-Fc or control Fc protein, in a total volume of 200 µL. To measure cell proliferation, the radioactive nucleoside ³H-thymidine (1 µCi) was added for the last 16 h of incubation. ³H-thymidine is incorporated into newly synthesized DNA during mitotic cell division. The cells were collected and the radioactivity in DNA measured by scintillation counting. De-sialylation and Re-sialylation of Recombinant CD52-Fc Protein De-sialylation and re-sialylation of recombinant CD52-Fc III proteins were performed by a modification of the method of Paulson and Rogers (10). Briefly, CD52-Fc (500 µg each) was incubated with Clostridium perfringens type V sialidase (50 mU/mL) for 3 h at 37°C to remove all types of sialic acids. Samples were then passed through a Protein G-Sepharose column, which was washed twice with PBS before the bound protein was eluted with 0.1 M glycine-HCl, pH 2.8, into 1 M Tris-HCl, pH 8.0, followed by dialysis against PBS. Binding to MAL-I lectin was assayed to confirm removal of sialic acids. CD52-Fc III from Expi293 cells was then incubated with either of two sialyltransferases, PdST6GalI, which restores sialic acid residues in α-2,6 linkage with the underlying galactose, or CstII, which restores sialic acid residues in α-2,3 linkage with galactose, in the presence of 0.46-0.90 mM CMP-N-acetylneuraminic acid sodium salt (Carbosynth, Compton, Berkshire, United Kingdom) for 3 h at 37°C. The resulting CD52-Fc III proteins with different linkages (α-2,3 or α-2,6) were passed through Protein G-Sepharose columns, washed twice with PBS and eluted with 0.1 M glycine-HCl, pH 2.8, into 1 M Tris-HCl, pH 8.0, followed by dialysis against PBS. Samples were freeze-dried, re-suspended in PBS at 200 µg/mL and stored at −20°C. Fc Fragment Removal CD52-Fc III recombinant protein fractions (50-200 µg) were incubated with 4 µL of Factor Xa protease (purified from bovine plasma; New England Biolabs, Ipswich, USA) in a total volume of 1 mL of cleavage buffer (20 mM Tris-HCl, pH 8, 100 mM NaCl, 2 mM CaCl2). Samples were incubated overnight at RT, then mixed three times with Protein G-Sepharose beads for 1 h at RT and centrifuged at 10,000 rpm for 15 min. Fc fragment removal was confirmed by Western blot using anti-human IgG (Fc-specific, produced in goat; Sigma Aldrich, St. Louis, USA) and anti-CD52 (rabbit) antibodies (Santa Cruz Biotechnology, Dallas, USA). N- and O-Linked Glycan Release for Mass Spectrometry Analysis MonoQ-fractionated and whole (non-fractionated) recombinant CD52-Fc III were dot-blotted onto a PVDF membrane.
Soluble CD52 with the Fc removed was kept in solution prior to N-glycan release by an overnight incubation with 2.5 units of N-glycosidase F (PNGase F from Elizabethkingia miricola; Roche, Basel, Switzerland) at 37°C, followed by a NaBH4 reduction (1 M NaBH4, 50 mM KOH) for 3 h at 50°C. The O-glycans were subsequently released by overnight reductive β-elimination using 0.5 M NaBH4, 50 mM KOH at 50°C. The released and reduced N- and O-glycans were thoroughly desalted prior to LC-MS/MS as described previously (11). Mass Spectrometry and Data Analysis of Released Glycans Glycans were separated on a porous graphitized carbon (PGC) column (5 µm particle size, 180 µm internal diameter × 10 cm column length; Hypercarb KAPPA Capillary Column, Thermo Scientific, Waltham, USA), operated at a constant flow rate of 4 µl/min using a Dionex Ultimate 3000 LC (Thermo Scientific). The separated glycans were detected online by liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS) using an LTQ Velos Pro mass spectrometer (Thermo Scientific). The PGC column was equilibrated with 10 mM ammonium bicarbonate (Sigma Aldrich) and samples were separated on a 0-70% (v/v) acetonitrile in 10 mM ammonium bicarbonate gradient over 75 min. The ESI capillary voltage was set at 3.2 kV. The automatic gain control target was set to 80,000. MS1 full scans were acquired between m/z 600-2,000. All glycan mass spectra were acquired in negative ion mode. The LTQ mass spectrometer was calibrated with a tune mix (Pierce ESI negative ions, Thermo Scientific) for a mass accuracy of 0.2 Da. CID-MS/MS was carried out on the five most abundant precursor ions in each full scan using 35% normalized collision energy. Possible monosaccharide compositions were provided by GlycoMod (Expasy, http://web.expasy.org/glycomod/) based on the molecular mass of the glycan precursor ions (12). Analysis of MS/MS spectra was performed with Thermo Xcalibur Qual Browser software. Possible glycan structures were identified based on diagnostic fragment ions (e.g., m/z 368 for core fucosylation, and others as reported (13)) and B/Y- and C/Z-glycan fragments in the CID-MS/MS spectra. A mass tolerance of 0.2 Da was allowed for both the precursor and product ions. The relative abundances of the identified glycans were determined as a percentage of the total peak area from the MS signal, using the area under the curve (AUC) of extracted ion chromatograms of the glycan precursor ions (14). Profiling the N- and O-Glycans on the CD52 Peptide MonoQ-fractionated and unfractionated CD52 glycoforms without the Fc were desalted on C18 micro-SPE stage tips (Merck-Millipore, Burlington, USA). Elution was performed with 90% acetonitrile (ACN) and samples were dried and redissolved in 0.1% formic acid (FA). The desalted CD52 glycopeptides were analyzed by ESI-LC-MS in positive ion polarity mode using a quadrupole time-of-flight (Q-TOF) 6538 mass spectrometer (Agilent Technologies, Mulgrave, Australia) coupled to an HPLC (Agilent 1260 Infinity). In parallel experiments, N-glycosidase F was used to remove N-glycans from some samples of CD52 (with a resulting Asn→Asp conversion, i.e., +1 Da) to enable better ionization of the highly heterogeneous and anionic CD52 glycopeptides, and the N- and O-glycan occupancy was determined from these spectra. Samples (500 ng) were injected onto a C8 column (ProteCol C8, 3 µm particle size, 300 Å pore size, 300 µm inner diameter × 10 cm length; SGE Analytical Science).
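The AUC-based relative quantification described above is straightforward to express in code. The following is a minimal Python sketch, assuming each glycan's precursor-ion extracted ion chromatogram (EIC) is already available as paired time/intensity arrays; the function names and example glycans are illustrative, not part of the published workflow.

```python
import numpy as np

def eic_auc(times, intensities):
    """Area under the curve of one extracted ion chromatogram (EIC)."""
    return np.trapz(intensities, times)

def relative_abundances(eics):
    """Relative abundance (%) of each glycan from its precursor-ion EIC.

    `eics` maps a glycan label to a (times, intensities) pair; each
    abundance is reported as a percentage of the summed AUC, mirroring
    the AUC-based quantification described in the text.
    """
    areas = {glycan: eic_auc(t, i) for glycan, (t, i) in eics.items()}
    total = sum(areas.values())
    return {glycan: 100.0 * area / total for glycan, area in areas.items()}

# Illustrative data: two hypothetical glycan precursors on a 75 min gradient.
t = np.linspace(0, 75, 500)
eics = {
    "GlcNAc5Man3Gal2NeuAc1 (m/z 1140.4, 2-)": (t, np.exp(-((t - 30) / 2) ** 2)),
    "tetra-sialylated N-glycan": (t, 0.4 * np.exp(-((t - 45) / 3) ** 2)),
}
print(relative_abundances(eics))
```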
The HPLC gradient started with 0.1% FA and rose linearly to 60% (v/v) ACN in 0.1% FA over 30 min. The column was then washed with 99% (v/v) ACN for 10 min before re-equilibration with 0.1% FA for another 10 min. The flow rate was set to 4 µL/min with an optimized positive fragmentor potential of 200 V and the following MS settings: m/z range 400-2,500, nitrogen drying gas flow rate 8 L/min at 300°C, nebulizer pressure 10 psi, positive capillary potential 4.3 kV, skimmer potential 65 V. The mass spectrometer was calibrated with a tune mix (Agilent Technologies) to reach a mass accuracy typically better than 0.2 ppm. MassHunter Workstation vB.06 (Agilent Technologies) was used for analysis and deconvolution of the resulting spectra. The glycans previously determined by the PGC-ESI-MS/MS analysis were used to guide the assignment of glycoforms to deconvoluted CD52 peptides based on accurate molecular mass. Mono Q Column Fractionation CD52-Fc III was diluted into 5 mL of 50 mM Tris-HCl, pH 8.3, and applied to a Mono Q column (Mono Q 5/50 GL; GE Lifesciences, Parramatta, Australia). The column was washed with 10 column volumes of 50 mM Tris-HCl, pH 8.3, and then eluted with 50 column volumes of 50 mM Tris-HCl, 500 mM NaCl, pH 8.3, in 0.5 mL fractions. Fractions were collected and analyzed by isoelectric focusing (IEF). IEF Novex pH 3-10 IEF gels (Life Technologies, Carlsbad, USA) were used for pI determination. CD52-Fc fractions were loaded with sample buffer and run at 100 V for 2 h, then at 250 V for 1 h and, finally, the voltage was increased to 500 V for 30 min. After electrophoresis, the gel was carefully transferred to a clean container, washed and fixed with 20% trichloroacetic acid (TCA) for 1 h at RT, rinsed with distilled water, stained with colloidal Coomassie blue (Sigma Aldrich) for 2 h at RT, and thoroughly de-stained with distilled water. EThcD Fragmentation for O-Glycan Site Localization on the CD52 Peptide Fractionated CD52 glycoforms were treated with PNGase F prior to O-glycan site localization analysis. CD52 peptides were analyzed using a Dionex 3500RS nanoUHPLC coupled to an Orbitrap Fusion Tribrid mass spectrometer in positive mode with the same LC gradient mentioned in "Profiling the N- and O-Glycans on the CD52 Peptide," but with a nanoflow rate (250 nL/min). The following MS settings were used: spray voltage 2.3 kV, Orbitrap resolution 120,000, scan range m/z 550-1,500, AGC target 400,000 with one microscan. The HCD-MS/MS used 40% nCE. Precursors whose fragment spectra contained diagnostic oxonium ions for glycopeptides, i.e., m/z 204.0867, 138.0545, and 366.1396, were selected for a second, EThcD (nCE 15%) fragmentation. The analysis of all fragment spectra was carried out using Thermo Xcalibur Qual Browser software with the aid of Byonic (v2.16.11; Protein Metrics Inc, Cupertino, USA) using the following parameters: precursor mass tolerance 6 ppm; fragment mass tolerances of 1 Da and 10 ppm to account, respectively, for possible proton transfer during ETD fragment formation and for the MS/MS resolution; and, as variable modifications, deamidation and the two core type 2 O-glycans previously seen in the intact mass analysis. Statistical Analysis Data are expressed as mean ± standard deviation (SD). The significance of differences between groups was determined by ANOVA with post-hoc comparisons of pairs and Bonferroni correction, using Prism software (GraphPad Software). p < 0.05 was used throughout as the significance threshold.
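The oxonium-ion-triggered selection of precursors for EThcD can be illustrated with a short filter over HCD peak lists. This is a minimal sketch, assuming peak lists are available as (m/z, intensity) pairs; the tolerance and minimum-hit settings are illustrative assumptions, not the instrument's actual trigger parameters.

```python
# Diagnostic oxonium ions for glycopeptides (HexNAc fragments and HexNAc+Hex).
OXONIUM_MZ = (204.0867, 138.0545, 366.1396)

def has_oxonium_ions(peaks, tol_ppm=10.0, min_hits=2):
    """Return True if an HCD spectrum contains glycopeptide oxonium ions.

    `peaks` is a list of (mz, intensity) pairs from one HCD-MS/MS scan.
    A precursor whose HCD spectrum passes this filter would be queued for
    a follow-up EThcD scan, as in the acquisition scheme described above.
    """
    hits = 0
    for target in OXONIUM_MZ:
        tol = target * tol_ppm * 1e-6  # convert ppm tolerance to Da
        if any(abs(mz - target) <= tol for mz, _ in peaks):
            hits += 1
    return hits >= min_hits

# Illustrative spectrum with two of the three diagnostic ions present.
spectrum = [(138.0546, 3.0e4), (204.0866, 8.0e4), (512.20, 1.2e3)]
print(has_oxonium_ions(spectrum))  # True
```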
Human Spleen-Derived CD52 Exhibits Extensive N- and O-Glycosylation Heterogeneity To characterize the natural glycosylation of human CD52, we purified CD52 from human spleen and performed a glycomic analysis of its released N- and O-glycans (Figures 1A,B). We confirmed high N-glycosylation heterogeneity, expressed as multi-antennary sialylated N-glycans with abundant polyLacNAc extensions (Figure 1A). Similar N-glycans have been previously reported for naturally occurring human CD52 (5). The O-glycosylation profile was characterized as core type 1 and core type 2 sialylated structures, with mainly (66%) di-sialylated core type 2 O-glycans (Figure 1B). This glycan heterogeneity raises the question of whether particular bioactive glycoforms of CD52 exist and whether such heterogeneity is reflected in the recombinant form of human CD52. The yield of purified native soluble CD52 was insufficient to enable us to pinpoint the bioactive glycoforms on the naturally occurring glycoprotein. Therefore, we engineered human CD52 as a recombinant fusion protein conjugated with an IgG1 Fc fragment, as described (3). Previously, we demonstrated the ability of recombinant CD52-Fc, but not its Fc component, to suppress a range of immune functions (3, 4). The two recombinant human CD52-Fc batches we generated for this study recapitulated the previously observed immunosuppressive bioactivity (Figure 2A). However, the Fc has a single N-linked glycosylation site at N297 (Figure 2Ci), which had to be considered in characterizing and assessing the impact of the N-glycosylation of recombinant CD52-Fc. This was addressed in two ways: (i) by analyzing a recombinant form of human CD52-Fc in which the Fc contained an N297A mutation, allowing analysis of the CD52 N-glycosylation profile at the released-glycan level without interference from the Fc N-glycan (Figure 2Cii), and (ii) by removal of the Fc component from CD52-Fc by Factor Xa proteolysis of a cleavage site appropriately incorporated in the CD52-Fc construct, as shown by a Western blot using a specific antibody for CD52 (Figure 2B). Bioactive Recombinant CD52 Glycoforms Display More Abundant Tri- and Tetra-Antennary Sialylated N-Glycans We had noted that the specific bioactivity of recombinant CD52-Fc varied from batch to batch. Therefore, we compared two CD52-Fc variants made in different host cells, referred to here as CD52-Fc I (from Expi293 cells) and CD52-Fc II (from HEK293F cells), which displayed higher and lower immunosuppressive activity, respectively (Figure 3A). N-glycans were released via in-solution treatment with PNGase F and subsequently analyzed by PGC-ESI-MS/MS (9). N-glycans on cleaved CD52 I had greater relative abundances of bi-, tri- and tetra-antennary sialylated glycans compared to CD52 II (Figure 3B). CD52 I also displayed a significantly higher relative abundance of sialylated structures possibly containing LacNAc moieties (Figure 3B). Not only the numbers of antennae, but also their degree of sialylation differed between the two recombinant CD52 glycoforms: tetra-sialylated N-glycans were significantly more abundant in CD52 I (6.9 ± 0.1%) compared to CD52 II (4.2 ± 0.6%; p < 0.05). In contrast, CD52 II displayed a significantly greater abundance of non-sialylated bi-antennary and bisecting structures (35 and 4% compared to 19 and 2%, respectively; Figure 3B). After the removal of Fc, recombinant CD52 I and CD52 II were subjected to high-resolution intact peptide analysis using C8-LC-ESI-MS. Both proteins showed N-glycosylation profiles similar to those of the released glycans.
The high resolution of the Q-TOF instrumentation, even in the high m/z range, enabled the identification of very elongated sialylated antennary structures, including a search for N-glycans carrying Lewis-type structures (antenna-type fucosylation). The experimental isotopic distribution of both variants of recombinant CD52 matched the theoretical isotopic distribution of the 90% tri-sialylated (non-Lewis fucosylated) CD52 glycoforms, indicating that the main glycoforms of recombinant CD52 do not carry Lewis-type fucosylation (Supplementary Figure 1A). The more bioactive CD52 I displayed a higher level of multi-antennary sialylated and possibly LacNAc-elongated structures (Supplementary Figure 1B). α-2,3 Sialylated N-Glycans Are Indispensable for CD52 Activity CD52 N-glycans displaying α-2,3 sialylation preferentially bind to Siglec-10 (4). PGC-ESI-MS/MS glycan analysis and MAL-I lectin blotting were used to identify any differences in sialic acid linkage between the two variants of recombinant CD52-Fc (CD52-Fc I and CD52-Fc II). MAL-I preferentially recognizes α-2,3 sialic acid-linked tri- and tetra-sialylated N-glycans (15). Despite the high separation power of PGC for sialoglycans, this technique has difficulty resolving very large multi-antennary sialylated glycans, but can easily discriminate between α-2,3 and α-2,6 sialylation on the more common bi- and tri-antennary N-glycans. Several abundant bi-antennary α-2,3 sialoglycans were observed on CD52 I. For one sialylated glycan, m/z 1140.4 (2−) (GlcNAc5Man3Gal2NeuAc1), only the α-2,3 sialic acid glycan isomer was observed on CD52 I. On the other hand, the less bioactive CD52 II carried both α-2,3 and α-2,6 sialo-N-glycans (Figure 3C). This differential sialyl linkage presentation between the two recombinant CD52 variants was supported by MAL-I lectin binding, which was higher for the more bioactive CD52-Fc I (Figure 3D). The importance of α-2,3 sialylation for the bioactivity of CD52-Fc was confirmed in a parallel experiment in which the immunosuppressive activity of sialidase-treated and re-sialylated CD52-Fc was determined relative to the original recombinant variant. Treatment of CD52-Fc with sialidase completely abolished its immunosuppressive activity, which was fully restored upon re-sialylation in α-2,3, but not α-2,6, linkage (Figure 3E). Overall, these findings indicate that the bioactivity of CD52-Fc is associated with the presence of α-2,3-linked tetra-sialylated N-glycans on CD52. Active CD52 Glycoforms Resolved by Anion Exchange Chromatography We performed anion exchange chromatography on a MonoQ column in order to separate recombinant CD52-Fc III variants based on their degree of sialylation, with the aim of identifying the most bioactive forms (Figure 4A). The increasing degree of sialylation (decreasing isoelectric point, pI) of CD52-Fc in the collected fractions was confirmed by isoelectric focusing (IEF) (Supplementary Figure 2) and mass spectrometry. The N-glycans released from fractions 46 to 51 (F46-F51) exhibited a gradual increase in sialic acid content and in structures containing a higher number of antennae (Table 1), as also shown by intact glycopeptide analysis (Supplementary Figure 3). Released and intact glycan analysis of fraction 30 revealed various GlcNAc- and Gal-capped structures and a complete absence of sialic acid moieties (Table 1 and Supplementary Figure 3).
Remarkably, only two fractions, F48 and F49, with pIs in the 5-6 range, displayed significant immunosuppressive activity (Figure 4B). The adjacent fractions were not bioactive, even at higher protein concentrations (Supplementary Figures 4A,B). These late-eluting, uniquely bioactive fractions (F48-F49) were highly enriched (60-70%) in tri- and tetra-sialylated glycans. The Highly Anionic MonoQ Fractions Are Enriched in O-Sialylated Glycans Initially, O-glycosylation analysis of de-N-glycosylated CD52 at the intact peptide level revealed that both variants of recombinant CD52 (CD52 I and CD52 II) had very low (4%) O-glycan occupancy (Figure 6A), casting doubt on the relevance of O-glycosylation for CD52 activity. Non-deamidated signatures were absent in the spectra for both CD52 I and II, indicating that the CD52 peptides were fully N-glycosylated (Figure 6A, black symbols). Like human spleen CD52, the recombinant CD52 proteins were found to contain mainly core type 2 O-glycans with one or two sialic acid residues (Figure 6A, gray and orange symbols, respectively). Sialylated core type 1 O-glycans were also identified, albeit at very low abundance (<0.5%) (data not shown). Interestingly, the most anionic MonoQ CD52 fractions (F46-F51) had a considerably higher O-glycan occupancy (15-20%) compared to the original non-fractionated CD52 (4%). Extracted ion chromatograms (EIC) of the bioactive fractions (F48 and F49) showed an absence of sialo-isomers for the most abundant O-glycan structure, m/z 665.2 (2−) (GalNAc1GlcNAc1Gal2NeuAc2), but not for m/z 1040.4 (1−) (GalNAc1GlcNAc1Gal2NeuAc) (Figure 6B). Finally, O-glycan site localization was determined by electron transfer/higher-energy collision dissociation (EThcD), which provided c and z ions, allowing the conclusion that di-sialylated O-glycans were conjugated to Ser12, and possibly Ser10, whereas mono-sialylated O-glycans were found only on Thr8 (Figures 6Ci,ii). DISCUSSION In this study, we determined that CD52 from human spleen and recombinant forms of human CD52-Fc carry N-glycans that display complex-type core fucosylation, abundant sialylation, and LacNAc extensions. These features corroborate a previous report (6) on the N-glycan of human spleen CD52, but we extended this in several ways. By comparing two recombinant CD52-Fc glycoproteins that differed in specific bioactivity, made in different host cells, we found that the more bioactive form had a significantly higher abundance of tetra-sialylated N-glycan structures with α-2,3 sialic acid linkage. The less bioactive form, on the other hand, exhibited significantly more bisecting GlcNAc structures. By MonoQ anion exchange chromatography, CD52-Fc was separated into a gradient of anionic glycoforms, which exhibited distinctly different immunosuppressive activities. Again, the most bioactive glycoforms uniquely displayed an abundance of tri- and tetra-sialylated glycans (60-70%), high levels of α-2,3 sialylation (58%), and an absence of bisecting GlcNAcylation. Moreover, the most anionic tri- and tetra-sialylated N-glycopeptides had a uniquely high abundance of the core type 2 di-sialylated O-glycan on Ser12. Both glycan- and glycopeptide-based analytical approaches were used to correlate CD52 glycan structure with CD52 bioactivity. The glycan approach depended on the high resolving power of PGC columns to separate glycan isomers and isobaric structures. It was used in conjunction with negative mode ionization to provide fragment ions diagnostic of certain glycan structural features (11,14).
The glycopeptide-based approach allowed analysis of CD52 glycans directly bound to the peptide backbone with the assurance of no interference from the Fc glycan. The two approaches largely corroborated each other, adding confidence in the reported structures. Indeed, we found the same results after CD52-Fc fractionation by anion exchange chromatography, as described. Anion exchange was previously employed to fractionate sialylated glycoforms of the soluble and sperm-associated form of CD52 in the mouse reproductive tract (16), but the glycan structure was not analyzed. We confirmed the importance of the α-2,3 sialic acid linkage for CD52-Fc bioactivity. Previously, we showed that soluble CD52 mediates T-cell suppression by binding to Siglec-10 (3). The diverse family of mostly inhibitory Siglec receptors has evolved to recognize linkage-specific sialic acid residues on host cells and pathogens (17). Siglec-10 is highly expressed on leukocytes (18,19) and plays significant roles in regulating the innate and adaptive immune response to tissue injury, sepsis and viral invasion (20). Previously, Siglec-10 was reported to have no binding preference for α-2,3 or α-2,6 sialylation (18,21). However, we recently found that human CD52-Fc binds to Siglec-10 preferentially through the α-2,3 sialic acid linkage (4). In the present study, bioactive CD52-Fc was characterized by a high abundance of the α-2,3 sialic acid linkage, and re-sialylation in α-2,3 linkage restored the bioactivity of sialidase-treated CD52-Fc. Regarding CD52 O-glycosylation, Ermini et al. (22) deduced the presence of O-glycosylation on CD52 by antibody binding, but did not determine the type, occupancy or localization of the O-glycans. We characterized for the first time the O-glycans on human spleen CD52. In addition, recombinant CD52-Fc was found to contain a low abundance (4%) of mainly core type 2 O-glycans with one or two sialic acid residues, on Ser12 and Thr8, but this increased significantly (to 15-20%) in MonoQ-purified bioactive CD52-Fc. Given the proximity of the N- and O-glycosylation sites on the CD52 peptide, the low degree of O-glycosylation could be due to steric hindrance from the bulky N-glycan. Determination of the O-glycan sites and occupancies on human spleen CD52 was challenging due to its limited availability. However, with continuing developments in highly sensitive glycoproteomics (20), it should soon be possible to identify the site-specific O-glycosylation of CD52 directly from tissues and bodily fluids without prior purification. Our results also indicate that recombinant human CD52 does not require fucosylated O-glycans for bioactivity, as found for CD52 of the male reproductive tract (23). The polypeptide of recombinant human CD52 is identical to that of human spleen CD52 and shares the core type 2 and sialylated core type 2 O-glycans with reproductive tract CD52 (24). However, we identified a dramatic enrichment of O-glycosylation in the MonoQ-purified active CD52-Fc fractions, strongly implying a role for both N- and O-glycosylation in the bioactivity of CD52. Another striking observation was the inverse association between CD52 bioactivity and bisecting GlcNAcylation. Previously, N-glycans displaying bisecting GlcNAc were found to correlate with a decrease in tri- and tetra-sialylated structures, since bisecting GlcNAc residues inhibit the activity of the GlcNAc-transferases required to generate multi-antennary sialoglycans (25).
Furthermore, an increase in bisecting GlcNAcylation has been linked with a decrease in α-2,3 sialylation (26), which we show here to be important for CD52 bioactivity. The functions of bisecting GlcNAc are not fully understood, but they have been associated with a decrease in target-cell susceptibility to NK cell-induced lysis (27). Interestingly, CD52 in recombinant human CD52-Fc resembled naturally occurring CD52 purified from human spleen with respect to N- and O-glycosylation, except in the degree of polyLacNAc elongation, which was greater in the native form. Although bioactive CD52 was characterized by a higher abundance of sialylated structures and polyLacNAcs, the contribution of polyLacNAc units to CD52 activity is yet to be determined. In conclusion, the comparison of native and recombinant human CD52-Fc, and of CD52-Fc variants differing in bioactivity, enabled us to identify glycoform features that underlie the immune suppressive activity of CD52. These can be summarized as an abundance of tri- and tetra-antennary α-2,3-sialylated N-glycans, an absence of bisecting GlcNAcylation, and the presence of di-sialylated core type 2 O-glycosylation. Further glycomic analysis will be required to detail the length of polyLacNAc extensions and the degree of polyLacNAc branching. The present study extends our knowledge of the glycan structure required for CD52 bioactivity and may assist in the design and production of CD52-Fc as an immunotherapeutic agent. ETHICS STATEMENT Cells were isolated from human blood buffy coats (Australian Red Cross Blood Service, Melbourne, VIC, Australia) or blood of de-identified healthy volunteers with informed consent through the Volunteer Blood Donor Registry of The Walter and Eliza Hall Institute of Medical Research (WEHI), following approval by the WEHI and Melbourne Health Human Ethics Committees. Healthy human spleens from cadaveric organ donors were obtained from the Australian Islet Transplant Consortium and trained coordinators of Donate Life from heart-beating, brain-dead donors with informed written consent of the next of kin. All studies were approved by the WEHI Human Research Ethics Committee (Project 05/12).
2019-08-23T06:03:54.492Z
2019-08-27T00:00:00.000
{ "year": 2019, "sha1": "19fe8ca9d852b775f0e5ca010e6b52497cea9d53", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2019.01967/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "19fe8ca9d852b775f0e5ca010e6b52497cea9d53", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
225169234
pes2o/s2orc
v3-fos-license
Adapting Competitiveness and Gamification to a Digital Platform for Foreign Language Learning — With the growth of globalization, learning a second language has become a necessity for developing in an increasingly demanding multicultural environment. However, traditional methodologies are still being applied in teaching, which is a problem for today's students, also called digital natives, because these methodologies should adapt to the current digital age of technology and knowledge. To address this problem, a digital resource for foreign language learning was developed, taking competitiveness and gamification as bases to motivate students and engage them in the course. The objective of this tool is to improve the processing of theoretical information by making use of a virtual environment with competitive activities and gamification elements such as medals awarded for completing tasks, a user progress bar, and a ranking based on the scores obtained, so that students are motivated and improve their learning. This research analyzes the effects of applying competitiveness and gamification in a virtual environment aimed at foreign language learning. It was found that 81.03% of students are more inclined to use gamified digital tools and that 82.76% of students feel more motivated to learn a second language using this methodology. Introduction In a society that is constantly growing through globalization, language proficiency is no longer optional but necessary. One of the main reasons for its importance is that it is essential to understand and share information in a language that everyone can understand, and for this, English is used as the main language in the dissemination of results [1] and in multi-disciplinary research [2], [3]. This cognitive process of language acquisition varies from person to person, which is why new teaching strategies must be applied in order to cover the different ways of processing the information obtained. Moreover, in this digital age of knowledge we have "digital natives", so named because they were born in the age of technology [4]: a part of society that has grown hand in hand with technological advances and therefore requires a different way of learning [5], [6]. There are several proposals to solve this problem, one of them being "gamification", which makes use of video game design elements in a different context in order to motivate and engage students [17], [18] in the learning process [19], [20]. Recent research concludes that gamification has a positive effect on behavior [21]; according to [22], some gamification elements involve specific psychological aspects, such as obtaining medals and rewards, which involves personal satisfaction and the desire to win; likewise, [23] indicate that the use of these new tools arouses the curiosity of students, generating a certain commitment to discover new things, which serves as a main element in student motivation [24].
In addition, several case studies demonstrate the success of this approach, as is the case of online courses offered on virtual platforms (MOOC: Massive Open Online Course), in whose success gamification plays an important role [25], [26] by influencing students in a positive way [27], [28] and by being adaptable to specific cases [29]. We could also mention other research projects which use gamification for teaching advanced courses, achieving positive results in student learning [30], for example a Descriptive Geometry course for Architecture [31] or programming courses [32]. Competitiveness is also used as one of the main factors in this research, since it has been shown that students develop their cognitive skills better in a competitive context, improving their overall performance [33] and optimizing the execution of certain activities through gamification [34]. Thus, gamification has rapidly become a subject studied by different fields of research and continues to be relevant to them. One of these fields, and the one covered in this research, is foreign language learning. The problem we found is that the best-known traditional methods dedicated to language education focus only on the distribution of content, without taking into account an important factor such as motivation, whose absence not only leads to poor learning but also to the demotivation that the student feels, along with problems such as lack of confidence, discouragement to continue learning, and lack of commitment [35], ending in attrition and abandonment of language learning [36]. Furthermore, there are still gaps in this field that must be addressed, which is why this problem remains relevant. The objective of this research is to analyze the effects of providing a digital resource that reinforces the processing of information acquired in English language learning and whose activities are mainly based on competitiveness. This tool serves as an additional resource to motivate the student in the learning process by applying gamification. Its significance to the field lies in showing that motivation and competitiveness are important factors to take into account when developing such tools. In order to analyze the effects of using a gamified digital resource based on competitiveness and aimed at language learning, the following research questions were established: • Regarding gamification ─ R.Q.S.1. Can better learning through the use of a gamified digital tool be verified? ─ R.Q.S.2. Is the expected motivation achieved in students? ─ R.Q.S.3. How do students perceive the use of these digital resources? • Regarding competitiveness ─ Is the application of competitiveness relevant in this process? ─ In this methodology, does competitiveness among students improve learning?
2 Theoretical Framework Competitiveness The premise of obtaining a benefit is what encourages competitive people to outdo themselves and others [37], and it also requires a certain level of commitment to an activity in order to complete it. The satisfaction of reaching the top places [38] is a kind of motivation that is implicitly present in video games, since some of their elements produce a benign effect on the psychological need for personal satisfaction [39]. Because users strive to win and, in this way, compete while "playing" [40], competitiveness is considered one of the main factors in student motivation. The tool described here has a ranking system which compares not only user scores but also the medals obtained throughout the progress of the activities; a sketch of this update rule is given below. The system is updated every time a new record is saved in the score table and the medal count table. Motivation Table 1 shows that the most common dropout factor in e-learning platforms is, among others, the lack of motivation [41], since these platforms focus more on the distribution of course content than on motivating students so that they engage with the activities. We consider motivation a main factor because, through it, students should be engaged toward obtaining better learning results. To achieve this engagement, activities are implemented that meet students' psychological and social needs [42], [43], so that they feel committed without any conditioning. Thus, according to self-determination theory [44], we can distinguish two types of motivation according to the objectives they pursue. Intrinsic motivation: This refers to performing an activity for its own sake, either because it is fun or because the person is interested in satisfying personal psychological needs. To achieve this motivation, the platform uses video game elements such as scores, challenges, and levels that pose a challenge to overcome [45]. Word repetition activities are included, as these exercises make students learn indirectly by repeating words and associating them with an action or object [46]. Extrinsic motivation: The external stimuli a person has when performing an activity, such as being recognized by others. The platform uses incentives such as virtual medals and assistance through collaboration and competition activities [39], [40], which serve as positive reinforcement to meet students' external personal needs. To understand how gamification is used in non-recreational activities, it is necessary to understand how games manage to capture the attention of players. According to [47], we can compare game design with committed learning, since both have similar characteristics: goal orientation, challenging tasks, standards in the way of solving a problem, reinforcement after a mistake, performance affirmation, cooperation with others, novelty and variety, and possibility of choice.
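To make the ranking rule above concrete, the following is a minimal Python sketch of a leaderboard ordered by score, with medals as the tie-breaker; the data model and names are illustrative assumptions, not taken from the platform's actual code.

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    score: int
    medals: int

def ranking(players):
    """Order players for the leaderboard: score first, medals as tie-breaker.

    Re-run whenever a new record is written to the score or medal tables,
    mirroring the update rule of the platform's ranking system.
    """
    return sorted(players, key=lambda p: (p.score, p.medals), reverse=True)

# Usage: Luis outranks Ana on medals despite an equal score.
board = [Player("Ana", 120, 3), Player("Luis", 120, 5), Player("Eva", 90, 7)]
for position, p in enumerate(ranking(board), start=1):
    print(position, p.name, p.score, p.medals)
```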
Narrative The narrative is the way a story is told from the point of view of a character within a video game, and it can include different plots depending on the actions performed by the player [48]. As noted by [49], it is possible to learn by doing pleasant and entertaining activities, such as watching a movie, reading a book or watching a theater play. These activities teach a message indirectly, which is received by those who perform them, who in turn take the role of spectators rather than participants. In order to provide this participation, this research uses a gamified digital tool to support the interaction between the student and the English teaching platform [50]. Game design models In order to design a video game [51], certain elements described by reference models must be taken into account; therefore, the two models considered in developing this gamified digital tool are described here. On the one hand, there are reference models used specifically for the design of video games with the aim of making them entertaining and addictive; on the other hand, there are models whose objective is to establish guidelines for the development of educational games with learning as the main factor. A model of each type is taken into account in order to create a tool that meets both objectives. MDA Framework: The first reference model used in this research is the MDA framework (Mechanics-Dynamics-Aesthetics) [52]. This model is used because it seeks to make the platform resemble a video game [53]; it also allows us to analyze and divide the design of a video game into three components: • Mechanics. Related to the base components of the video game, such as rules, player control, and the algorithms involved; that is, all the actions that the player can perform and that allow the realization of the dynamics. • Dynamics. Related to the context of the game, such as options, restrictions, and competition. They indicate how the mechanics are executed based on the player's behavior. • Aesthetics. The player's emotional response to how the game is presented; therefore, the feelings and sensations that the video game produces in the user are involved. Content and language integrated learning approach. Since the platform has an educational purpose, this approach is taken into account because it integrates gamification processes with the learning of content and language. The phases of this model are: determining the teaching and learning objectives according to the needs of the student; based on the content and language that educators wish to teach, taking into account the subject of the course; and finally choosing the genre of video game best adapted to that theme [54], focusing on the "emotions", since it is thanks to them that better learning is achieved by students.
Gamification This term comes from the word "game". Despite the increase in gamification research, a formal scientific definition has not yet been established: for example, according to [55], gamification consists of creating a playful experience by imitating the same psychological experience that video games create; according to [56], gamification is the process of making an activity more like a game; and according to [57], gamification is the use of game design elements in non-playful contexts. All these definitions revolve around one main concept: transferring the design elements used by video games to activities outside games, in order to capture the attention of the people involved in that context. That is why gamification is proposed here as an educational strategy with the aim of motivating students toward optimal learning of the English language [58]. This tool, designed for web systems, allows the development of different activities such as challenges, review of topics, and tests for each topic, among others, so that students have freedom in choosing what to do. To differentiate mechanics and dynamics by groups with similar qualities, the MDA reference model is followed: Mechanics: The platform has activities such as completing the word, ordering sentences, translating sentences, relating words, relating images to words, interaction with audio and 3D objects, and videos in lesson format that can be accessed as many times as desired. Player progress: All the quantitative elements that the player obtains as the story progresses, such as points, medals, ranking and character level. Tasks: Activities necessary to meet certain objectives, such as missions and mini-games (a sketch of this logic follows below). Game content: The types of activities presented by the platform, such as world exploration and environment simulation. Additional features: All those characteristics not included in the previous classifications, such as reinforcement, map, background story, characters, enemies, tutorial, incremental difficulty, and clues or advice. Dynamics: The dynamics refer to how the user executes the mechanics. The platform interface is presented as a 3D classroom where the student can interact with the objects. In this way, the student is allowed free navigation through the various options offered by the digital tool. We include actions such as receiving medals, character selection, exploring the virtual environment, solving mini-games, difficulty adjustment, solution clues, user decisions and an evaluation system. Aesthetics: The students enter a 3D virtual world where they can interact with all the objects in that environment. Foreign language teaching It is necessary to take into account the skills we need in order to express ourselves correctly in English [59]. There are four such skills: speaking, which refers to the pronunciation of words; writing, which refers to writing correctly; listening, which refers to being able to understand what we hear; and reading, which refers to reading and understanding words in another language. This research covers the four skills by presenting different activities for each of them [60].
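As an illustration of the "player progress" and "tasks" mechanics listed above, the following is a minimal Python sketch of a hypothetical medal and level rule; the point thresholds, task names and data layout are assumptions made for the example, not the platform's actual implementation.

```python
# Hypothetical rule: award a medal when all tasks in a lesson are done,
# and raise the character level every 100 points, echoing the "player
# progress" mechanics described in the text.
LESSON_TASKS = {"greetings": {"complete-the-word", "order-sentences"}}

def complete_task(profile, lesson, task, points):
    """Record one finished task and update points, level and medals."""
    profile.setdefault("done", {}).setdefault(lesson, set()).add(task)
    profile["points"] = profile.get("points", 0) + points
    profile["level"] = 1 + profile["points"] // 100
    if profile["done"][lesson] == LESSON_TASKS[lesson]:
        profile.setdefault("medals", []).append(f"{lesson} medal")
    return profile

student = {}
complete_task(student, "greetings", "complete-the-word", 40)
print(complete_task(student, "greetings", "order-sentences", 70))
# -> points 110, level 2, and the "greetings medal" awarded
```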
For the teaching of foreign languages, we can find different platforms that allow, for example, content creation in the form of interactive questionnaires [61], [62], or that foster user commitment through mobile applications using collaboration [63], as well as multi-platform systems such as "Duolingo", which implements different activities covering English grammar and pronunciation [64], [65]. However, only a few of them include motivation as a main factor [66]. In this research we want to give real importance to motivation, since through it we can achieve better results regarding student learning. Instruments and procedure The application of the study and the collection of data were carried out between March 2 and 13, 2020, during English classes at the Language Center of the "Universidad Nacional de San Agustín de Arequipa", Arequipa, Peru. The study application lasted 1 hour per day for 10 days, a total of 10 hours per group and 40 hours of study application overall. The study included two language proficiency tests carried out before and after the application of the methodology to measure students' actual knowledge. Two questionnaires were also administered regarding this new gamification and competitiveness methodology, one for the experimental group and one for the control group. These questionnaires, together with descriptive statistics, Cronbach's alpha, and correlations, helped us answer our research questions. Reliability: Since the study consists of measuring qualities that are not directly observable, we used Cronbach's alpha. It allows the measurement of observable qualities for each student that are directly related to the non-observable quality of interest. It is the mean of the correlations between the variables that form the scale and can be related to the variances and correlations of the items [67]; a short computational sketch is given below. For true reliability, the items need to be closely related to each other, since the maximum level of correlation is reached when the items are all the same; ideally, Cronbach's alpha should therefore be as close to 1 as possible. According to [68], when α ≥ 0.7 the instrument is truly reliable, whereas α < 0.7 could indicate weak reliability of the instrument. Participants The participants were people enrolled in language courses offered by this institution, aged between 16 and 35 years, as it is a university center with undergraduate and postgraduate training. There was a total of 114 students. To verify that learning took place, four basic-level groups of students were chosen. The English teacher was the same for the four groups, from which two control groups and two experimental groups were randomly selected. Experimental group: On the first day, a brief tutorial was given on how to use the platform and the activities that the students had to do. For the study to be consistent, the teacher taught the same topics in the four groups, using the digital tool only in the two experimental groups. Control group: The teacher taught the same topics as in the experimental groups, with the only difference that the platform was not used; instead, a manual system of individual scoring for each correct intervention and an English book were used as support material.
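The Cronbach's alpha criterion above can be computed directly from a questionnaire's item-score matrix. The following is a minimal Python sketch using the standard variance form of the coefficient; the example responses are invented for illustration, not data from the study.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents x items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores));
    values of 0.7 or higher are taken as reliable, per the criterion above.
    """
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)       # sample variance per item
    total_var = X.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative Likert-style responses (rows: students, columns: items).
answers = [[5, 4, 5], [4, 4, 4], [2, 3, 2], [5, 5, 4]]
print(round(cronbach_alpha(answers), 2))  # ~0.92 for this toy matrix
```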
Regarding gamification Can better learning through the use of a gamified digital tool be verified? Table 2 summarizes the students' grades before and after the evaluation. Figure 1 represents the students' progress in foreign language learning using our methodology with the support of a gamified digital tool, and Figure 2 represents the same learning done in the traditional way, without digital tools. Reading pre- and post-evaluation: The results indicate that there was genuine learning in English reading skills. Both the mean and the standard deviation indicate that the effect of applying the digital tool was clearly positive for this skill. It is also evident that learning was greatest for this skill. Writing pre- and post-evaluation: It is evident that students who used the tool learned more for this skill than those who did not; there was more learning in the experimental groups than in the control groups. The platform is in Spanish but can be configured in English, and most activities present grammar as the main exercise. Listening pre- and post-evaluation: The results show a significant difference in learning between the experimental and control groups. Although the groups started with almost similar averages, the final results indicate that the experimental groups achieved greater learning than the control groups. Students can practice the language more constantly by listening to audio recorded by foreign speakers, which allows them to become more familiar with the language. Speaking pre- and post-evaluation: The results indicate that, for both the experimental and the control groups, there was learning, but not as significant as for the other skills. This is because the tool provides the support needed to play and listen to words recorded by a foreign speaker so that the student can repeat them; however, it does not have a speech recognition function, so the teacher must review this skill for each student. Is the expected motivation achieved in students? How do students perceive the use of these digital resources? Table 3 shows that the students in the experimental group who used the tool feel more interest (81.04%) in continuing to learn, feel motivated to learn English (82.76%) and to participate in class (79.31%). Likewise, the students of the experimental group indicate that they feel satisfied with what they have learned (91.38%), since they reinforced the knowledge obtained in class with the use of the gamified digital tool. Finally, the students indicate that they felt good about the tool (81.03%) and that it helped them work harder so as to outdo the other students (86.21%). Regarding competitiveness Is the application of competitiveness relevant in this process? In this type of strategy, does competitiveness among students improve learning? Table 5 shows that the majority of students in the experimental groups agree with collaborative (Aexper = 57.14%, Bexper = 71.43%) and competitive learning (Aexper = 78.57%, Bexper = 76.67%), and also indicate that competitiveness does motivate them to continue learning (Aexper = 78.57%, Bexper = 83.33%). This is unlike the control groups, where students prefer to learn individually (Ccontrol = 53.85%, Dcontrol = 63.3%) and likewise feel motivated to learn by themselves (Ccontrol = 61.54%, Dcontrol = 53.33%).
Discussion and Conclusion In this study, a methodology that applied gamification to a digital support tool was used to improve students' language learning. This methodology consisted of the use of gamification elements and activities based on competitiveness, so that students feel committed to learning in a more dynamic way than the traditional one. These gamification elements provided the feedback necessary for students to be motivated to continue with the course while being able to compete against their peers to reach the top places in virtual competitions. This methodology has been shown to be effective because, in this case, language learning requires students to relate foreign words to words they know in their own language, so the use of a graphical tool is essential. In addition, competitiveness between students was established because, on the one hand, they felt motivated to compete against others and, on the other hand, they wanted to outdo themselves. As shown in Tables 4, 5 and 6, our Cronbach's alpha coefficients are greater than 0.7 (α = 0.81 for the gamification questionnaire and α = 0.79 for the competitiveness questionnaire), which indicates that the questionnaires used, and the information collected with them, are truly reliable. This confirms that both gamification and competitiveness are important factors when applying methodologies that include digital tools to support the learning of specific topics. The results have shown that the use of digital resources as support material in class is effective. Students who use these tools feel more motivated to learn than those who do not, because learning is done dynamically and the method is innovative. The use of gamification elements generates in students a commitment to the course and its completion. Regardless of age, we consider the use of these tools important, as they offer greater participation to those students who are otherwise not interested in participating. In this case we can verify that the use of these tools, with an appropriate methodology, effectively improves learning. Furthermore, the motivation and personal satisfaction of the students are also major factors to be considered when developing these tools. Nowadays, people do most activities digitally, as they prefer a dynamic virtual environment. Finally, the competitive format presented by the platform generates in students a feeling of self-improvement and a drive to outdo the other students, since they can compete with each other. Table 2. Summary of pre- and post-test results. Table 3. Experimental group answers. N = 58. α = 0.81 (α: Cronbach's alpha). These results differ from those in Table 4, in which the students report that they feel only some interest in learning English (51.78%) and are not sufficiently motivated to learn English (75.19%). In addition, it is evident that the attempt to outdo others (75.19%) is smaller than among the students who do use the tool. It also shows that students are satisfied with what they have learned (89.29%) and that they have regular participation in class (35.71%). Finally, it is clear that students do not like to use the didactic text (50%) because it is not dynamic at all.
2020-10-28T19:20:18.186Z
2020-10-19T00:00:00.000
{ "year": 2020, "sha1": "4bc67ed79abbc73038f2c32609e569b44e94f77c", "oa_license": "CCBY", "oa_url": "https://online-journals.org/index.php/i-jet/article/download/16135/8091", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ee2e0560888a2623f194863cf7db221ed6bb02d3", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
4709333
pes2o/s2orc
v3-fos-license
Quick Response Codes to Instantiate Interactive Medical Device Instructions For Display on a Smartphone Usability is an increasingly important factor within the field of healthcare and medical device development. One of the main issues with the usability of medical devices is their complex nature. Therefore, it is vital that comprehensive and clear instructions are provided to aid in the operation of these devices. While paper-based instructions are commonly provided, they have many disadvantages which can be addressed by interactive digital instructions. Moreover, in an era of pervasive computing, it is important to provide these instructions at the point of need. This can be done using a Quick Response code and a smartphone, which allows interactive instructions to be instantly accessible. This paper presents a case study and a working prototype to test the utility of interactive medical device instructions accessed by a QR code attached to the medical device. Usability, Medical Devices, Instructions, Quick Response Codes. INTRODUCTION Usability is an increasingly important factor within the field of healthcare and medical device development. Much research has been carried out into the usability of medical devices and how this can be improved (Zhang et al., 2003; Vincent, Li, & Blandford, 2014; Fidler et al., 2015). Medical devices which are difficult to use and provide a poor user experience can, in extreme cases, mean the difference between life and death (Lin et al., 2001; Samore et al., 2004). THE NEED FOR INSTRUCTIONS One of the main issues with the usability of medical devices is their complex nature. These devices often need to facilitate multifaceted tasks and may have a large number of steps which need to be completed. While it is important to reduce the complexity of a task process as much as possible, this can only be taken so far. Therefore, it is vital that comprehensive and clear instructions are provided to aid in the operation of these devices. Without clear instructions, user errors can occur, often with minimal fault of the user (Van Cott, 1994). This was the case in a study by Fidler et al. (2016), where a nurse failed to operate a feature without instructions. These instructions need to be provided using terminology that the user can understand, or easily learn. Analysis of a set of instructions provided with a "simple" medical device showed issues with the instructions being too complex for users to fully understand (Rogers et al., 2001). The form which instructions take is also vitally important. Paper instructions are commonly used as they are cheap and easy to produce. They are also tangible, and users can work through them at their own pace. However, there is the possibility of these instructions being misplaced or destroyed. There is also the possibility that, for particularly complex devices, having all the instructions presented on paper can be overwhelming and visually distracting for users (Figure 1). Moreover, reading instructions and then performing the corresponding actions is a sub-optimal approach, whereas hearing digital audio instructions while performing them could be more efficient for the user, as evidenced in user research on the publicly accessible cardiac defibrillator, a medical device that provides audio instructions in an emergency situation (Torney et al., 2016).
Providing instructions in a digital format can go a long way toward combatting these issues, as digital instructions are not easily destroyed or misplaced. If presented correctly, they can also better manage the cognitive load of the user, allowing the user to successfully follow the instructions. Taking this a step further, by linking a Quick Response (QR) code to the digital instructions, users are able to access the instructions instantaneously without needing to navigate to a particular location on the web or on their smart device. It also means that new users will not need any special equipment to access the instructions. The instructions themselves can be simple interactive cards, augmented with audio, that can be swiped to take the user through the use of a medical device in a sequential manner. Such instructions can be used in vivo for staff who have not used a particular brand of device before, or for training. CASE STUDY A prototype of online instructions was created for a new x-ray cabinet and accompanying software. The instructions for using the device were divided into seven sections: 1) Switching on the equipment, 2) Beginning a session, 3) Loading a sample, 4) Acquiring images, 5) Image adjustment, 6) Saving a session, 7) Switching off the equipment. Each section was broken down into a maximum of nine steps. A description, as well as an image of the x-ray cabinet or a screenshot of the software, is used to describe what the user needs to do in order to complete each step, as shown in Figure 2. The instructions were developed using the web technologies Hypertext Markup Language (HTML5), Cascading Style Sheets (CSS3), and JavaScript. CSS3 and JavaScript were used to add gesture controls for ease of use on mobile devices. Transition animations were also added, as these help to give the user a feeling of moving smoothly from one step to the next. Audio instructions were also added for each step, as hearing the instructions being read may be beneficial for some users. Studies have shown this is particularly true for those in the medical industry (Reid, 1987). The instructions are hosted on the company server. The QR code linking to the instructions is attached to the side of the x-ray cabinet, as shown in Figure 3. Users can then scan the QR code with their smartphone or tablet to access the instructions. FURTHER WORK It is hoped that a usability study will be carried out to determine the effectiveness of the digital instructions against a paper equivalent. The medical device is to be shown at a conference where potential users can be recruited. These participants will be divided into two groups: one to use the paper version of the instructions, and the other to use the digital instructions accessed via the QR code. The participants will be asked to complete a series of tasks using the instructions. The completion times of both groups will then be analysed to determine the effectiveness of the digital instructions. There is also potential for the digital instructions to be refactored into an augmented reality solution, as these types of instructions have proven to be even more effective than standardised digital instructions (Baird & Barfield, 1999). Figure 1: An example of paper instructions being overwhelming to users.
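As a concrete illustration of producing the QR code that links to the hosted instructions, the following is a minimal Python sketch using the widely available `qrcode` library; the URL and output file name are placeholders, not the actual company-server address used in the case study.

```python
# pip install qrcode[pil]
import qrcode

# Hypothetical URL of the hosted instructions; substitute the real
# company-server address before printing the label.
INSTRUCTIONS_URL = "https://example.com/xray-cabinet/instructions"

# Higher error correction keeps the code scannable even if the label
# on the side of the cabinet becomes scuffed.
qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_Q)
qr.add_data(INSTRUCTIONS_URL)
qr.make(fit=True)
qr.make_image(fill_color="black", back_color="white").save("cabinet_qr.png")
```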
Dynamical quantum phase transitions in many-body localized systems We investigate dynamical quantum phase transitions in disordered quantum many-body models that can support many-body localized phases. Employing $l$-bits formalism, we lay out the conditions for which singularities indicative of the transitions appear in the context of many-body localization. Using the combination of the mapping onto $l$-bits and exact diagonalization results, we explicitly demonstrate the presence of these singularities for a candidate model that features many-body localization. Our work paves the way for understanding dynamical quantum phase transitions in the context of many-body localization, and elucidating whether different phases of the latter can be detected from analyzing the former. The results presented are experimentally accessible with state-of-the-art ultracold-atom and ion-trap setups. I. INTRODUCTION After the establishment of universality and scaling in equilibrium, 1,2 particularly with the invention of the renormalization group, 3,4 their extension to out-ofequilibrium systems has become a major area of research in physics. Notwithstanding considerable progress for classical out-of-equilibrium systems, 5 out-of-equilibrium dynamics for quantum systems remains an active frontier. In recent years, the field of dynamical quantum phase transitions [6][7][8] (DQPT) has shed new light on the out-of-equilibrium quantum many-body criticality. The concept of DQPT relies on an intuitive analogy between the thermal partition function and the overlap of an initial state and its time-evolved self in the wake of a quench, referred to as the Loschmidt amplitude in this context. One can thus construct the Loschmidt return rate, which is proportional to the logarithm of this overlap, as a dynamical analog of the thermal free energy, with evolution time standing for complexified inverse temperature. As such, nonanalyticities in the return rate would occur at critical times much the same way a thermal phase transition manifests itself as a nonanalyticity in the thermal free energy at a critical temperature. Even though quite a few of the original questions raised at the onset of the field of DQPT have now been answered, in the quest to attain these answers far more questions have arisen. 9 For example, in the initial study of quenches in the one-dimensional transverse-field Ising model, 6 the singularities in the return rate were found only after quenches across the equilibrium quantum critical point. 9 Soon afterwards it became clear that this was not the necessary condition for DQPT. Rather, one can define the concept of a dynamical quantum critical point, which may in some cases coincide with the equilibrium one, that separates quenches with DQPT from those without. [10][11][12] Yet later it was observed that in long-range quantum spin chains with power-law interactions, 13,14 the DQPT occur regardless of the quench. Nevertheless, a mean-ingful dynamical critical point can still be defined, this time separating qualitatively different types of evolution after the quench, which notably depends on whether the order parameter oscillates about zero in the time-evolved quenched system. [13][14][15][16][17] This in turn spurred further study of the steady states reached after the quench, their relationship with the DQPT occurring in the evolution towards them, and the effects of quasiparticle excitations, which is still ongoing. 
[18][19][20][21][22][23] The large body of theoretical work on DQPT in closed clean quantum many-body systems in the wake of a quantum quench has recently been supported by experimental realizations in ion-trap 24 and ultracold-atom 25 setups. Naturally, the robustness of DQPT has since been investigated in other paradigms of quantum many-body physics, such as Floquet systems [26][27][28] where a novel type of "Floquet singularities" appear as intrinsic features of periodic time modulation, and also in disordered systems, 29 where singularities arise in the return rate of random-field classical Ising models. This latter result directly inspires the investigation of DQPT in many-body localized (MBL) quantum systems 30-32 as the next frontier in this field. Indeed, MBL systems are drastically alien to the models that have thus far been utilized in the study of DQPT. On the one hand, so-called emergent integrability due to the failure of MBL systems to thermalize after a quantum quench may perhaps allow one to consider that DQPT behavior should not be ruled out given its prevalence in clean integrable models. On the other hand, how the emergent exotic MBL phases, if at all, are related to possible DQPT nonanalyticities is a completely open question. Furthermore, since the exact origin of DQPT is still not fully understood, with a quasiparticle origin thus far the most convincing argument, 18,22,23 studying DQPT in MBL systems can possibly shed further light on what exactly the necessary and sufficient conditions are for singularities to arise in the return rate. The Loschmidt amplitude is intimately connected to the partition function of the corresponding system evaluated at imaginary temperature as we elaborate below. In turn, the imaginary-temperature partition function measures the spectral form factor, or the correlations between energy levels in a quantum system. 29 DQPT would then be equivalent to having strong correlations between energy levels across significant energy intervals much larger than the level spacing. We would expect those correlations in integrable and close-to-integrable systems, as well as in some MBL systems, but not in chaotic quantum systems. This line of inquiry further motivates studying MBL systems for signatures of DQPT in them. In this work, we provide analytic and numerical evidence showing that quantum many-body systems that support MBL can indeed show rich DQPT behavior. The rest of the paper is organized as follows: In Sec. II, we review the imaginary-temperature boundary and full partition functions, and present a candidate quantum MBL model in whose partition functions singularities arise. In Sec. III, we use the l-bits formalism to determine the conditions under which disordered quantum models will exhibit singularities in their imaginary-temperature full partition function. We present and discuss the DQPT in our candidate model in Sec. IV. We conclude in Sec. V and provide an outlook on future work. Additional details on the l-bits analysis are provided in Appendix A.

II. PARTITION FUNCTIONS AND MODELS

We would like to study the probability that a quantum system placed initially in a state $|\psi\rangle$, after evolving for time $t$, returns back to its original state $|\psi\rangle$. This quantity can be written as $P(t) = |Z(t)|^2$, with $Z(t) = \langle\psi|e^{-iHt}|\psi\rangle = \sum_\alpha |c_\alpha|^2 e^{-iE_\alpha t}$. Here $c_\alpha$ are the overlap coefficients between the state $|\psi\rangle$ and the eigenstates $|\psi_\alpha\rangle$ of the Hamiltonian $H$ corresponding to the energy levels $E_\alpha$. $Z(t)$ is a boundary partition function construed in the seminal DQPT work of Ref.
6 as a dynamical analog of the thermal partition function. Consequently, the Loschmidt return rate, $\lambda(t) = -\frac{1}{N}\ln|Z(t)|^2$ (2), is a dynamical analog of the thermal free energy, with $N$ standing for system size. General principles of statistical physics dictate that for large systems $Z(t)$ may be well approximated by the thermal finite-time partition function with an appropriately chosen temperature $T$. Among these, the $T \to \infty$ partition function, $\mathcal{Z}(t) = \mathrm{Tr}\, e^{-iHt} = \sum_\alpha e^{-iE_\alpha t}$, is especially suitable for study: depending on the choice of $|\psi\rangle$, in some cases $Z(t)$ and $\mathcal{Z}(t)$ may even coincide, specifically when $|\psi\rangle$ is proportional to the sum $\sum_\alpha |\psi_\alpha\rangle$. At the same time, $|\mathcal{Z}(t)|^2$ is the so-called spectral form factor of the system, $|\mathcal{Z}(t)|^2 = \sum_{\alpha\beta} e^{-i(E_\alpha - E_\beta)t}$. The Fourier transform of the spectral form factor over time $t$ is simply the correlation between the density of states at energies separated by a certain energy interval, $\int dt\, e^{i\omega t}\, |\mathcal{Z}(t)|^2 \propto \int dE\, \rho(E)\,\rho(E+\omega)$. Here, $\rho(E) = \sum_\alpha \delta(E - E_\alpha)$ is the density of states. A nonanalyticity in $\mathcal{Z}(t)$ at a certain time $t_c$ is therefore a reflection of strong correlations existing between energy levels at the energy interval $\omega_c \sim 1/t_c$. It might be quite surprising to expect these correlations to persist over energy intervals far larger than the level spacing for an interacting quantum many-body system. Indeed, an extreme example of a generic quantum system is provided by Random Matrix Theory (RMT). Within that theory, average spectral form factors are known to have a single nonanalyticity at $t = t_c$ equal to the inverse average level spacing. 33 Level spacings in generic quantum many-body theories are exponentially small in system size, resulting in an exponentially large time at which the potential nonanalytic behavior occurs. These types of singularities would then be nonobservable. 34 In view of this argument, it is quite remarkable that a number of models already discussed in the literature display singularities at times of the order of inverse couplings in the Hamiltonian. In other words, in these models the density of energy levels correlates across energy intervals that cover a large number of energy levels. Some of these models are integrable. It is fair to declare that in this context the existence of subtle correlations in energy levels is not surprising. Some others are nonintegrable, however. We will not attempt here to analyze the origin of energy level correlations in these models. Instead we would like to look at systems which look as different from RMT as possible. RMT is supposed to represent generic "chaotic" quantum systems with Wigner-Dyson level statistics. Quite distinct from these are systems with Poisson level statistics, including integrable models. A whole separate class of quantum systems with Poisson level statistics are MBL models. Here, we would like to discuss the appearance of singularities in $Z(t)$ and $\mathcal{Z}(t)$ of MBL systems. Let us concentrate on models of one-dimensional spin-1/2 chains of length $N$ with various random interaction terms in the Hamiltonian. Those are known to be many-body localized when disorder is strong enough. 35 An example of such an MBL model can be the Ising model with random bonds and random fields, $H = \sum_n J_n\, \sigma^z_n \sigma^z_{n+1} + \sum_n h^\perp_n\, \sigma^x_n + \sum_n h_n\, \sigma^z_n$ (9). Here $\sigma^z_n$ and $\sigma^x_n$ are Pauli matrix operators acting on spins which reside on sites $n$ of a one-dimensional lattice. All $J_n$, $h^\perp_n$ and $h_n$ are independent random variables. We shall present a series of analytic and numerical arguments supporting the idea that a certain class of these one-dimensional spin-1/2 MBL models will have singularities in their $\mathcal{Z}(t)$.
In particular, we will demonstrate that while $\mathcal{Z}(t)$ is not singular in the model given by (9), a second MBL Hamiltonian, referred to below as model (10), with $J_n$ being independent random variables, features singularities in its $\mathcal{Z}(t)$ as well as in its return probability $Z(t)$ defined for suitable initial states $|\psi\rangle$.

III. l-BITS AND RETURN RATE

Locally conserved quantities called l-bits in the context of many-body localization play an important role in our arguments, so let us review their definition. As nicely argued in Ref. 36, for any spin-1/2 Hamiltonian $H$ it is always possible to construct a set of mutually commuting operators $\tau^z_n$ whose square is identity, $(\tau^z_n)^2 = 1$, and which commute with the Hamiltonian $H$. A straightforward technique to do that would be to diagonalize the Hamiltonian $H$, such as the one above. In other words, we use the natural basis of spin tensor product states and write $H = U\Lambda U^\dagger$, where $U$ is a unitary matrix mapping the product state basis onto the basis of eigenstates of $H$, and $\Lambda$ is diagonal. Define then $\tau^z_n = U\,\sigma^z_n\,U^\dagger$. By construction, these $\tau^z_n$ all square to 1, commute with each other, and all commute with the Hamiltonian. Furthermore, it is always possible to rewrite the Hamiltonian in terms of $\tau^z_n$ according to $H = K^{(0)} + \sum_n K^{(1)}_n \tau^z_n + \sum_{n<m} K^{(2)}_{nm}\,\tau^z_n\tau^z_m + \sum_{n<m<l} K^{(3)}_{nml}\,\tau^z_n\tau^z_m\tau^z_l + \ldots$ (12). It is obvious that the total number of the coefficients $K$ in (12) is $2^N$, same as the number of eigenvalues of $H$, so they are sufficient to fully parametrize the Hamiltonian. These coefficients can all be found using $K^{(s)}_{n_1\ldots n_s} = 2^{-N}\,\mathrm{Tr}\!\left(H\,\tau^z_{n_1}\cdots\tau^z_{n_s}\right)$. By itself this construction is too general to be of much use. However, for MBL Hamiltonians the expansion (12) simplifies significantly. $U$ is defined up to the permutations of eigenvalues of the Hamiltonian. It can be argued that with the appropriate choice of $U$, the operators $\tau^z_n$ become local, that is, they can be written as a linear combination of terms involving products of spin operators $\sigma^x_m$, $\sigma^y_m$, and $\sigma^z_m$ on sites $m$ nearby $n$. At the same time, the series (12) also becomes local, in that the magnitudes of the coefficients $K^{(s)}_{n_1\ldots n_s}$ drop off exponentially with $s$, as well as with the increasing separations between the lattice sites $n_1, n_2, \ldots, n_s$ in these coefficients. In this regime, $\tau^z_n$ are usually referred to as l-bits, with l standing for localized. The coefficients $K^{(s)}$ are random, being some complicated combination of the random interaction coefficients of the original Hamiltonian. The description in terms of l-bits allows us to rewrite the imaginary-temperature partition function in a very straightforward fashion: $\mathcal{Z}(t) = \sum_{\{\tau_n = \pm 1\}} \exp\!\left(-it\left[K^{(0)} + \sum_n K^{(1)}_n \tau_n + \sum_{n<m} K^{(2)}_{nm}\,\tau_n\tau_m + \cdots\right]\right)$ (14). Here, the variables $\tau_n$ are eigenvalues of the l-bits $\tau^z_n$ taking values $\pm 1$. We can use (14) as a starting point to calculate partition functions of one-dimensional spin-1/2 MBL models. Furthermore, since in the MBL phase the coefficients $K^{(s)}$ quickly go to zero with increasing $s$, it is sufficient to retain just a few terms in the series in the exponential of (14), which is what we will rely on below. Since the coefficients $K^{(s)}_{n_1\ldots n_s}$ are random, it is important to decide which quantities can be averaged over many sets of these random coefficients. The quantity $\mathcal{Z}(t)$ is not self-averaging. This means that computing it over a particular set of $K^{(s)}_{n_1\ldots n_s}$ taken from the original randomly generated interaction coefficients in the MBL Hamiltonian is not the same as averaging it over many realizations of them. On the contrary, the return rate defined analogously to the Loschmidt return rate (2), $r(t) = -\frac{1}{N}\ln\left|\mathcal{Z}(t)/2^N\right|^2$ (15), is a self-averaging quantity well defined in the "thermodynamic limit" of infinite chain $N \to \infty$.
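For a small chain this construction can be carried out numerically in a few lines. The sketch below is illustrative only: numpy's eigendecomposition fixes one particular ordering of the eigenvalues, i.e., one of the permutations of U mentioned above, so the resulting tau^z_n will in general not be the local l-bits. It merely demonstrates that the operators commute with H and that couplings such as K^(1)_n can be read off by taking traces.

import numpy as np
from functools import reduce

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
id2 = np.eye(2)

def site_op(op, n, N):
    """Embed a single-site operator at site n of an N-site chain."""
    ops = [id2] * N
    ops[n] = op
    return reduce(np.kron, ops)

def lbits_and_k1(H, N):
    """tau^z_n = U sigma^z_n U^dagger and K^(1)_n = 2^-N Tr(H tau^z_n)."""
    _, U = np.linalg.eigh(H)                       # H = U Lambda U^dagger
    taus = [U @ site_op(sz, n, N) @ U.conj().T for n in range(N)]
    K1 = [np.trace(H @ t).real / 2**N for t in taus]
    return taus, K1

# Example: a random-bond, random-field chain in the spirit of Eq. (9), N = 4
rng = np.random.default_rng(0)
N = 4
H = sum(rng.uniform(-1, 1) * site_op(sz, n, N) @ site_op(sz, n + 1, N)
        for n in range(N - 1))
H = H + sum(rng.uniform(-1, 1) * site_op(sx, n, N) + rng.uniform(-1, 1) * site_op(sz, n, N)
            for n in range(N))
taus, K1 = lbits_and_k1(H, N)
print(np.linalg.norm(H @ taus[0] - taus[0] @ H))   # ~1e-14: tau commutes with H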
It is $r(t)$ that can be averaged over many realizations. Its average value should coincide with its typical value computed for a particular set of $K^{(s)}_{n_1\ldots n_s}$. Various constants present in the definition of $r(t)$ are there merely for normalization purposes. In the work of Ref. 29, the partition function was evaluated for the model given by (14) with $K^{(1)}_n = h_n$ random and $K^{(2)}_{nm} = (\delta_{n,m-1} + \delta_{n,m+1})J$ representing constant nearest-neighbor interactions. In other words, the following model was studied: $H = \sum_n h_n\,\tau^z_n + J \sum_n \tau^z_n\tau^z_{n+1}$ (16), with $h_n$ random independent variables. It was found, both numerically and analytically, that the resulting function $r(t)$ is singular at the points in time $t_n = n\pi/(2J)$, with $n$ an arbitrary integer. Each singularity was found to be of the type $|t - t_n|\ln|t - t_n|$. At the same time, numerical evidence indicated that if $J$ in (16) is promoted to a bond-dependent random variable $J_n$, then $r(t)$ is featureless and has no singularities. We expect it to be true generally: a generic model (14) with all the coefficients being independently random variables will feature no singularities, and in fact for large system size its $\mathcal{Z}(t)$ should self-average to a time-independent constant. We would like to generalize beyond (16). In the next section we present evidence supporting the conjecture that as long as there is at least one nonrandom interaction coefficient in (14), with the rest being random, the return rate $r(t)$ always features singularities, at positions $t_n = n\pi/(2K)$, where $n$ is an arbitrary integer and $K$ is the nonrandom interaction coefficient in (14). A generic MBL model can be expected to map onto (14) with all interaction coefficients random. However, it is also natural to expect that models should exist whose mapping to (14) features at least one nonrandom coefficient. Those models would then have singularities in their return rate. Furthermore, a possibility should not be entirely discounted that (14) with all coefficients random but correlated in a certain way would also feature singularities, and that MBL models exist which map onto these kinds of l-bit models. In the next section we present further examples of random l-bit models with singularities in their return rate, and demonstrate the existence of singularities in the MBL model given by (10).

IV. RESULTS AND DISCUSSION

An important remark which simplifies further analysis lies in the observation, already discussed in Ref. 29, that a random variable $e^{-ith_n\sigma}$, for $\sigma = \pm 1$ and $t$ sufficiently large, regardless of the probability distribution for $h_n$, is well approximated by the variable $e^{-if_n\sigma}$, where $f_n$ is now taken as uniformly distributed on the interval $f_n \in [-\pi, \pi]$. This should be fairly obvious: the latter variable is uniformly distributed over a unit circle in the complex plane, while the former approaches this distribution at large enough $t$.

[Figure caption (Fig. 2): Same as Fig. 1 but for (19), again with N = 45000. r(t) appears qualitatively different from that in Fig. 1 but features the same singularities at th = πn/2. The return rate repeats periodically beyond the largest time shown here.]

We would now like to illustrate the principle of a single nonrandom coefficient in (14) leading to singularities by considering, for example, a model of this type with $J_n$ random (17). In line with the observation in the previous paragraph, we instead study its counterpart (18), with $f_n$ independent random variables uniformly distributed over the interval $[-\pi, \pi]$. The advantage of this representation of $\mathcal{Z}(t)$ is that (18) is automatically periodic in $t$, as is clear by inspection.
At the same time, the original model (17) approaches (18) at sufficiently large $t$, $t \gg 2\pi/h$. This can also be easily verified numerically. Fig. 1 shows $r(t)$ evaluated for (18) for $N = 45000$ sites and $th$ ranging from 0 to $\pi$. $r(t)$ continues periodically beyond the displayed range of $t$. To produce this result, we took advantage of the standard transfer matrix calculation of the partition function, which allowed us to easily go to fairly large system sizes. The self-averaging property of $r(t)$ is obvious in Fig. 1: even though only one realization of disorder is taken, the curve is smooth. Similarly we can evaluate, for example, the partition function of the related model (19), with the result shown in Fig. 2. This shows a new feature at $th = \pi/4 + \pi n/2$; however, the more interesting singularity still remains at $th = \pi n/2$. All this is compatible with the observation that one nonrandom coefficient in (14) is sufficient to generate periodically repeating singularities in $\mathcal{Z}(t)$. Ref. 29 presented a detailed analysis of the singularities in (16) using analytic techniques. That analysis is no longer available for the more intricate models of (18) and (19), with random bonds instead of random fields. Instead, we have studied singularities in $r(t)$ close to $t_n = \pi n/(2h)$ in those models numerically. We find that these singularities are not in the universality class of (16). Instead, they are of the power-law type $\sim |t - t_n|^\nu$ with $\nu \approx 0.2$, as shown in Fig. 3 for the model (18). The analysis of singularities in (19) also produces $\nu \approx 0.2$. We have studied a number of other models of the type (14) with one nonrandom interaction coupling and the rest random, and they all feature singularities supporting the conjecture stated above. It is possible, however, that in those other models the singularities belong to other universality classes with distinct values of the exponent $\nu$. Studying this would be an interesting direction of further research. Given that all spin-1/2 MBL systems map into (14), it is natural to expect that among them systems can be found whose map into (14) does not produce all random and independent couplings. Those will necessarily feature singularities in their partition function $\mathcal{Z}(t)$. Identifying them however might not be easy. We propose (10) as the candidate model for this purpose.

[Figure caption (Fig. 4): Partition-function return rate r(t) given in (15) (upper panel) and Loschmidt return rate λ(t) given in (2) for the quantum model (10), with each return rate averaged over 1000 realizations of disorder. The initial state used for calculating λ(t) is the fully x-polarized product state |X⟩. It is clear that in the weak-disorder regime $h/J_0 \gg 1$ singularities appear in both return rates. The singularities seem to become less pronounced for $h/J_0 \ll 1$.]

Fig. 4 shows the result of evaluating the partition-function return rate $r(t)$ (15) and the Loschmidt return rate $\lambda(t)$ (2), calculated with $|\psi\rangle = |X\rangle$ being the state representing all spins polarized in the x-direction, for the model (10) for $N = 12$ sites by exact diagonalization, averaged over 1000 realizations of disorder, necessarily due to the relatively short length of the chain. Random variables $J_n$ are taken to be uniformly distributed on the interval $[-J_0, J_0]$. Singularities are apparent in both return rates. Note that the periodicity of $\mathcal{Z}(t)$ is $\pi/h$. In fact, in the absence of disorder, $J_0 = 0$, the return rate is easy to evaluate in closed form, and we see that with disorder $r(t)$ retains the periodicity of its disorder-free version but in addition clearly develops singularities, as desired.
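The transfer-matrix evaluation mentioned above is easy to sketch for the one model whose couplings the text specifies completely, the random-field model (16) with constant bond J. The snippet below is a minimal illustration under stated assumptions, not the authors' code: it adopts the normalization of (15), so that r(0) = 0, and rescales the propagated vector at every step, which is what makes chains of tens of thousands of sites feasible. Kinks are expected near t_n = nπ/(2J).

import numpy as np

def return_rate(t, J, h):
    """r(t) = -(1/N) log|Z(t)/2^N|^2 for Z(t) = sum over tau_n = +-1 of
    prod_n exp(-i t h_n tau_n) * prod_n exp(-i t J tau_n tau_{n+1})."""
    tau = np.array([1.0, -1.0])
    T = np.exp(-1j * t * J * np.outer(tau, tau))   # constant 2x2 bond transfer matrix
    v = np.exp(-1j * t * h[0] * tau)               # on-site factor of the first spin
    log_norm = 0.0
    for hn in h[1:]:
        v = (T @ v) * np.exp(-1j * t * hn * tau)   # absorb next bond and site
        s = np.abs(v).max()                        # rescale to avoid under/overflow
        v /= s
        log_norm += np.log(s)
    N = len(h)
    log_abs_Z = log_norm + np.log(np.abs(v.sum()))
    return -2.0 * (log_abs_Z / N - np.log(2.0))

rng = np.random.default_rng(0)
h = rng.uniform(-np.pi, np.pi, size=5000)   # one disorder realization (reduce N for speed)
ts = np.linspace(1e-3, np.pi, 300)          # scan t*J from 0 to pi
r = [return_rate(t, J=1.0, h=h) for t in ts]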
We emphasize that not all disordered many-body models exhibit singularities in the return rate as (10) does. For comparison, we can examine the return rates for the model (9). Fig. 5 clearly shows the absence of any singularities in it (the Loschmidt return rate is calculated with the same $|\psi\rangle = |X\rangle$ as above). And indeed, we can argue that this model maps into an l-bit Hamiltonian with all random couplings. We can further elucidate the model (10) by exploring the limiting cases of strong and weak disorder. In the strong disorder case, $h/J_0 \ll 1$, we can construct the l-bit Hamiltonian perturbatively; cf. (12). Carrying out this procedure (see Appendix A) to second order in $h/J_0$ does not produce any nonrandom couplings in (12). This is in line with singularities disappearing for small $h/J_0$ in Fig. 4. The limit of weak disorder, $h/J_0 \gg 1$, is much more subtle. At $J_0 = 0$ the model (10) results in many degenerate levels, separated by the energy interval $2h$ (hence the periodicity of the partition function of $2\pi/(2h) = \pi/h$). Once disorder is turned on, each such level splits into a band. One can expect each band to be many-body localized. Indeed, the effective Hamiltonian within each band, which can be obtained via second-order perturbation theory in $J_0/h$, is proportional to $J_0^2$, with its ratio to $J_0^2$ being $J_0$-independent. Thus the nature of the MBL eigenfunctions of this system does not depend on disorder strength as it is taken to zero, and no perturbation theory can be useful to analyze this phase in this limit. The precise structure of the l-bit Hamiltonian is difficult to determine in this regime. It is in this regime that the model (10) features the singularities seen in Fig. 4. We note the absence of exact periodicity in Fig. 4. That indicates that the appropriate l-bit Hamiltonian our model maps into cannot have simply a single nonrandom coefficient with the rest random and independent. That by itself would produce a periodic-in-time $r(t)$ at large enough $t$. Rather, the l-bit Hamiltonian should have a more complicated structure involving correlations between its coefficients, going beyond the simple examples of (16) or (17). Whether the singularities disappear at some critical value of $h/J_0$ or are present at all values, albeit getting weaker as $h/J_0$ gets smaller, cannot be explored with the methods currently available to us. The second scenario would imply that these singularities cannot be investigated in perturbation theory over the small parameter $h/J_0$, consistent with the arguments in Appendix A. Ultimately, MBL phases are notoriously difficult to analyze using analytic techniques. It is therefore not surprising that we have to rely mostly on the numerics to analyze our system. Finally, we observe that adding extra random terms to the Hamiltonian (10) can take us out of the class of MBL Hamiltonians with singular return rates, even if the first term in (10) remains nonrandom. Consider for example the model (22), obtained by adding a term with couplings $L_n$ random, uniformly distributed on the interval $[-L_0, L_0]$. This Hamiltonian does not feature singularities in its return rate, as shown in Fig. 6, even for tiny $L_0/h = 0.05$. Clearly, only a certain subclass of MBL models represent systems with DQPT.

[Figure caption (Fig. 6): Return rates for our model (22) for fixed $h/J_0 = 4$ and two disorder strengths $L_0/h = 0$ and $0.05$. A finite $L_0$ quickly washes out the singularities. Each return rate is averaged over 1000 disorder configurations, with |X⟩ the initial state used to compute λ(t).]
In line with the arguments above, the Hamiltonian in (22) must map onto an l-bit Hamiltonian with random and essentially uncorrelated interaction coefficients. V. CONCLUSION AND OUTLOOK In summary, we have investigated DQPT in quantum many-body systems with disorder. Using a mapping to l-bits, we have determined the conditions for which singularities appear in the return rate for such models, and presented candidates for which the return rate displays singularities for small to moderate disorder strength, and which may survive even at large disorder strength. Our results show that DQPT persist in quantum MBL systems, and are not restricted to clean quantum many-body models on which hitherto their investigation has been focused. In light of the search for an origin of DQPT, our conclusions confirm that a Landau equilibrium quantum phase transition is neither a necessary nor a sufficient condition for dynamical criticality to arise in the return rate. From an MBL point of view, this opens up several questions related to what MBL phases can imply about dynamical criticality. Even though this is well understood in traditional Landau phases and has been extensively studied in one-dimensional quantum Ising chains with various interaction ranges, 13,14,37,38 two-dimensional models, 19,39 and mean-field models, [15][16][17]40 little is known about how different MBL phases can alter the kind or presence of singularities in the return rate. In the other direction, one can wonder what the singularities in the return rate can tell us about the equilibrium, possibly MBL, phase. In Ref. 18, it is illustrated how one can determine the equilibrium physics, including Landau phases and the type of quasiparticle excitations in the spectrum of the quench Hamiltonian, directly from the return rate after a quantum-quench sweep, but no protocol was provided for discerning whether an equilibrium phase can be MBL. It would be interesting to further investigate this, and not necessarily just from the point of view of the return rate. Indeed, recently it has been shown how universal equilibrium scaling functions can be deduced at short times from spin-spin correlations after a quantum quench to the vicinity of a critical point. 41 Additionally, it is worth mentioning that the MBL transition has been observed in experiments with interacting fermions in onedimensional quasirandom optical lattices through the relaxation dynamics of the initial state. 42 Our conclusions would in principle be amenable for observation in such experiments given that DQPT have also been observed in setups of spin-polarized fermionic atoms in driven optical lattices. 25 Finally, questions remain on the possible universality classes of the DQPT observed in this work, and on general principles of mapping to l-bits Hamiltonians resulting in DQPT. We leave these open questions for future work.
Large-scale network metrics improve the classification performance of rapid-eye-movement sleep behavior disorder patients

Clinical decision support systems based on machine-learning algorithms are largely applied in the context of the diagnosis of neurodegenerative diseases (NDDs). While recent models yield robust classifications in supervised two-class problems, accurately separating Parkinson's disease (PD) from healthy control (HC) subjects, few works have looked at prodromal stages of NDDs. Idiopathic Rapid-eye Movement (REM) sleep behavior disorder (iRBD) is considered a prodromal stage of PD with a high chance of phenoconversion but with heterogeneous symptoms that hinder accurate disease prediction. Machine learning (ML) based methods can be used to develop personalized trajectory models, but these require large amounts of observational points with homogeneous features, significantly reducing the possible imaging modalities to non-invasive and cost-effective techniques such as high-density electrophysiology (hdEEG). In this work, we aimed at quantifying the increase in accuracy and robustness of the classification model with the inclusion of network-based metrics compared to the classical Fourier-based power spectral density (PSD). We performed a series of analyses to quantify significance in cohort-wise metrics, the performance of classification tasks, and the effect of feature selection on model accuracy. We report that amplitude correlation spectral profiles show the largest difference between iRBD and HC subjects mainly in delta and theta bands. Moreover, the inclusion of amplitude correlation and phase synchronization improves the classification performance by up to 11% compared to using PSD alone. Our results show that hdEEG features alone can be used as potential biomarkers in classification problems using iRBD data and that large-scale network metrics improve the performance of the model. This evidence suggests that large-scale brain network metrics should be considered important tools for investigating prodromal stages of NDD as they yield more information without harming the patient, allowing for constant and frequent longitudinal evaluation of patients at high risk of phenoconversion.

Highlights
- Network-based features are important tools to investigate prodromal stages of PD
- Amplitude correlation shows the largest difference between the two groups in 9/30 bands
- Amplitude correlation improved performance by up to 11% compared to PSD alone
- Classification robustness increases when both network-based EEG features are used
- Classifier performance worsens when PSD is added to network-based EEG features
Introduction
Nowadays, modern clinical neurology has started to consider neurodegenerative disorders (NDDs) as network pathologies, where connectivity between different brain areas provides more insight and greater predictive power towards accurate diagnosis and disease prediction. To do this, it is necessary to look for quantitative and repeatable biomarkers that characterize the brain network in neurodegenerative disease. A good biomarker should indeed be reproducible, cost-effective, readily available, and able to serve as a disease progression marker (Miglis et al., 2021). Electroencephalography (EEG) is a noninvasive, widespread, accessible, and low-cost clinical tool used to observe dynamic changes in the neuronal electrical field and represents a gold-standard diagnostic instrument in neurology studies. Neuronal activity is hierarchically organized in different oscillations that coexist in both space and time (Buzsáki, 2006). These brain oscillations represent a mechanism for neuronal communication (Fries, 2015), where coherent field oscillations facilitate stimulus propagation within a network of brain areas. In physiological conditions, increased phase synchronization facilitates communication, while an abnormal increase can be predictive of several pathological conditions such as Parkinson's disease (Roascio et al., 2021; Sunwoo et al., 2017), Alzheimer's disease (Pusil et al., 2019), and epilepsy (Avoli, 2014; Jiruska et al., 2013). We here focus on Rapid eye-movement sleep Behavior Disorder (RBD), which is a parasomnia that involves violent and undesirable behaviors, such as the physical reaction to dreams, due to the loss of normal muscle atonia during Rapid Eye Movement (REM) sleep. People with idiopathic RBD (iRBD) have a >70% chance of developing Parkinson's Disease (PD), Dementia with Lewy Bodies (DLB), or multiple system atrophy (Postuma et al., 2019). However, iRBD is a heterogeneous disorder, and this makes accurate prediction of phenoconversion challenging. Many studies have highlighted electrophysiological changes in iRBD patients (Fantini et al., 2003; Roascio et al., 2021; Rodrigues Brazète et al., 2016; Sunwoo et al., 2017), suggesting the importance of EEG features in differentiating iRBD from healthy subjects.
It is known that the "slowing down" of the power spectrum is a typical characteristic of people with iRBD compared to healthy subjects (Fantini et al., 2003; Rodrigues Brazète et al., 2016). However, the power spectrum is a frequency-wise measure of the amplitude distribution of individual cortical populations and lacks information about large-scale network interactions. On the other hand, it was recently observed that phase synchronization is reduced in iRBD patients compared to healthy subjects (Sunwoo et al., 2017). Moreover, phase synchronization increases in the alpha band, while amplitude correlation decreases in the delta band, with the disease progression of iRBD patients (Roascio et al., 2021).

Problem statement
In this study, we hypothesized that measures of the association between brain areas are likely to provide more detailed information than Fourier-based measures, allowing better discrimination between RBD and healthy subjects. We thus aimed at quantifying the accuracy gained in a classification task by adopting advanced network-based EEG features in combination with more standard power-based measures. In iRBD/healthy classification, the gold-standard instrument is video-polysomnography (video-PSG) according to international criteria (ICSD 3) (Sateia, 2014). However, video-PSG requires time, resources, and highly specialized personnel. In this case, therefore, clinicians have also looked for other criteria to support the diagnosis, such as a cognitive and motor assessment battery and EEG recordings. A previous study used clinical scores and gait parameters, obtaining a classification accuracy of 95% with a sensitivity of 91% (Cochen De Cock et al., 2022). Another study investigated whether there is useful information in the EEG spectrum to classify iRBD/healthy (Buettner et al., 2020). To do this, a Random Forest was trained using power-based features (i.e., the power spectrum) as model input variables, obtaining an accuracy of 90% (Buettner et al., 2020). However, the iRBD patients and HC subjects had different age ranges, which could introduce a bias into the classification model. Finally, a study investigated which synchrony-based EEG features better differentiate patients with mild cognitive impairment (MCI), a prodromal stage of Alzheimer's disease, from healthy subjects (Dauwels et al., 2010). However, to the best of our knowledge, no study has used large-scale network-based EEG features to classify people with iRBD and healthy subjects.

Our contribution
We investigated a cross-sectional cohort of 105 subjects: 59 people with iRBD and 46 healthy subjects. We recorded high-density EEG signals during relaxed wakefulness, and we investigated whether large-scale EEG features (i.e., phase synchronization and amplitude correlation) are more discriminative than the power spectrum in differentiating iRBD from healthy subjects. We then used the EEG-based features as input variables for a machine learning (ML) model with sparsity to perform iRBD/healthy classification. The sparsity-based regularization allowed us to identify the EEG feature(s) that are most important for training the classification model. Finally, we used this information to train a regression model without sparsity for evaluating the classification performance when using a subset of variables, chosen based on the previous model. We diagnosed iRBD according to international criteria (ICSD 3) (Sateia, 2014) and we confirmed the diagnosis with overnight video polysomnography.
All iRBD patients have been subjected to brain Magnetic Resonance Imaging (MRI) or Computed Tomography (CT), to rule out brain diseases such as tumors or lesions. We did not use the presence of white matter lesions as an exclusion criterion if the Wahlund scale was not >1 for all brain regions (Wahlund et al., 2001).

Materials
All subjects underwent general examinations to rule out other neurological and psychiatric disorders. The study was conducted under the declaration of Helsinki, and all participants gave informed consent before entering the study, which was approved by the local ethics committee.

Data collection and availability
To minimize drowsiness, prevent sleep, and preserve a high signal quality, we recorded the EEG in the morning, and we monitored the session to maintain a constant level of vigilance of the patient. We used a high-density (64-channel) cap of the Galileo system (EBNeuro, Florence, IT) to acquire band-passed (0.3-100 Hz) signals during relaxed wakefulness, using a sampling rate equal to 512 Hz. We adopted the 10-10 International System to position the electrodes on the cap. We chose Fpz and Oz as the reference electrode and ground, respectively. To check eye movements, we synchronously acquired the horizontal electrooculogram with the same recording parameters as the EEG. Finally, we checked that the electrode impedance was below 5 kOhm.

Data preparation
First, we pre-processed the high-density EEG data using Brainstorm (Tadel et al., 2011), a MATLAB R2021a toolbox. We applied a zero-phase infinite impulse response notch filter (order 2) to remove the power line noise (50 Hz). We then removed all channels (mean 1.90±2.29; range: min 0, max 8) and windows showing artefactual activity, using both Independent Component Analysis (ICA) and visual inspection. For all subjects, we discarded channels A1, A2, and POz due to the presence of artifacts in more than 90% of the recording. We then applied a band-pass finite impulse response filter (1-80 Hz, Kaiser window, order 3714). Later, we interpolated the bad channels using spline interpolation (kernel size: 4 cm). Finally, we transformed the referenced EEG to Scalp Current Densities (SCD) (Perrin et al., 1989) on all clean sensors with a spline method (lambda 0.00001, stiffness 4). After the pre-processing, five subjects were excluded due to excessive artefactual activity, which left less than 3 minutes of pruned eyes-closed resting-state data. The final population size for this study was 105 subjects, including 59 iRBD patients (9 female; mean age 69.28±6.98 years at the first clinical assessment) and 46 healthy subjects (22 female, 70.5±10.32).

Feature extraction
For each SCD time series, we extracted the power spectral profile, phase synchronization, and amplitude correlation. First, we computed the Power Spectral Density (PSD) for all subjects using the Welch algorithm with 1 Hz resolution, as a standard reference to compare with the current literature. We then quantified the large-scale brain network alterations by evaluating the weighted Phase Lag Index (wPLI) (Vinck et al., 2011), which is a measure of the synchronization of the phases of two signals (i.e., phase synchronization), and the orthogonalized Correlation Coefficient (oCC) (Hipp et al., 2012), which estimates the correlation of the envelopes of two signals (i.e., amplitude correlation). Specifically, we conducted a time-frequency decomposition using 30 narrow-band Morlet wavelets in a logarithmic space between 2.1 and 75 Hz with 5 cycles.
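The pipeline above was implemented in Brainstorm (MATLAB); for orientation, an approximately equivalent sketch in MNE-Python is given below. It is an illustration of the steps just described, not the code used in the study: the filter design, ICA settings, and spline CSD implementation all differ in detail, the file name is a placeholder, and channel positions are assumed to be set so that interpolation and CSD can run.

import numpy as np
import mne
from scipy.signal import welch

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)   # placeholder file name
raw.notch_filter(freqs=50.0)                    # remove power-line noise
raw.filter(l_freq=1.0, h_freq=80.0)             # band-pass (FIR by default)

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = []                                # component indices chosen by inspection
ica.apply(raw)

raw.info["bads"] = ["A1", "A2", "POz"]          # channels discarded in the study
raw.interpolate_bads()                          # spline interpolation of bad channels
csd = mne.preprocessing.compute_current_source_density(raw)    # SCD / surface Laplacian

# Welch PSD at 1 Hz resolution: 1-second segments at fs = 512 Hz
data = csd.get_data()
freqs_psd, psd = welch(data, fs=512, nperseg=512, axis=-1)

# 30 Morlet centre frequencies, log-spaced between 2.1 and 75 Hz, 5 cycles each
wavelet_freqs = np.logspace(np.log10(2.1), np.log10(75.0), 30)
tfr = mne.time_frequency.tfr_array_morlet(
    data[np.newaxis], sfreq=512, freqs=wavelet_freqs, n_cycles=5, output="complex")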
For each Morlet wavelet, we computed the wPLI and the oCC. The wPLI is computed as (Vinck et al., 2011): $\mathrm{wPLI} = \left|E\{\mathrm{Im}(S_{xy})\}\right| / E\{\left|\mathrm{Im}(S_{xy})\right|\}$, where $S_{xy}$ is the cross-spectrum of the two signals and $E\{\cdot\}$ denotes the expected value. The oCC is computed as the Pearson correlation coefficient between the envelopes of two orthogonalized time series (Hipp et al., 2012), where the component of signal $Y$ orthogonal to signal $X$ is obtained as $Y_{\perp X}(t,f) = \mathrm{Im}\left(Y(t,f)\,X^*(t,f)/|X(t,f)|\right)$. Both wPLI and oCC range between 0 and 1, with 1 corresponding to full phase synchronization or amplitude correlation. We chose these two EEG features because they are measures insensitive to volume conduction (Hipp et al., 2012; Vinck et al., 2011), which would otherwise inflate phase synchronization and amplitude correlation analyses of sensor EEG data (Palva et al., 2018; Vinck et al., 2011). Please note that in this study we did not include clinical features in the model input, although this could potentially improve the model performance. Indeed, the main goal of the study is to investigate the discrimination power of the EEG features and determine which one is more informative for the classification task. Further, we did not include sex information in the classifier input, as our population is sex imbalanced (N=50 males with iRBD and N=24 healthy males) and, again, this may represent a model bias. This imbalance is not specific to our dataset, as it is known that there is a male predominance in iRBD patients (Postuma et al., 2019). Nonetheless, we carried out an explorative analysis where we also used age and sex as input variables together with the EEG features to classify patients with iRBD and healthy subjects (Figure S2).

Statistical Analysis
We conducted a statistical analysis to investigate differences between people with RBD and healthy subjects. We computed a Kruskal-Wallis test (non-parametric ANOVA) (Kruskal & Wallis, 1952) to assess statistical differences between RBD and healthy subjects in the clinical scores, power spectrum, wPLI, and oCC across frequencies. Later, we performed a multiple comparison correction using the Benjamini-Hochberg (BH) method (Benjamini & Hochberg, 1995).

Machine Learning methods for classification
We performed iRBD/healthy subject classification based on a machine learning model. A crucial remark is that we only have access to a small amount of data to train our model. This might cause overfitting, a very well-known issue in ML, which occurs when the classifier learns noise and random fluctuations in the training data and does not generalize to the test data (i.e., unseen data). The main causes of overfitting are indeed a small number N of samples in training or a high complexity of the model (e.g., a large number P of input variables). A strategy to prevent overfitting due to the high complexity of the model is the selection of a subset of variables to use as input to the model, so that P ≪ N. We thus adopted the Least Absolute Shrinkage and Selection Operator (LASSO), which uses the ℓ_1 norm for regularization and performs automatic variable selection. Furthermore, we carried out an additional analysis to investigate whether directly using the variables selected by LASSO as model input is advantageous in iRBD/healthy classification and how the performance changes with the input dimension P. To do that, we first derived from the LASSO results a strategy to select subgroups of variables that may be more informative for the classification (see the Experimental design section 2.2.5 for details), and we then used these as input to train a classification model without sparsity.

Experimental design
The experiments we carried out can be subdivided into two phases (Figure 1a).
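Before detailing the two phases, the two connectivity estimators defined above are worth making concrete in code. The sketch below is illustrative only: it operates on the complex Morlet coefficients of two channels at a single frequency (e.g., slices of a wavelet decomposition), takes the expectation over time samples, and implements the one-directional, unsymmetrized oCC; the published measure symmetrizes over both orthogonalization directions.

import numpy as np

def wpli(zx, zy):
    """Weighted phase lag index between two complex narrow-band signals."""
    imag_cross = np.imag(zx * np.conj(zy))   # Im of the cross-spectrum, per sample
    return np.abs(imag_cross.mean()) / (np.abs(imag_cross).mean() + 1e-15)

def occ(zx, zy):
    """Orthogonalized envelope correlation (one direction, unsymmetrized)."""
    y_orth = np.imag(zy * np.conj(zx) / (np.abs(zx) + 1e-15))  # part of y orthogonal to x
    return np.corrcoef(np.abs(zx), np.abs(y_orth))[0, 1]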
In the first phase, we adopted LASSO to simultaneously perform variable selection and binary classification. Specifically, here we used different input variables containing: (i) a single EEG-based feature; (ii) a pair of EEG-based features; or (iii) all EEG-based features for all 105 people involved. We then performed three different shuffling of the population data, and, for each shuffling, we split the dataset in learning and testing by stratified 5-fold cross-validation (outer-CV) (Farina et al., 2020;Kiiski et al., 2018). For each fold, we thus applied data standardization by computing the z-score on the learning set and applying the same transformation to the test set. We then further split the learning set into training and validation sets, named inner-CV (Figure 1b), for tuning the model hyperparameters. Hence, we looked for the best ℓ_1regularization parameter ɑ (among 0.001, 0.01, 0.1, 1) by choosing the one that produced the lowest prediction error on the validation set, and we trained a Lasso model on the learning set ( Figure S1). Subsequently, we evaluated the model performance in terms of accuracy, precision, recall, and f1-score on the unseen test set. The accuracy score indicates the percentage of labels predicted correctly. The precision score is defined as the ability of a classifier to not mislabel a sample (TP/TP+FP where TP and FP are true and false positive, respectively). The recall, or sensitivity, score is the ability of a classifier to find all the positive samples (TP/TP+FN where FN is the false negative). The f1-score is a weighted harmonic mean of precision and recall. We, finally, computed the mean and the standard deviation of these scores across folds. We repeated this procedure three times, shuffling the original data each time, to test the robustness of the model to the dataset split. In the second phase, we carried out further analysis for evaluating the classification performance when using a subset of variables. To do that, we first considered the shuffling (N=3) and the folds (M=5) of the previous phase as 15 (M x N) different folds. For each one of these folds, we looked at which variables have been selected based on LASSO, and then, we counted how many folds (i.e., how many times) a variable was selected. We thus created different variable subsets considering the variables selected in at least one-fold up to those selected in all folds and we used them as input for a sparsity-free model. For each input subset, we again performed three different shuffling of the data -the same as used for LASSO -and the outer-CV to provide a more robust evaluation of the model. We thus trained the model without sparsity on the learning set and then, we evaluated the model on the test set computing the accuracy, precision, recall, and f1-score. We finally calculated the averaged performance and the standard deviation across folds and shuffling. Please note that in this phase, we do not perform any inner cross-validation as we do not have a hyperparameters search. EEG features change in iRBD patients We first wanted to characterize the differences between iRBD and HC populations by performing statistical hypothesis tests of single EEG features across frequencies. Power spectral profile shows a slowdown of the prominent alpha peak towards delta/theta band in iRBD patients compared to healthy subjects. This difference does not reach statistical significance after correction for multiple comparisons (p> 0.05 -Kruskal-Wallis test and BH correction) (Figure 2a). 
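The two-phase design just described reduces, in scikit-learn terms, to a short sketch, given here before the remaining results. It is an illustration rather than the study's code: LASSO classification is approximated by L1-penalized logistic regression (the paper's α corresponds roughly to 1/C), z-scoring lives inside the pipeline so it is fit on the learning folds only, and X (the 105 × P feature matrix) and y (iRBD/HC labels) are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import f1_score

alphas = [0.001, 0.01, 0.1, 1.0]            # candidate l1 strengths (C ~ 1/alpha)
coefs, scores = [], []
for shuffle_seed in range(3):               # three shufflings of the data
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=shuffle_seed)
    for train, test in outer.split(X, y):
        inner = GridSearchCV(               # inner CV tunes the penalty
            make_pipeline(StandardScaler(),
                          LogisticRegression(penalty="l1", solver="liblinear")),
            param_grid={"logisticregression__C": [1.0 / a for a in alphas]},
            cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
            scoring="f1")
        inner.fit(X[train], y[train])
        scores.append(f1_score(y[test], inner.predict(X[test])))
        coefs.append(inner.best_estimator_[-1].coef_.ravel())

# Phase 2: rank variables by how many of the 15 folds retained them, then feed
# the nested subsets (selected in at least k folds) to a sparsity-free classifier
counts = (np.abs(np.array(coefs)) > 0).sum(axis=0)
subsets = {k: np.flatnonzero(counts >= k) for k in range(1, 16)}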
Phase synchronization is weaker in iRBD patients than in healthy subjects in delta, theta, and alpha (8-13 Hz), but with no significant evidence (p>0.05, Kruskal-Wallis test and BH correction) (Figure 2b). Finally, amplitude correlation is stronger (p<0.05, uncorrected Kruskal-Wallis test) in iRBD patients than in healthy subjects in the delta (2-4 Hz), theta (5-7 Hz), high-beta (20-30 Hz), and gamma (>30 Hz) bands (Figure 2c). These differences are not significant after BH correction (p>0.05). Despite the lack of significance after correction for multiple comparisons, these results suggest that the amplitude correlation spectral profile yields major differences between iRBD and HC compared to power-based spectral metrics.

Large-scale EEG features improve classifier robustness and performance
We then wanted to quantify the improvement in classification accuracy when using a combination of large-scale network metrics compared to purely power-based metrics. We quantified the classifier performance in terms of f1-score, accuracy, precision, and recall. We found stronger robustness (i.e., less variability across folds and shuffling) of the classifier when using both large-scale EEG features simultaneously than with other feature groups (Figure 3a). Moreover, the model outperforms the PSD-based classification when using only oCC (f1-score: 61%) or both large-scale EEG features (f1-score: 62%) (Figure 3b-e). We found that the PSD-based model only reaches the chance level. Furthermore, the performance worsened when we added PSD to the large-scale EEG features, compared to the performance obtained using the large-scale EEG features alone or together. These results are thus in line with our previous statistical findings (Figure 2), for which the large-scale EEG features, and in particular the amplitude correlation profiles, might be more informative in the iRBD/healthy discrimination. Finally, we carried out a further analysis to investigate the classifier performance when adding age and sex as additional features. As iRBD is primarily a male disease, we found that the performance of the classifier improves (Figure S2), suggesting that the sex imbalance in our population may represent a bias in the model.

Performance increased by adopting less than 50% of the features as input variables
Underpowered classification studies might benefit from feature selection. Here we investigated the group of EEG features that provided the best performances. Firstly, we found that excessive feature selection was disadvantageous when we used wPLI alone or wPLI and oCC together. Indeed, the performance of the classifier worsened, below the chance level, when we used less than 15% of the variables as model inputs (Figures 4 and S3). In the same way, the performance worsened when we trained the model using the whole spectrum of the EEG feature(s). In contrast, the performance improved when we used less than 50% of the variables as input for the binary classification model (Figure 4). We found several groups of features with similar improved performances (Figures 4, S3). In particular, oCC alone showed an f1-score of 63% when 17 of its frequency points were selected (Figure 4). wPLI and oCC together showed the best performance (f1-score: 69%) when we selected 40% of the variables in these two groups (Figure 4). We obtained a similar performance when we selected 49% of the variables considering all EEG features together (Figure 4).
Finally, we found ( Figure 4) a worsening of the performance when we trained the model using amplitude correlation or phase synchronization and power spectrum (f1-score: 65% PSD + wPLI; f1-score: 62% PSD + oCC). We observed large variability in the percentages of the selected feature number due to the different sizes of the original training set. However, we showed that the optimal number of features is between 10 and 40 ( Figure 4). These results suggest that the whole frequency spectrum of the EEG features is unnecessary for iRBD/healthy classification, agreeing that the idea to use LASSO in the first phase of the experimental design is appropriate. However, we did not find a specific subset of EEG features that performs the classification much better than others. Discussion To predict phenoconversion in neurodegenerative diseases, ML-based approaches would require a plethora of stable biomarkers reflecting disease progression that can be homogeneously collected across time To be effective and widely adopted in clinical practice, these biomarkers need to be non-invasive, widely accessible across different centers, and accurately separate between healthy and pathological conditions even at prodromal stages. In the last decades, the researchers mainly used clinical scores (Prashanth et al., 2016), cerebrospinal fluid (CSF) (Prashanth et al., 2016;Wang et al., 2020), or features extracted by imaging techniques (e.g., MRI and SPECT) (Farina et al., 2020;Noor et al., 2019;Prashanth et al., 2014Prashanth et al., , 2016Wang et al., 2020) as input variables to the classification models of different NDDs including Alzheimer or Parkinson's disease and their respective main prodromal stages as Mild Cognitive Impairment and RBD, respectively. However, CSF, PET, and SPECT are diagnostic techniques that are very expensive, invasive, and not always accessible across different centers. In this work we specifically considered EEG-only features for its technical advantages and we selected phase synchronization and amplitude correlations as EEG-derived features in contrast to classical power spectral analyses. Previous studies highlighted significant electrophysiological changes in iRBD patients (Fantini et al., 2003;Roascio et al., 2021;Rodrigues Brazète et al., 2016;Sunwoo et al., 2017) suggesting the importance of EEG-based features in differentiating iRBD from healthy subjects. These works reported a slowing of the alpha rhythm (Fantini et al., 2003;Rodrigues Brazète et al., 2016) and alterations in phase synchronization (Roascio et al., 2021;Sunwoo et al., 2017) and amplitude correlations (Roascio et al., 2021). Several lines of evidence suggest that NDDs are characterized by network deficiencies, and that the investigation of the cross-talk between different brain areas provides a better understanding of the diseases and their progression (Pusil et al., 2019;Roascio et al., 2021;Sunwoo et al., 2017). We hypothesized that network-based EEG features more reliably capture differences between patients and healthy controls, and this would be reflected by an improvement in the robustness and the accuracy in a classification model. We quantified the gain in classification accuracy of patients with idiopathic REM sleep behavior disorder from age-matched healthy controls using phase synchronization and amplitude correlation profiles. 
Our analyses suggest that: (1) amplitude correlations yield the largest difference around the alpha band; (2) accuracy increases when using a combination of network-based features compared to more classical ones derived from the Fourier spectrum; (3) the inclusion of power spectrum features decreases model accuracy; and (4) strong feature selection is not beneficial when using EEG-derived metrics. Our results provide evidence that (1) EEG-based features can be used in a classification model of RBD patients, and (2) phase synchronization and amplitude correlation should be considered important features in diagnosis support systems, as they capture subtle changes of the progressive pathologies even in the absence of overt symptoms. We acknowledge that there are some limitations to this study. First, the small number of subjects involved limits the generalizability of our observations. However, this study aims to demonstrate that network-based metrics carry more information about the ongoing pathology by observing an increase in the accuracy of a classification model. Second, sex could introduce a bias in our classification model. As discussed in the Feature extraction section, the sex imbalance between healthy subjects and iRBD patients is not specific to our dataset, as it is known that there is a male predominance in iRBD patients (Postuma et al., 2019). Nonetheless, we carried out an explorative analysis where we also used age and sex as input variables together with the EEG features to classify patients with iRBD and healthy subjects (see Figure S2). As expected, we found a performance improvement when using age and sex combined with the EEG feature(s).

Conclusion
This study, for the first time, investigates whether network-based features improve the robustness and the classification performance in a cross-sectional cohort of iRBD patients and healthy subjects. Our results suggest that the power spectrum alone is not discriminative enough to perform an accurate iRBD/healthy classification, reaching only chance level. In contrast, phase synchronization and amplitude correlation increased classifier performance compared to PSD alone, and classifier robustness improved when we simultaneously used both as input for the model. These findings suggest that network-based EEG features are more discriminative than power-based EEG features for improving the robustness and the performance of a classification model. We speculate that to accurately predict phenoconversion of RBD patients using ML-based approaches, EEG-derived features need to be included among the observed variables, in particular phase synchronization and amplitude correlations.

Data/code availability statement
The data used in this work can be made available upon reasonable request, owing to the privacy issues of clinical data.

Declaration of Competing Interest
The authors have no conflict of interest to report.
2022-08-21T13:39:15.232Z
2022-08-17T00:00:00.000
{ "year": 2022, "sha1": "1e4467d10b10963f629b70b6a53f3ef7e869cf09", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2022/08/17/2022.08.16.504129.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "1e4467d10b10963f629b70b6a53f3ef7e869cf09", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Biology" ] }
246901941
pes2o/s2orc
v3-fos-license
MRI for in vivo Analysis of Ablation Zones Formed by Cooled Radiofrequency Neurotomy to Treat Chronic Joint Pain Across Multiple Axial Spine Sites Purpose Radiofrequency (RF) ablation is the targeted damage of neural tissues to disrupt pain transmission in sensory nerves using thermal energy generated in situ by an RF probe. The present study aims to evaluate the utility of magnetic resonance imaging (MRI) for in vivo quantitative assessment of ablation zones in human subjects following cooled radiofrequency neurotomy for chronic pain at spinal facet or sacroiliac joints. Ablation zone size and shape have been shown in animal models to be influenced by size and type of RF probe – with cooled RF probes typically forming larger, more spherical ablation zones. To date, MRI of RF ablation zones in humans has been limited to two single retrospective case reports. Patients and Methods A prospective, open-label pilot study of MRI for evaluation of cooled radiofrequency ablation zones following standard of care procedures in adult outpatients was conducted. Adult subjects (n=13) received monopolar cooled RF (CRF) ablation (COOLIEF™, Avanos Medical) of sensory nerves at spinal facet or sacroiliac joints, followed by an MRI 2–7 days after the procedure. MRI data were acquired using both Short Tau Inversion Recovery (STIR) and contrast-enhanced T1-weighted (T1C) protocols. T1C MRI was used to calculate 3-dimensional ellipsoid ablation zone volumes (V), where well-defined regions of signal hyperintensity were used to identify three orthogonal diameters (T, D, L) and apply the formula V=π/6×T×D×L. Results Among 13 patients, 96 CRF ablation zones were created at 4 different anatomic sites (sacroiliac, lumbar, thoracic and cervical). CRF ablation zone morphology varied by anatomical location and structural features of surrounding tissues. In some cases, proximity to bone and striations of surrounding musculature obscured ablation zone borders. The volumes of 75 of the 96 ablation zones were measurable from MRI, with values (mean±SD) ranging from 0.4679 (±0.29) cm3 to 2.735 (±2.62) cm3 for the cervical and thoracic sites, respectively. Conclusion In vivo T1C MRI analysis of cooled RF ablation zones at spinal facet and sacroiliac joints demonstrated variable effects of local tissues on ablation zone morphology. Placement of the CRFA probe very close to bone alters the ablation zone in a negative way, causing non-spherical and incomplete lesioning. These new data may serve to inform practicing physicians about optimal cooled RF probe placement in clinical procedures. Introduction Radiofrequency ablation (RFA) is well established 1 as a treatment modality across multiple clinical indications, including a range of painful conditions of the spine. 2 Monopolar RFA generates thermal energy in situ by delivering highfrequency alternating current through an insulated needle-sized radiofrequency probe, causing ionic agitation, ambient heating through friction and cell death through coagulative necrosis. 3,4 Because the RFA probe may be inserted into the body percutaneously, the procedure is relatively non-invasive. Cooled radiofrequency ablation (CRFA) probes differ from standard RFA probes in that water circulated through the CRFA probe tip draws heat away from the tissue-tip interface to prevent tissue charring or desiccation, which have been shown to greatly increase resistance to further energy flow. 5 CRFA probes deliver more energy than standard RF probes and form larger, more spherical ablation zones. 
It is believed that larger lesions increase the chances of interventional success, as they improve the odds of capturing the target nerve within the lesion zone. To date, however, most ablation zone measurements of both cooled and standard RFA probes have been made using ex vivo chicken breast models. 6 Chicken breast is non-perfused and relatively homogeneous tissue, so it is not well suited as a surrogate for understanding in vivo response. In animal and ex vivo models, the size and type of RFA probe tip used influence the size and shape of the ablation zone formed, characteristics which are believed to be critical in determining procedural success. 7 With its unique ability to safely resolve normal soft tissues as well as pathological changes, magnetic resonance imaging (MRI) has been used to examine in vivo effects of RFA lesioning in the cardiac, CNS and tumor ablation spaces in both humans and animal models. [8][9][10][11] MRI has been used to characterize in vivo effects of pain management RFA neurotomy procedures in animals, including rodents and pigs, prior to sacrifice and verification through histological analysis. 12,13 The need for human clinical studies using MRI to characterize in vivo RF lesions for pain management was emphasized in a 2018 report on porcine RF lesion characterization. 12 Some MRI studies of CRFA ablation zones in vivo in humans have been conducted, including two retrospective case reports, which inspired the present work. 14,15 To our knowledge, and based upon literature searches (eg, PubMed with search filters such as "cooled AND radiofrequency AND (ablation OR neurotomy) AND MRI"), the present study is the first prospective quantitative analysis of CRFA ablation zones using MRI. Broader literature searches that include standard RF neurotomy also fail to find prospective clinical studies using MRI to quantify lesion size. Materials and Methods The goal of this study was to confirm the utility of MRI technology to identify, obtain and categorize CRFA ablation zones in vivo under normal clinical conditions. Study approval was obtained from the Sterling Institutional Review Board (Atlanta, GA). All patients provided informed consent prior to initiating screening activities. This prospective, single-center, pilot study was conducted in accordance with the Declaration of Helsinki. The study enrolled 13 patients over the age of 21 who were diagnosed with chronic joint pain (≥3 months) and who were eligible to receive CRFA (based on dual, comparative diagnostic nerve blocks) as part of their standard of care within a single practice. The treatment algorithm for joint pain often begins with physical therapy and oral non-steroidal anti-inflammatory drugs. If pain persists, the next steps may involve minimally invasive treatments, such as CRFA. The ideal CRFA patient is anyone ready to move beyond the first-line treatments based upon the physician's judgment and patient preference. Because the goal of this study was to characterize CRFA lesions, only patients selected for CRFA treatment were considered. Areas originally targeted for treatment included: cervical, thoracic and lumbar spinal facet joints; the sacroiliac joint (SIJ) region; the hip and the knee. The limited data collected in the hip and knee (3 subjects total) were ultimately excluded from analysis to focus on the axial spine. The study physician provided Standard of Care treatment for all subjects across two outpatient office visits: a screening visit and a CRFA procedure visit.
CRFA in the cervical region was performed using a 2-mm exposed active tip; in the thoracic region, a 5.5-mm active tip; while lumbar and sacral RF utilized 4-mm active tips. MRI was performed at an outpatient imaging center 2-7 days later using both STIR and contrast-enhanced (gadolinium) T1-weighted (T1C) protocols. MRI was performed within the first week post procedure based upon the anticipated time course for tissue response: it was expected that edema and inflammation would have resolved, leaving CRFA ablation zones more clearly visible. The ablation zone was defined as the area of coagulative necrosis that is well marginated, as defined in previous work. 10 The methodology for calculating ablation zone size was informed by the significant precedent of similar work previously published. Imaging conventions developed to assess tumor size have influenced quantitative assessment of ablation zones. 8 The current study employed a simple diameter-based ablation zone size and volume calculation assuming a 3-dimensional ellipsoid shape (V = π/6 × T × D × L or, alternatively, V = π/6 × D_cc × D_ap × D_l), involving simple measurement of three orthogonal diameters that can be performed on standard radiology images. 16 The cranial-caudal diameter D_cc and the anterior-posterior diameter D_ap are taken from the largest slice in the sagittal plane, and the lateral diameter D_l is taken from the largest slice in the axial plane. CRFA ablation zone volumes were calculated from MR images by an independent board-certified radiologist, subspecialized in neuroradiology. Results A total of 13 adult subjects (10 females, 3 males) underwent CRFA for chronic joint pain at spinal facet or sacroiliac joints, for a total of 96 ablation zones, as summarized in Table 1. MRI data acquired using the STIR protocol were generally indicative of the edema present in and around the ablation zone site, whereas T1C images were judged to be more indicative of the inflammation/necrosis present at the ablation zone site. Thus, T1C images were judged to be most representative of true CRFA ablation zone size in the first week post procedure. Of all the T1C images acquired, 75 were analyzable, based on radiologist impressions of well-defined lesion area borders. T1C ablation zone volumes were affected by CRFA probe size as well as anatomical location (Table 1). As expected, smaller probe tip sizes correlated with smaller ablation zone volumes. The smallest tip size of 2 mm was used on the cervical sites, yielding a mean volume of 0.4679 (±0.29) cm³, in contrast to the 2.735 (±2.62) cm³ volume resulting from the largest tip size of 5.5 mm used on the thoracic sites. Interestingly, ablation zone volumes in the SIJ and lumbar regions were noticeably different from each other (0.6915 (±1.08) versus 1.685 (±2.51) cm³), despite using the same size probe tip (4 mm). The large standard deviations show that significant variations exist among lesions delivered to the same anatomic site with the same needle size. Image analysis was conducted to gain insight into this variation. Ablation zone borders were affected by the impedance of surrounding tissues, including the surrounding musculature as well as adjacent bone structures. Figure 1 shows a well-defined CRFA ablation zone with the characteristic spheroidal shape. Striations in musculature appeared to influence the size and shape of areas of hyperintensity in T1C images (Figure 2). Placement of the CRFA probe very close to bone alters the shape of the ablation zone, which then deviates from the expected rounded shape.
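The diameter-based volume estimate above is simple enough to express directly in code. The sketch below is a minimal illustration of the ellipsoid formula under the stated convention (two diameters from the largest sagittal slice, one from the largest axial slice); the function name and example values are illustrative and are not drawn from the study data.

```python
# Minimal sketch of the ellipsoid volume estimate used above:
# V = pi/6 * D_cc * D_ap * D_l, with the three orthogonal diameters
# read off the largest sagittal slice (D_cc, D_ap) and the largest
# axial slice (D_l). Function name and example values are illustrative.
import math

def ablation_zone_volume(d_cc_cm: float, d_ap_cm: float, d_l_cm: float) -> float:
    """Ellipsoid volume in cm^3 from three orthogonal diameters in cm."""
    return math.pi / 6.0 * d_cc_cm * d_ap_cm * d_l_cm

# Example: a roughly 1-cm, nearly spherical lesion
print(f"{ablation_zone_volume(1.2, 1.0, 1.1):.3f} cm^3")  # ~0.691 cm^3
```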
This is notably demonstrated within Figure 3A and B, where very minor differences in probe placement within the same patient at the same treatment level resulted in drastic differences in ablation zone morphology on MRI. Discussion Previous ex vivo work has demonstrated the influence of probe size and probe type on ablation zone size and shape. Internally cooled probes create larger, more spherical lesions than standard probes in homogeneous tissues (ie, chicken breast). Additionally, lesion size increases expectedly with larger gauge and active tip sizes. 7 When identifiable ablation zones were visible within this MRI dataset, this spherical shape remained, demonstrating that the concept is transferable from ex vivo modeling. However, this work suggests that environmental factors (tissue composition, probe proximity to bone and tissue structural features) may play a more important role than previously appreciated. The impact of tissue heterogeneity on ablation zone size can be explained by the conductivity of the various tissues. Bone has one of the lowest electrical conductivities, followed by fat. Skeletal muscle has the highest electrical conductivity of tissues relevant to RF ablation. 3 Tissues with higher conductivities have higher current flows, which results in greater induced temperature when subjected to radiofrequency energy. As such, ablation in tissues with higher conductivity will result in larger ablation zones than in tissues with low conductivity. Previous computational work has shown that ablation volumes in heterogeneous models vary more than those of homogeneous models. 17 In this computational model, homogeneous domains were comprised of only muscle tissue, one heterogeneous domain consisted of muscle and nervous tissue, and another heterogeneous domain comprised bone, nerve and muscle tissues. Investigators found that heterogeneity resulted in a distorted electric field distribution, which significantly reduced ablation volume. When looking at the ablation zone volumes reported in this manuscript across anatomical locations, it is clear that tissue heterogeneity plays a critical role in influencing the size and shape of the ablation zones created. Consider the mean ablation volumes in the SI joint and the lumbar facet joint, each made with the same size probe tip. The ablation volumes within the SI joint are noticeably smaller. When looking into the tissue heterogeneity of the joints, the SI joint has more bone structure, tendons, ligaments, cartilage and other connective tissues, 18 whereas the area surrounding the lumbar facet joint consists of more skeletal muscle. The differences in conductivity of these various tissues may help explain why the same cooled radiofrequency probe size can create different ablation zone volumes. Proximity to bone also appears to impact the size and shape of ablation zones measured by MRI. Previous ex vivo modeling showed that RF probe placement directly against the bone resulted in the projection of the lesion outward from the bone and perpendicular to the needle axis in the vertical plane. 19 It is suggested that the bone, with its higher electrical impedance, acts as an insulator that directs the current return path further outward into the surrounding soft tissue. Our data mirror this phenomenon, in that ablation zone size and shape were directly impacted by close proximity to (or placement against) bone.
While the probes in the previously described ex vivo model were placed directly against bone, resulting in the perpendicular distribution of RF energy, 19 the probes within this study had more varied placement, enabled by the distal projection of cooled radiofrequency probes. As such, deflection of RF energy could not be determined to be directly perpendicular to the probe/bone interface. Likely because of the differences in the distribution of RF energy from cooled probes, the deflection of RF energy was much more dispersed. It is hypothesized that this dispersion of RF energy results in less thorough lesioning, as reflected by the different imaging characteristics of these ablation zones as viewed in the corresponding MRI images (Figure 3A and B). It is noted that the majority of ablation zones with irregular or atypical borders were seemingly impacted by the probe's proximity to bone structure. In addition to probe type, surrounding musculature and proximity to bone, there are still other factors that can affect ablation zone size. Previous studies have demonstrated the effects of pre-injectate solutions on in vivo lesion size. 12 Injection of a hypertonic sodium chloride solution significantly increased lesion volume, as measured by both histological analysis and MRI analysis. This study also demonstrated that a 1 mL 1% lidocaine injection did not alter the overall size of a lesion (compared with no fluid injection), but the 1% lidocaine injection did result in a less symmetrical lesion shape, suggesting that local anesthetic can influence lesion geometry. While this is not a direct comparison with the work presented herein (eg, different animal model, different tissues, different probes), it does serve to highlight the impact of local anesthetic on lesion geometry. It is suggested that the in vivo MRI analysis of ablation zones demonstrates a similar phenomenon, wherein tissue structural features enabled local anesthetic to infiltrate the striations of musculature. RF energy is believed to have been distributed preferentially through these striations and, due to the low conductivity of the infiltrating fluid, resulted in the formation of a more diffuse, non-concentrated ablation zone. Clinically, these results provide compelling evidence for the role of the impedance of tissues adjacent to the radiofrequency probe. Specifically, the heterogeneity of adjacent structures may have a significant and meaningful impact on lesion geometry, thereby influencing the placement of cooled RF probes. Based on radiological examination, ablation zones with less defined borders are hypothesized to be indicative of incomplete lesioning. These ablation zones lack the necrosis and coagulation that are believed to be associated with successful lesioning and denervation. Proximity to bone appeared to be the biggest factor driving incomplete lesioning. Current manufacturer-endorsed protocols for CRFA involve inserting the introducer into the relevant anatomical area until the practitioner finds the bone, utilizing it as a backstop. Upon retraction of the introducer, the cooled probe is inserted in its place. Given the differences in the lengths of the introducer and the probe, this should leave a 2 mm gap between the probe tip and the bone. However, it is possible that micro-adjustments made either during introduction of the probe or during the ablation procedure can result in the probe eventually resting directly on the bone.
It is suggested that extra precautions be taken when conducting micro-adjustments to ensure that there is space between the probe tip and the bone, allowing for more thorough and predictable lesioning. Furthermore, these results likely provide the first tangible explanation for inconsistent results or failures of standard RFA following a positive response to diagnostic blocks. Given the size limitations associated with standard RF probes, current recommendations/protocols for standard radiofrequency probes involve placing probes directly on bone structure within the SIJ and facet joints. As seen within these data, bone disperses the radiofrequency energy in such a manner as to prevent the creation of a concentrated ablation zone, which may impede the ability to successfully ablate nervous tissue. Probe designs and needle placement techniques have historically been guided by findings taken from chicken breast models. Based on the present human MRI findings, however, it would seem reasonable to revisit the current placement recommendations in the spine. A revisiting of probe placement and trajectory in genicular ablations has been recommended by the authors who first described the ablation zone in the knee, underscoring the value of MRI in assisting practitioners with procedural insights. 15 Another case study made use of post-treatment MRI, not originally intended for ablation zone analysis, to correlate a large, spherical ablation zone with successful genicular pain reduction. 14 Although serendipity led to some of these previous insights, the present study represents the proactive use of human MRI ablation zone characterization. It is aimed at informing future work to improve CRFA techniques for facet and sacroiliac joint pain treatment, and it immediately shows that the homogeneous chicken breast model should be laid to rest as a surrogate for expectations of use in vivo. While the information provided within this study is novel, the sample size is relatively small. Variance in MRI collection dates post CRFA (2-7 days) could also have altered measurements, given differences in inflammatory response. Specimens imaged at the start of the window would show a greater impact of the inflammatory response on the imaging findings (especially the STIR images) and measurements, which could skew the data toward larger numbers. It is also important to note that the ablation zone sizes reported here do not formally measure lesion size, as lesion size is typically determined through histological staining, a process not suited to clinical practice. This exploration was also limited to monopolar lesioning utilizing CRFA. This methodology should be repeated using various other products and techniques (ie, standard RF, tined probes, bipolar lesioning) and placement techniques to guide procedural optimization. Given the dramatic impact of tissues and proximity to bone, some current placement techniques may need to be revisited. Finally, as the goal of this study was primarily to understand whether ablation zones could even be visualized in vivo, no clinical follow-up was collected on these patients, so no connection with associated outcomes could be made. Future studies should attempt to link clinical response to ablation zone formation and placement. Conclusion This is the first work of its kind to evaluate the in vivo human response to CRFA ablation in pain management conditions using MRI and to describe a model that can be used for future exploration.
Probe placement, proximity to bone, surrounding tissues and striations in musculature all impacted ablation zone size and shape. These contextual factors may need to be considered when placing RFA probes, regardless of technology. Given the multiple variables at play in this treatment, it is remarkable that the procedure works as well as it does. While the size of CRFA ablation zones varied across anatomical locations (and across different active tip sizes), CRFA ablation zones were consistent within each anatomical location.
2022-02-18T05:16:52.786Z
2022-02-09T00:00:00.000
{ "year": 2022, "sha1": "d963253856c59e3d9529cae934ab42f764d6019f", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=78186", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d963253856c59e3d9529cae934ab42f764d6019f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119285148
pes2o/s2orc
v3-fos-license
A Comparison of Alpha Particle and Proton Beam Differential Flow in Collisionally Young Solar Wind In fast wind or when the local Coulomb collision frequency is low, observations show that solar wind minor ions and ion sub-populations flow with different bulk velocities. Measurements indicate that the drift speed of both alpha particles and proton beams with respect to the bulk or core protons rarely exceeds the local Alfvén speed, suggesting that a magnetic instability or other wave-particle process limits their maximum drift. We compare simultaneous alpha particle, proton beam, and proton core observations from instruments on the Wind spacecraft spanning over 20 years. In nearly collisionless solar wind, we find that the normalized alpha particle drift speed is slower than the normalized proton beam speed; no correlation between fluctuations in both species' drifts about their means; and a strong anti-correlation between collisional age and alpha-proton differential flow, but no such correlation with proton beam-core differential flow. Controlling for the collisional dependence, both species' normalized drifts exhibit similar statistical distributions. In the asymptotic, zero Coulomb collision limit, the youngest measured differential flows most strongly correlate with an approximation of the Alfvén speed that includes proton pressure anisotropy. In this limit and with this most precise representation, alpha particles drift at 67% and proton beams at approximately 105% of the local Alfvén speed. We posit that one of two physical explanations is possible. Either (1) an Alfvénic process preferentially accelerates or sustains proton beams and not alphas, or (2) alpha particles are more susceptible to either an instability or Coulomb drag than proton beams. INTRODUCTION Simple models of solar wind acceleration (e.g. Parker (1958)) are unable to explain the solar wind's acceleration to high speeds. Wave-particle interactions are likely necessary to explain these observations. Differential flow, the velocity difference between two ion species or ion sub-populations, is a useful indicator of such interactions and related acceleration. Ionized hydrogen (protons) is the most common ion in the solar wind, usually constituting over 95% by number density. Within a few thermal widths of their mean speed, solar wind protons are well described by a single bi-Maxwellian velocity distribution function (VDF). However, an asymmetric velocity space shoulder has also been observed in the proton distribution. It can be described by a second, differentially flowing Maxwellian. We refer to the primary proton component as the proton core (p1) and the secondary component as the proton beam (p2). Proton beams are most easily measured in fast solar wind and when the local Coulomb collision frequency is small in comparison to the local expansion time. Fully ionized helium (alpha particles, α) is the second most common species and constitutes ∼4% of the solar wind by number density. Differential flow has been measured in the solar wind plasma at many solar distances, starting in the corona and, when the local collision rate is smaller than the expansion time, extending out to and beyond 1 AU (Landi & Cranmer 2009; Marsch et al. 1982a,b; Steinberg et al. 1996; Neugebauer 1976; Kasper et al. 2008; Feldman et al. 1974; Asbridge et al. 1976; Goldstein et al. 1995).
Kasper et al. (2006) showed that α differential flow is aligned with the magnetic field B to within several degrees as long as it is larger than ∼1% of the measured solar wind speed, consistent with any apparent non-parallel flow being measurement error. It should not be surprising that differential flow is field aligned, because any finite differential flow perpendicular to B would immediately experience a Lorentz force until the plasma was again gyrotropic, on a timescale comparable to the ion gyroperiod. We denote the differential flow as ∆v_b,c = (v_b − v_c)·b̂, where ion species b differentially streams with respect to core population c and b̂ is the magnetic field unit vector. Positive differential flow is parallel to the local B and negative differential flow is antiparallel to it. Simultaneous measurements of α-particles and protons indicate that ∆v_α,p1 is typically 70% of the local Alfvén speed, C_A (Kasper et al. 2008, 2017; Neugebauer 1976; Asbridge et al. 1976; Feldman et al. 1974). While measurements of heavier ions (e.g. iron, oxygen, carbon) show similar behavior (Berger et al. 2011), proton beam-core differential flow (∆v_p2,p1) has been reported at approximately the local Alfvén speed or larger (Marsch et al. 1982b). Given that the local Alfvén speed in the solar wind is generally a decreasing function of distance from the sun, this apparent Alfvén speed limit implies that there is effectively a local wave-mitigated limit on ∆v_p2,p1, for which several instability processes have been hypothesized (Daughton & Gary 1998; Daughton et al. 1999; Goldstein et al. 2000). Simulations by Maneva et al. (2015) showed that a nonlinear streaming instability limits alpha particle drift to a maximum of 0.5 C_A. Raw data from the Wind/SWE Faraday cups are now archived at the NASA Space Physics Data Facility (SPDF) and available online at CDAweb. We have developed a new fitting algorithm that returns simultaneous parameters for three solar wind ion populations (α, p1, and p2) and have processed over 20 years of Faraday cup solar wind measurements. For this project, we have restricted the analysis to measurements with clear differential flow signatures for both the alpha particle and proton beam components. We find that ∆v_α,p1/C_A and ∆v_p2,p1/C_A are indeed clustered around characteristic values that are consistent with previous results, but with considerable spreads in the respective distributions. We investigate possible contributions to the spreads; the apparent impact of Coulomb collisions in the weakly collisional regime; and the limitations of calculating the Alfvén speed under the commonly assumed frameworks of ideal and anisotropic MHD. We report that in collisionless solar wind: 1. α particle and p2 differential flow speeds exhibit distinctly different trends with the locally measured Coulomb collision rate; 2. Coulomb collisions account for the dominant contribution to the spread in ∆v/C_A; 3. and accounting for the proton pressure anisotropy in the local Alfvén speed, as under anisotropic MHD, significantly reduces the spread in ∆v/C_A. DATA SOURCES & SELECTION The Wind spacecraft launched in fall 1994. Its twin Faraday cup instruments have collected over 6.1 million proton and alpha particle direction-dependent energy spectra, the majority of which are in the solar wind (Ogilvie et al. 1995). Available on CDAweb, these raw spectra consist of measured charge flux as a function of angle and energy-per-charge for each cup.
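Before turning to how these spectra are reduced to bulk plasma properties, note that the field-aligned drift defined above is straightforward to compute from fitted bulk velocities and the measured magnetic field. The sketch below is a minimal illustration; the function name and the numerical values are hypothetical placeholders, not Wind data.

```python
# Minimal sketch of the field-aligned differential flow defined above:
# dv = (v_b - v_c) . b_hat, with positive values parallel to B.
# All vectors are in the same (e.g., GSE) frame; values are illustrative.
import numpy as np

def differential_flow(v_b, v_c, b_vec):
    """Project the velocity difference of species b and c onto B (km/s)."""
    b_hat = np.asarray(b_vec) / np.linalg.norm(b_vec)
    return float(np.dot(np.asarray(v_b) - np.asarray(v_c), b_hat))

v_alpha = [-620.0, 25.0, 10.0]   # km/s, toy alpha-particle bulk velocity
v_core = [-580.0, 20.0, 5.0]     # km/s, toy proton-core bulk velocity
b_field = [-4.0, 3.0, 1.0]       # nT, toy magnetic field vector

dv = differential_flow(v_alpha, v_core, b_field)
print(f"field-aligned drift: {dv:.1f} km/s")  # positive = parallel to B
```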
With these spectra, we reconstruct 3D velocity distribution functions (VDFs) for each ion species and extract the bulk plasma properties: number density, velocity, and thermal speed. Over more than 20 years, refinements in the data processing algorithms have yielded new information from these distributions, including precise α particle abundances (Aellig et al. 2001; Kasper et al. 2007, 2012), perpendicular to parallel proton temperature ratios (Kasper et al. 2002, 2008), and relative alpha to proton temperature ratios (Kasper et al. 2008; Maruca et al. 2013). Ogilvie et al. (1995) provide a thorough description of the Solar Wind Experiment (SWE). In summary, the SWE Faraday cups measure a single energy window approximately every 3 s, and a full spectrum combines multiple energy windows measured over ∼92 s. Our fitting algorithm utilizes magnetic field measurements from the Wind Magnetic Field Investigation (MFI) (Koval & Szabo 2013; Lepping et al. 1995) to determine each VDF's orientation relative to the local magnetic field, and it assumes that the extracted parameters are approximately constant over the measurement time. In spectra for which this is not the case, automatically processed bulk properties can be unreliable. This new fitting algorithm returns 15 simultaneous parameters for three solar wind ion populations: alpha particles (α), proton cores (p1) and proton beams (p2). Kasper et al. (2006) describes the six-parameter α fitting routines. The protons are jointly fit by a nine-parameter set: six for p1 (number density, vector velocity, and parallel & perpendicular temperature) and three for p2 (number density, differential flow, and isotropic thermal speed). Previous work with these data includes studies by Chen et al. (2016) and Gary et al. (2016). Figure 1 shows example energy-per-charge measurements made in four representative look directions. These directions are identified by the angle between the magnetic field and the direction normal to the Faraday cup's aperture. Figure 2 provides the corresponding proton (top) and α (bottom) VDFs. The proton beam is the extension of the proton VDF to large v > 0. Our alpha particle and proton core quality requirements nominally follow Kasper et al. (2002, 2007, 2008). Because this study focuses on measurements with a clear differential flow signature, we allow an additional class of fits for which the alpha particle temperature has been fixed to the proton core temperature, so long as the alphas are well separated from the proton beam. To ensure that the magnetic field is suitably constant over the measurement time, we follow Kasper et al. (2002) and reject spectra for which the RMS fluctuation of the local magnetic field direction is larger than 20°. In addition to the reported impact on alpha particle measurements, we find that excluding these spectra also improves the overall quality of reported proton beams. To ensure that the beam is well constrained, we only include spectra for which the beam phase space density is larger than the core phase space density at the beam's bulk velocity, i.e. f_p2(v_p2)/f_p1(v_p2) ≥ 1. The vertical dashed lines in Figure 1 indicate where this ratio is evaluated in each look direction. The look directions that are most aligned with the magnetic field direction give the clearest view of the beam. Figure 3 presents histograms of the normalized differential flows: the dashed lines are alpha-proton core differential flow (∆v_α,p1/C_A) and the solid lines are proton beam-core differential flow (∆v_p2,p1/C_A).
Here, we normalize to the ideal MHD Alfvén speed following Eq. (2) and consider only the proton beam and core densities (see Section 5 for a discussion of collisional age and Section 6 for a discussion of the Alfvén speed). The gray lines are histograms of all data. In order to extract representative values and spreads thereof, we fit the green regions corresponding to 30% of the peak with a Gaussian. [Figure 3. Normalized alpha particle (α,p1; dashed) and proton beam (p2,p1; solid) differential flow in collisionless, fast solar wind. Both differential flows are normalized by an Alfvén speed approximation from Eq. (2) using both proton densities. Bins within 30% of the maximum are selected for fitting to exclude core-halo distributions. The Gaussian fits f(x) = A exp[−(1/2)((x − µ)/σ)²] yield µ = 6.73 × 10⁻¹, σ = 2.60 × 10⁻¹ for α,p1 and µ = 1.079, σ = 1.64 × 10⁻¹ for p2,p1.] In selecting this portion of the histogram, we implicitly exclude an allowed class of proton VDF fits in which dominant non-Maxwellian features appear as large tails or a halo in the proton distribution instead of a secondary peak or shoulder-like fit, because the uncertainty on the drift velocity is large. We leave these core-halo distributions for a later study. For the α-particle case, there is a distinct population with small drifts resulting from a combination of noise and poor quality fits. Requiring ∆v_α,p1/C_A ≥ 0.27 addresses this issue. The best fit Gaussians are shown in orange. Similar to previous results (e.g. Kasper et al. 2008, 2017; Marsch et al. 1982a; Reisenfeld et al. 2001), ∆v_α,p1/C_A = 67% ± 26% and ∆v_p2,p1/C_A = 108% ± 16%, where the ranges quoted are the one-sigma widths of these fits. The widths of the Gaussians, which we will heretofore denote σ_α,p1 and σ_p2,p1, are attributed to a combination of (1) the range of measured solar wind conditions that support a non-zero differential flow and (2) applicable measurement errors. In the following sections, we hypothesize and test some potential contributions to each. UNCORRELATED FLUCTUATIONS Differential flow is strongest in solar wind with large Alfvénic fluctuations and is therefore thought to be a signature of local wave-particle interactions, e.g. cyclotron-resonance-induced phase space diffusion for the case of proton beaming (Tu et al. 2004). If differential flow is in general a product of local wave-particle interactions, the difference in widths observed in the histograms of Fig. 3 may follow from a resonance condition or an aspect of the wave-particle coupling that depends on ion species characteristics, such as charge-to-mass ratio. To test this, we compare the magnitudes of correlated α and p2 streaming fluctuations about their means. Figure 4 is a 2D histogram of proton beam differential flow fluctuations (δ∆v_p2,p1) and alpha differential flow fluctuations (δ∆v_α,p1), each about their mean. Comparing fluctuations in ∆v removes other sources of variation in the magnitude of ∆v, such as large scale variations in the Alfvén speed or the bulk speed of the solar wind. Fluctuations are calculated by subtracting a running 14-minute mean from each ∆v time series, requiring spectra for ∼50% of the time period. Because the fitting algorithm returns the parallel component of the beam differential flow, comparing any other component would incorporate additional information about the magnetic field. An ellipse is fit to the 2D histogram and contours of the fit are shown. The insert gives the function and fit parameters.
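Before turning to the interpretation of that ellipse fit, the fluctuation extraction just described can be sketched compactly: subtract a running mean from each drift series and correlate the residuals. This is a minimal sketch with synthetic stand-in series; the window length in samples assumes the ∼92 s spectral cadence, and all names and values are illustrative.

```python
# Minimal sketch of the fluctuation comparison described above: subtract
# a running 14-minute mean from each drift time series and correlate the
# residuals. Series here are synthetic stand-ins on a ~92 s cadence.
import numpy as np

def running_mean_residual(x, window):
    """Return x minus its centered running mean over `window` samples."""
    kernel = np.ones(window) / window
    return x - np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(2)
n = 5_000
dv_alpha = 40 + rng.normal(0, 5, n)   # toy dv_alpha,p1 series (km/s)
dv_beam = 60 + rng.normal(0, 5, n)    # toy dv_p2,p1 series (km/s)

window = 9                            # ~14 min at one spectrum per ~92 s
d_alpha = running_mean_residual(dv_alpha, window)
d_beam = running_mean_residual(dv_beam, window)
r = np.corrcoef(d_alpha, d_beam)[0, 1]
print(f"correlation of drift fluctuations: {r:+.3f}")  # ~0 if uncorrelated
```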
The ellipse is a circle centered at the origin, indicating that the variations in ∆v_α,p1 and ∆v_p2,p1 are uncorrelated on these scales. We conclude that the difference in ∆v distribution widths, i.e. σ_α,p1 ≠ σ_p2,p1, described in the previous section is not due to any species-specific difference in response to large scale, local fluctuations. We repeated this calculation for running means calculated over various time intervals ranging from 5 minutes to more than 20 minutes and for multiple requirements on the minimum number of spectra per window. The result is not sensitive to either parameter. TRENDS WITH COLLISIONAL AGE In a hot and tenuous plasma, even in the absence of classical hard collisions, the cumulative effect of small angle Coulomb collisions acts like a simple drag force that gradually slows differentially flowing particles (Spitzer 1962). Tracy et al. (2016) showed that collisions with bulk protons are the dominant source of Coulomb drag on all other ions in the solar wind. Kasper et al. (2008, 2017) have demonstrated that ∆v_α,p1/C_A is a strong, exponentially decaying function of the Coulomb collisional age, the ratio of the local collision rate to the local expansion rate. The differential equation describing Coulomb drag is d∆v/dt = −ν_c ∆v, where ν_c is the effective collision rate. In integral form, this becomes ∆v = ∆v₀ exp(−∫₀^{t₀} ν_c dt). Under the highly-simplified assumption that ν_c and the solar wind speed (v_sw) are constant over the propagation distance r, the integral is commonly estimated as ∫₀^{t₀} ν_c dt ≈ ν_c r/v_sw. (1) We follow Kasper et al. (2008) and refer to this empirical proxy for the total number of collisions experienced over the expansion history as the collisional age (A_c) of the solar wind. Kasper et al. (2017) refer to the same quantity as the Coulomb Number (N_c). Chhiber et al. (2016) provide a detailed comparison of this empirical proxy to simulations. As we show below, the exponential decay of ∆v with collisional age implies that the ∆v/C_A histogram widths σ_α,p1 and σ_p2,p1 are highly sensitive to the range of A_c in the sample. Based on the work of Tracy et al. (2016), we neglect collisions amongst the minor populations themselves and only consider collisions of α or p2 ions with proton core ions (p1). Based on the work of Kasper et al. (2008, 2017), we limit our analysis of the collisional age dependence to the collisionless and weakly collisional regimes that constitute the range 10⁻² ≲ A_c ≲ 10⁻¹. This is the range in which ∆v_α,p1/C_A is empirically nonzero. Because the proton beam can have a non-negligible density in comparison to the proton core, we calculate the collision frequency between two species following Hernández & Marsch (1985, Eq. (23)) in a self-consistent manner by integrating over test and field particles from both components. Our treatment of the Coulomb logarithm follows Fundamenski & Garcia (2007, Eq. (18)). We assume that r is the distance traveled from a solar source surface to the spacecraft's radial location, ≈1 AU, and we take the solar wind velocity to be v_sw ≈ v_p1. Measurements of ∆v_α,p1/C_A and ∆v_p2,p1/C_A are binned by collisional age and histogrammed in Figure 5 across the aforementioned range. Each column has been normalized by its maximum value in order to emphasize the trends with A_c. Only bins with at least 30% of the column maximum are shown.
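The collisional-age proxy of Eq. (1) and the exponential drag solution are simple enough to sketch directly. In the minimal sketch below, the collision rate ν_c is taken as a given input rather than derived from the full Hernández & Marsch (1985) expression, and all numerical values are illustrative.

```python
# Minimal sketch of the collisional-age proxy of Eq. (1) and the
# Coulomb-drag decay it implies. The collision rate nu_c is a given
# input here; the paper computes it self-consistently from
# Hernandez & Marsch (1985). All values are illustrative.
import numpy as np

AU_KM = 1.495978707e8  # 1 AU in km

def collisional_age(nu_c_hz: float, r_km: float, v_sw_kms: float) -> float:
    """A_c = nu_c * r / v_sw under constant-rate, constant-speed assumptions."""
    return nu_c_hz * r_km / v_sw_kms

def drift_after_drag(dv0_kms: float, a_c: float) -> float:
    """Solution of d(dv)/dt = -nu_c * dv integrated over the expansion:
    dv = dv0 * exp(-A_c)."""
    return dv0_kms * np.exp(-a_c)

a_c = collisional_age(nu_c_hz=2e-7, r_km=AU_KM, v_sw_kms=600.0)
print(f"A_c = {a_c:.3f}, dv = {drift_after_drag(50.0, a_c):.1f} km/s")
```

For the toy numbers above, A_c ≈ 0.05 falls inside the weakly collisional range (10⁻² to 10⁻¹) examined in this section.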
To characterize the collisionally "youngest" solar wind spectra that have been measured, we define a sufficiently large and statistically significant subset that reflects the limiting behavior. We have chosen this "youngest" range to be 10⁻² ≤ A_c ≤ 1.2 × 10⁻². The rightmost limit of this subset is marked with a blue line on the figure. In the case of α particles, the decrease from the mean value in the reference or youngest region of ∆v_α,p1/C_A ∼ 0.8 down to ∆v_α,p1/C_A ∼ 0.4 over the range shown would appear to account for a significant fraction of σ_α,p1, up to a ∼40% spread. In contrast, the proton analogue exhibits a far weaker apparent decay with increasing collisions, showing a decrease of at most approximately one-tenth the slope of the alpha particle trend. In other words, ∆v_p2,p1/C_A is nearly independent of the collisional age. We would also like to derive the general and limiting cases for the differential flow speed ratio ∆v_α,p1/∆v_p2,p1 in spectra where the two are observed simultaneously. In Fig. 6, we compare ∆v_α,p1 to ∆v_p2,p1 directly in the full low-collision regime and in the very young reference regime. The ratios ∆v_α,p1/∆v_p2,p1 are histogrammed, with the dashed line indicating the full low-collision sample (10⁻² ≤ A_c ≤ 10⁻¹) and the solid line indicating the reference or youngest subsample (10⁻² ≤ A_c ≤ 1.2 × 10⁻²). The selection of data that contributes to Fig. 6 is slightly different and more restrictive than in the previous section, because here we require that both the alpha-core and proton beam-core collision rates simultaneously fall in the target range. [Figure 5. 2D histograms of α particle and p2 Alfvén-speed-normalized differential flow, each as a function of its collisional age; the color scale gives column-normalized counts. Only bins with at least 30% of the column maximum are shown. Measurements with a collisional age A_c ≤ 1.2 × 10⁻² lie to the left of the blue line.] As before, we characterize the distributions in Fig. 6 in a manner insensitive to the tails by fitting a Gaussian to bins with a count of at least 30% of the most populated bin. Similar to Fig. 3, the fits characterize the distributions' parameters up to the fit uncertainty. [Figure 6. The ratio of alpha particle to proton beam differential flow (∆v_α,p1/∆v_p2,p1) in collisionless (10⁻² ≤ A_c ≤ 10⁻¹, dashed) and the youngest measured data (10⁻² ≤ A_c ≤ 1.2 × 10⁻², solid).] As there are fewer counts in the youngest A_c range, the histograms have been normalized by their maximum values in order to emphasize the difference in the respective means (µ) and widths (σ) of the distributions. Over the low-collision range, ∆v_p2,p1 is approximately 1.6× ∆v_α,p1. Over the youngest range, that factor reduces to 1.4×. The width or characteristic spread in ∆v_α,p1/∆v_p2,p1 is 1.37× larger over the broader, low-collision range than over the youngest range. Having demonstrated that ∆v_α,p1 and ∆v_p2,p1 are uncorrelated in these ranges and that the mean value of ∆v_α,p1/C_A changes by about 0.4 over the full range, we attribute most of the spread in the ratio ∆v_α,p1/∆v_p2,p1 to the observed decay of ∆v_α,p1 with increasing Coulomb collisions. CORRECTIONS TO THE ALFVÉN SPEED Alfvén waves are parallel propagating, transverse, non-compressive fluctuations in MHD plasmas.
(Alfvén 1942) Under ideal MHD and considering only a single, simple fluid, the phase speed of these waves (the Alfvén speed) is given by the ratio of the magnetic field magnitude (B) to the square root of the mass density (ρ): C_A = B/√(μ₀ρ). (2) Barnes & Suffolk (1971) derived an approximation to the phase speed of the Alfvén wave under anisotropic MHD that accounts for pressure anisotropy and differential flow of multiple ion species: C̃_A = C_A [1 + (p_⊥ − p_∥)/(B²/μ₀) − Σ_s ρ_s ṽ_s²/(B²/μ₀)]^(1/2). (3) Here, C_A is the ideal MHD Alfvén speed from Eq. (2). The second term in the brackets gives the correction due to the thermal anisotropy of the plasma. The total thermal pressures perpendicular and parallel to the local magnetic field are p_i = Σ_s n_s k_B T_s,i = (ρ_p1/2) Σ_s (ρ_s/ρ_p1) w²_s,i for components i = ⊥, ∥. The third term in the brackets gives the correction due to the dynamic pressure from differential streaming in the plasma frame, ṽ_s = v_s − v_cm, where v_cm is the plasma's center-of-mass velocity, ρ_s is a given species' mass density, and v_s is its velocity. All species s are summed over. Pressure terms have been written in terms of mass density ratios to emphasize the significance of the correction factors discussed in the following paragraphs and cataloged in Table 1. When the plasma is isotropic and there is either vanishingly slow differential flow or a vanishingly small differentially flowing population, the term in brackets is equal to unity and Eq. (3) reduces to Eq. (2). This anisotropic, multi-component formalism of Barnes & Suffolk (1971) ought to be a more appropriate and higher fidelity description of the solar wind plasma than the commonly-invoked ideal single-fluid approximation. Nevertheless, it is instructive to give a rough illustration of the magnitude of each correction term under typical conditions. We note first that the proton core in the solar wind is often anisotropic, with core pressure ratios falling primarily in the range 0.1 ≲ p_⊥/p_∥ ≲ 10. The absolute correction to the Alfvén speed, via the second bracketed term in Eq. (3), that follows from this anisotropy alone is ∼6%-7% for the median case and can be as high as ∼50%. With regards to the third bracketed term, we note that a typical proton beam carrying 10% of the total protons at a speed of roughly C_A relative to the core would carry a ∼5% self-consistent correction to the Alfvén speed, owing to proton beam-core dynamic pressure. Our goal in this section is to relax the ideal MHD approximation by considering these next-order approximations for the speed of the predominant parallel-propagating wave in the solar wind. We explore whether the spreads in normalized differential flow, i.e. the widths of the 1D distributions of ∆v/C_A, are further minimized when the contributions of anisotropic and dynamic pressure are considered. [Figure 7. Examples of the Gaussian fits f(x) = A exp[−(1/2)((x − µ)/σ)²] to 1D distributions of α and p2 normalized differential flow, along with the associated residuals (for the case shown, µ = 1.057, σ = 1.50 × 10⁻¹). As discussed in Section 6, the Alfvén speed normalizations shown minimize the width of these distributions.] In order to disentangle this element from the Coulomb collision effect described in the previous section, we limit our analysis in this section to the "youngest" plasma, i.e. measurements drawn from the youngest-measured reference regime to the left of the blue line in Fig. 5. Figure 7 plots distributions and fits in the now-familiar style, together with the fit residuals, for one possible renormalization of ∆v_α,p1/C_A and ∆v_p2,p1/C_A.
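To make the size of the anisotropy correction concrete, the sketch below evaluates Eq. (2) and the second bracketed term of Eq. (3); the dynamic-pressure term is omitted for brevity. This is a minimal sketch in SI units, and the field, density, and pressure values are illustrative rather than measured.

```python
# Minimal sketch of Eq. (2) and the pressure-anisotropy correction in
# Eq. (3) (dynamic-pressure term omitted). SI units; values illustrative.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def alfven_speed(b_tesla: float, rho_kg_m3: float) -> float:
    """Ideal MHD Alfven speed C_A = B / sqrt(mu0 * rho), in m/s."""
    return b_tesla / np.sqrt(MU0 * rho_kg_m3)

def alfven_speed_aniso(b_tesla, rho_kg_m3, p_perp, p_par):
    """Eq. (3) with only the thermal-anisotropy bracket term retained:
    C_A * sqrt(1 + (p_perp - p_par) / (B^2 / mu0))."""
    bracket = 1.0 + (p_perp - p_par) / (b_tesla**2 / MU0)
    return alfven_speed(b_tesla, rho_kg_m3) * np.sqrt(bracket)

b = 5e-9                            # 5 nT field
rho = 5e6 * 1.67262192e-27          # 5 protons/cm^3 converted to kg/m^3
p_par, p_perp = 1.0e-11, 1.2e-11    # Pa, mildly anisotropic core
print(f"C_A       = {alfven_speed(b, rho) / 1e3:.1f} km/s")
print(f"C_A,aniso = {alfven_speed_aniso(b, rho, p_perp, p_par) / 1e3:.1f} km/s")
```

For these toy inputs the anisotropy raises the estimate by roughly 5%, in line with the ∼6%-7% median correction quoted above.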
The color selection for the various components in the top panel follows the convention from the previous figures, and again only bins with counts at least 30% of the maximum are used in the fit. Residuals are shown for the bins in the fit, and the fit parameters are shown in the inserts. The amplitudes A are omitted because they are of no consequence. In this particular case, the α and p2 differential flows are normalized by the Alfvén speeds with proton core pressure anisotropy taken into account. For reasons discussed below, the normalization in the proton beam-core example (right) also accounts for the beam contribution to the proton mass density. We consider a family of similar approximations to the Alfvén speed, each accounting for corrections associated with the measured anisotropies and multiple-component terms in Eq. (3). As these contributions rely on higher-order moments of the spectrum fit, they can carry relatively large uncertainties. If the uncertainties are significant in the aggregate, they are expected to contribute to broadening of the ∆v/C_A distributions. However, terms that are well-measured in the aggregate will improve the precision of the Alfvén speed when accounted for, and thus reduce the width of ∆v/C_A if the true differential flows are Alfvénic in nature. In the following, we examine all possible combinations in order to ascertain whether a well-measured high order correction exists that further minimizes the width of the normalized differential flow distributions. Table 1 contains fit parameters for each 1D distribution of ∆v/C_A, for both the alpha-proton and proton beam-core differential flows, using the various formulations of the Alfvén speed. Overall, we find that the widths of both ∆v/C_A distributions increase substantially when the dynamic pressure term is included, indicating either (1) that the differential flows are less strongly correlated with the generalized Alfvén speed, or (2) that the additional measurement uncertainty introduced along with a given term is in the aggregate comparable to the correction itself. However, when only the proton core temperature anisotropy correction is factored in, the distribution width is indeed reduced relative to the isotropic case. Because the core anisotropy correction term in Eq. (3) is usually (but not always) positive, it tends to increase the Alfvén speed estimate relative to the ideal MHD approximation. Thus, the corrected mean values of ∆v/C_A are generally lower. Figure 8 is a plot of the width vs. mean for select 1D fits that were performed in the style of Figure 7, illustrating these observations. In the cases shown, each Alfvén speed includes both proton densities. The cases accounting for the proton core pressure anisotropy correction factor (p_⊥ − p_∥) are indicated with squares. Cases that additionally account for the proton core dynamic pressure correction factor (p_⊥ − p_∥ − pṽ) are indicated by stars. TRENDS IN A_c Using the Alfvén speed approximation that minimizes the spread in normalized differential flow for alphas and beams, we examine the behavior of ∆v/C_A as a function of A_c and in the asymptotic limit of zero collisions. [Table 1. All fit parameters and their uncertainties, calculated in the manner of Fig. 7. The column indicates the parameter (Mean Value or Width) for a given differentially flowing species. The row indicates the wave speed normalization. The bold, colored row is the preferred normalization.
Anisotropic Alfvén speeds including the dynamic pressure term from Eq. (3) are indicated by (pṽ). The average fit uncertainty on the Mean is 4 × 10⁻³ and the average uncertainty on the Width is 5 × 10⁻³. Normalizations marked with an asterisk (*) are plotted in Fig. 8.] We applied the same methodology used to examine the 1D distributions in the youngest A_c data to binned α,p1 and p2,p1 differential flow spanning the low-collision range. Figure 9 plots these trends. Alpha particles are shown in blue and proton beams in yellow. Mean values of the 1D fits are indicated as pluses and the 1D widths are given as error bars. Fits to each trend are given as black dotted lines. Four clear features are apparent pertaining to the mean values of both normalized differential flows and to their collisional trends. First, if we consider the asymptotic limit of zero Coulomb collisions and account for the widths reported in Table 1, the alpha particles differentially stream at 67% of the local Alfvén speed and the proton beams stream at approximately the Alfvén speed. Second, that the fit constant c governing the α,p1 decay is greater than 1 indicates that our collisional age calculation over-simplifies A_c by either under-estimating r, under-estimating ν_c, over-estimating v_sw, or some combination of these. This discrepancy (cf. Kasper et al. 2017) will be a subject for future study. Third, even using the formulation of the Alfvén speed that yields the highest precision, the spread in alpha particle differential flow due to the change in mean value over the collisionless range is still ∼0.3, which is the largest single contribution to the spread in ∆v/C_A. Fourth, in the asymptotic absence of collisions, the proton beams differentially flow at very nearly (105% of) the Alfvén speed. Given the widths of the error bars in Fig. 9, the difference between the youngest resolved ∆v_p2,p1 and the asymptotic value could be due to the spread in our measurements. [Figure 9. Trends of the 1D fits to ∆v_α,p1/C_A and ∆v_p2,p1/C_A as a function of A_c. Error bars are the widths of the 1D fits. Each trend has been fit and the parameters are shown in the appropriate insert; the linear fit f(x) = m·x + b shown yields b = 1.052 and m = −4 × 10⁻¹. While ∆v_α,p1 markedly decays with increasing A_c, ∆v_p2,p1 is relatively constant with A_c. To within the fit uncertainty, proton beams differentially stream at approximately the local Alfvén speed.] DISCUSSION The evolution of solar wind velocity distribution functions is governed by an interplay between adiabatic expansion, Coulomb collisions, and wave-particle interactions. Collisional transport rates (Livi & Marsch 1986; Pezzi et al. 2016) and many types of wave-particle interactions (Verscharen et al. 2013a,b; Verscharen & Chandran 2013) depend on the small-scale structure of the VDF, in particular the small-scale velocity space gradients. Because measurements indicate the presence of alpha-proton differential flow starting at the corona and extending out to and beyond 1 AU, one can assume that non-zero differential flow is a coronal signature. Under this hypothesis, the decay of ∆v_α,p1 is due to dynamical friction (Kasper et al. 2017). As the proton beam-core drift and alpha-core drift are signatures of one plasma with a single expansion history, the collisional bottleneck that erodes ∆v_α,p1 could likewise be expected to erode ∆v_p2,p1.
However, the observed independence of ∆v_p2,p1/C_A with respect to A_c over the examined range contradicts this assumption and minimally implies either (1) an additional competing process that preferentially couples to proton beams or (2) that Eq. (1) underestimates the proton dynamical friction. Several in situ mechanisms that preferentially couple to protons have been proposed. As one example, the interaction between resonant protons and kinetic Alfvén waves leads to the local formation of beams (Voitenko & Pierrard 2015). Such a mechanism could be responsible for the creation of proton beams throughout the solar wind's evolution, or it could turn on at some distance from the sun where plasma conditions become favorable. As another example, Livi & Marsch (1987) have argued that Coulomb scattering itself, in the presence of the interplanetary magnetic field, can produce skewed and beam-like distributions under certain circumstances. The collisional age used in Eq. (1) assumes that the collision frequency describing proton dynamical friction does not change over the solar wind's evolution and is equal to the value measured at the spacecraft. Chhiber et al. (2016) have shown that such assumptions do not capture the full nature of proton radial evolution. Eq. (1) also neglects the ways in which this frequency depends on the small-scale structure of the VDF (Livi & Marsch 1986; Pezzi et al. 2016). One avenue of future work is to better address collisional effects by modeling the radial dependence, building on the work of Chhiber et al. (2016) and Kasper et al. (2017). A further refinement would be to account for the dependence of the collision frequency on the VDF fine structure (Livi & Marsch 1986; Pezzi et al. 2016). A second avenue of future work involves modeling the force required to locally maintain differential flow. By letting this force depend on local wave amplitudes, the differential flow radial evolution could perhaps be modeled from the competition between a Coulomb frictional force and a force from resonant scattering (Voitenko & Pierrard 2015). The hypotheses of proton beams as coronal in origin or as created and modified in situ are not mutually exclusive. For example, wave-resonant or frictional forcing may only be significant over a certain portion of the solar wind's radial evolution, and that range may correspond to a subset of commonly measured conditions at 1 AU. Applying a holistic model to data that is differentiated by wave power or Coulomb collisions may allow us to distinguish between or unite the two origin hypotheses. The upcoming Parker Solar Probe (Fox et al. 2015) and Solar Orbiter (Müller et al. 2013) missions, with their closer perihelia and higher energy resolution plasma instruments (Kasper et al. 2015), will also allow us to gauge the relative importance of and interplay between these effects. CONCLUSIONS In fast (>400 km s⁻¹) and collisionless (A_c ≤ 10⁻¹) solar wind, α,p1 differential flow is approximately 62% as fast as p2,p1 differential flow when measured by the Wind spacecraft's Faraday cups. The spread in α,p1 differential flow is approximately 1.7× larger than that in p2,p1 differential flow. We ruled out large-scale, in-phase wave-particle interactions by examining the correlation between fluctuations in both species' parallel differential flows over multiple time scales ranging from 5 minutes to more than 20 minutes.
Minimizing the spread in normalized differential flow due to the method used to approximate the Alfvén speed, we found that the difference in ∆v/C_A width between the two species is predominantly due to the decay of ∆v_α,p1/C_A with increasing Coulomb collisions. At the youngest resolved collisional age, when the impact of Coulomb collisions has been minimized, we find that proton core pressure anisotropy has the largest impact on minimizing the spread in normalized differential flow and that the increase in spread when including dynamic pressure in the anisotropic Alfvén speed is beyond what would be expected from random fluctuations. In the asymptotic absence of Coulomb collisions, α-particles differentially flow at approximately 67% of the local Alfvén speed and proton beams differentially flow at approximately 105% of it. This upper limit on ∆v_α,p1/C_A is close to the upper limit found by Maneva et al. (2014) and is worth further investigation. We also found that, unlike the known α,p1 decay with A_c (Neugebauer 1976; Kasper et al. 2008, 2017), proton beam differential flow minimally decays and is approximately constant with collisional age. Given the results of Tracy et al. (2016) showing that solar wind ions collisionally couple most dominantly to protons, it is unsurprising that the widths of both ∆v_α,p1/C_A and ∆v_p2,p1/C_A are smallest when the Alfvén speed accounts for the proton core. That the proton core temperature anisotropy is also significant supports the conclusion of Chen et al. (2013) that solar wind helicities are closer to unity when normalizing by the anisotropic Alfvén speed. That the beam differential flow width is smaller when it is normalized by an Alfvén speed including the beam density may indicate some coupling between the beams and local Alfvén waves, as predicted by Voitenko & Pierrard (2015). That the dynamic pressure term causes a larger spread in both species' normalized differential flow is either a result of measurement uncertainty or of some underlying physical mechanism that is beyond the scope of this paper to test.
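To illustrate the kind of correlation check cited in the conclusions — a minimal sketch, not the authors' analysis: the two differential-flow series below are synthetic, and the ~92 s sample cadence is an assumed value chosen only for illustration.

```python
# Hypothetical sketch of a rolling-correlation test between the two species'
# parallel differential flows; the time series are synthetic and the cadence
# is an assumption about the measurement rate.
import numpy as np
import pandas as pd

cadence_s = 92.0
n = 2000
t = pd.date_range("2016-01-01", periods=n, freq=f"{int(cadence_s)}s")
rng = np.random.default_rng(1)

# Synthetic normalized differential flows with uncorrelated fluctuations.
dv_alpha = 0.67 + 0.05 * rng.standard_normal(n)
dv_beam = 1.05 + 0.05 * rng.standard_normal(n)
df = pd.DataFrame({"dv_alpha": dv_alpha, "dv_beam": dv_beam}, index=t)

# Rolling Pearson correlation over windows from 5 to more than 20 minutes;
# near-zero mean correlations would argue against in-phase wave coupling.
for minutes in (5, 10, 20, 30):
    win = max(2, int(minutes * 60 / cadence_s))
    corr = df["dv_alpha"].rolling(win).corr(df["dv_beam"])
    print(f"{minutes:>2} min window: mean r = {corr.mean():+.3f}")
```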
Islamic Religious Education in Inclusive Education: Curriculum Modification for Slow Learner Students at SMP Muhammadiyah 1 Godean

ABSTRACT

Education for children with special needs has begun to change on the basis of diversity and the fulfillment of the right to obtain an education, moving toward inclusive education. Inclusive education places all students in the same learning environment, whether regular students or students with special needs, including slow learners. This qualitative descriptive research was conducted to understand the implementation of the Islamic Religious Education curriculum in an inclusive education setting, namely at SMP Muhammadiyah 1 Godean. The research subjects were obtained through the purposive sampling method, with research data obtained through observation, interviews, and documentation. Data analysis was carried out through several procedures, namely data reduction, data presentation, verification, and a data validity test using triangulation techniques. The results showed that Islamic Education learning was carried out in regular full inclusion classes using a modified regular curriculum for slow learner students. Islamic Religious Education learning at SMP Muhammadiyah 1 Godean faced several problems, such as the unavailability of Special Companion Teachers and differences in the characteristics of each student, both regular students and those with special needs.

INTRODUCTION

Education plays an important role in developing knowledge, skills, and personality for better individual progress, including for children with special needs. Previously, there was segregation of education for children with special needs: they were separated into exceptional schools intended to provide space for them and to overcome the obstacles they experience in the learning process. This kind of education model is considered to separate children with special needs from the life of society in general, so education for children with special needs began to change on the basis of diversity and the fulfillment of the right to obtain an education. From there, an education model was initiated that unites regular students with students with special needs to follow learning in the same learning environment in an inclusive manner. Government Regulation No. 17 of 2010, Article 129, paragraph (3), states that learners with special needs include those who are blind, deaf, speech-impaired, intellectually disabled, physically disabled, or emotionally disturbed, those with learning difficulties, slow learners, those with autism or motor disorders, victims of abuse of narcotics, illegal drugs, and other addictive substances, and those with other disorders. Slow learner students have better abilities than intellectually disabled children, but their abilities remain below average; their condition places them between the category of regular students and that of students with special needs of the tunagrahita (intellectual disability) type (Mumpuniarti, 2007). They have difficulty understanding abstract concepts, a short attention span, slow responses, and more difficulty grasping learning materials than regular students in general. Such conditions limit their abilities, including their academic ability and coordination skills. Relatedly, Indonesia supports the implementation of inclusive education that brings together regular children and children with special needs through Law No. 20 of 2003 on the National Education System.
In Article 15, it is explained that special education is education organized for students with particular disorders or needs, or with above-average intelligence, carried out inclusively or in the form of special education units at the elementary and secondary levels. In addition, Permendiknas No. 70 of 2009, Article 4, states that district- or city-level governments must appoint at least one elementary school and one junior high school to organize inclusive education. Inclusive education is a form of education that facilitates regular children and children with special needs studying together in the same school (Tarmansyah, 2007). In inclusive education, schools strive to provide access and accommodate all students' needs regardless of differences in physical, intellectual, or social condition, including children with disabilities, gifted children, street children, or children from marginalized groups. Inclusive education emphasizes openness in accepting students with special needs in order to grant their rights as citizens (Ilahi, 2013). Inclusive education strives to develop the potential and intelligence of its students, taking into account the diversity and individual needs of each student and providing equal opportunities to learn and actualize themselves together, as in schools in general (Indianto, 2013). Furthermore, the Directorate of PLB described the following learning models (Latif et al., 2013):
1. Regular full inclusion class: the merging of classes as a whole and the use of the same curriculum for regular students and students with special needs.
2. Regular class with cluster: in a group, students with special needs follow the learning along with other regular students in the same class.
3. Regular class with pull out: students with special needs learn with regular students in regular classes, and on certain occasions are pulled out to continue learning in the resource room together with a special tutor.
4. Regular class with cluster and pull out: a combination of the cluster and pull-out models, in which groups of students with special needs study with regular students in regular classes and on certain occasions are pulled out to study under the guidance of a Special Guidance Teacher in the resource room.
5. Special class with various integration: students with special needs learn in special classes in regular schools, but in certain areas they are still given opportunities to study with regular students in regular classes.
6. Full special class: learning for students with special needs is implemented entirely in special classes in regular schools.
Based on the above exposition, it can be concluded that schools providing inclusive education do not require every student with special needs to participate in learning in regular full inclusion classes; they also provide adequate space for students to study in special classes with a special companion teacher. It therefore follows that the learning curriculum for children with special needs in inclusive schools requires various adjustments according to the conditions and characteristics of the students in the school.
Law No. 20 of 2003 on the National Education System explains that the curriculum is a set of plans regarding the objectives, content, and materials of lessons, as well as the methods used as guidelines for implementing learning activities to achieve the intended educational objectives. Previous research asserted that the curriculum is the school's effort to influence students to learn, both inside and outside the classroom and school (Rusman, 2011). Zainal Arifin similarly explained that the curriculum is an organized learning experience in a certain form carried out under the guidance and supervision of the school (Arifin, 2011). In the formulation process, in addition to being based on philosophical thinking and values, especially the state philosophy, the curriculum also pays attention to the demands and conditions of society, so that the curriculum is instrumental in reaching the expected educational objectives, and errors in curriculum preparation can lead to educational failure (Ramayulis & Nizar, 2010). It can thus be concluded that the curriculum is a guideline for the process of teaching and learning activities, formulated based on the condition of the community and carried out in an organized manner under the guidance and supervision of the school. Curriculum models in inclusive education can be grouped into three (Garnida, 2015). The first is the regular curriculum model, which applies a general curriculum to all students, both regular students and students with special needs; adjustments and service programs for students with special needs are directed at their guidance or learning motivation. The second is the modified regular curriculum model, which combines a general curriculum with an individualized learning curriculum; curriculum development for students with special needs is done by modifying the general curriculum according to their individual potential and characteristics. The third is the individualized curriculum model, in which the curriculum for students with special needs is individualized in a specially designed and developed learning program format (Sulthon, 2013). From these three models, schools can choose the curriculum model to be used, adjusted to the number of students with special needs served, the type of guidance, the availability and readiness of educators, and the school's facilities and infrastructure. The present research analyzes the implementation of the Islamic Religious Education curriculum in an inclusive education setting, namely at SMP Muhammadiyah 1 Godean.

METHOD

This research is classified as descriptive qualitative field research. It was conducted with the aim of understanding the implementation of the Islamic Religious Education curriculum in the inclusive education setting at SMP Muhammadiyah 1 Godean. The study subjects were selected using purposive sampling to make it easier for the researchers to explore specific social objects and situations. The sources in this study were the principal, the deputy headmaster for curriculum, and the teachers of Islamic Religious Education subjects at SMP Muhammadiyah 1 Godean. Research data were obtained through observation, interviews, and documentation. Participant observation was carried out to learn directly about the activities of the research subjects. The researchers also used a flexible, unstructured interview method, in which each question could be changed as needed.
Furthermore, documentation was used to obtain documentary data. Data analysis was carried out by studying all the data obtained from the various sources (Almanshur et al., 2012). In this study, data analysis was conducted through several procedures, namely data reduction, data presentation, verification, and a data validity test using triangulation techniques.

RESULT AND DISCUSSION

Curriculum Model of Islamic Religious Education in Inclusive Education at SMP Muhammadiyah 1 Godean

SMP Muhammadiyah 1 Godean handles students with special needs in the slow learner category. Based on the results of the study, it is known that Islamic Religious Education subjects are also given to slow learner students with the aim of introducing them to Allah SWT and teaching them about the Quran and daily worship, especially fardhu prayer. Teachers of Islamic Religious Education subjects confirm that, despite the students' condition, the school continues to strive for Islamic learning as a form of sincerity and responsibility to provide understanding for them. Educators consider these students essentially the same as regular children in general; they simply need different learning strategies and a longer time to understand things, including Islamic education materials. As explained by Sholawati (2019), inclusive school learning is tailored to the development and type of guidance needed by each student. This is also the case at SMP Muhammadiyah 1 Godean, where the learning process for slow learner students is generally the same as for regular students, with adjustments and simplifications in some aspects, namely the material, strategy, and evaluation of learning used. The simplification of Islamic Religious Education material for them appears as a narrowed scope of learning and a lowered level of difficulty. The simplification of the material is based on Bloom's taxonomy: if the material for regular students generally reaches the C4 or C5 level, the material for slow learner students only reaches the C2 or C3 level. The learning competencies they are ideally expected to master tend to be directed at aspects of knowledge and the implementation of daily worship practices. In Islamic education learning strategies for slow learner students, teachers pay attention to the students' individual characteristics and needs. In line with Hamzah B. Uno and Nurdin Mohamad (Uno & Mohamad, 2011), the selection of learning strategies must be based on learning objectives, an analysis of students' needs and characteristics, and the learning materials, adapted to the media and learning resources available and usable in the school. Islamic education learning strategies for slow learner students are basically implemented with active and cooperative learning strategies, in which teachers also involve slow learner students in group work, practicums, or peer tutoring. Cooperative learning has been shown to increase the learning motivation of slow learner students (Anita & AB, 2019), and peer tutoring strategies are known to help slow learner students understand learning materials so that they can gradually find effective learning models for themselves (Vasudevan, 2017). During the learning process, slow learner students are asked to sit in the front row close to the teacher's desk.
This is done to maintain concentration and optimize the overall use of their senses, as well as to facilitate the teacher's mobility in providing attention or guidance during the learning process. Rilla Melyana (in Fitri et al., 2019) mentioned that this arrangement makes it easier for teachers to guide slow learner children more intensely, to motivate them, and to maintain eye contact during learning. In addition, teachers tend to prioritize the continuity of slow learner students' attention over the speed at which they complete learning tasks, making this a useful strategy for training their focus and concentration skills. Slow learner students at SMP Muhammadiyah 1 Godean tend to be quiet and shy, and the model of classroom learning services falls into the regular full inclusion category. This class allows slow learners to follow the learning process along with regular students in general, using the same curriculum. In addition, Islamic education learning for slow learner students at SMP Muhammadiyah 1 Godean is conducted directly between the subject teacher and the learner, without a special accompanying teacher. Therefore, to ensure their understanding of the learning materials, slow learner students in particular receive additional time to study privately. This additional learning takes place after school hours, accompanied directly by the subject teacher, and is delivered in concrete language suited to the characteristics of slow learner students. This kind of mentoring can improve slow learner students' achievement, because slow learner students have obstacles in understanding abstract concepts; their achievement tends to increase when material is delivered concretely, with additional time for learning and doing tasks, and with ongoing training to develop academic skills (Fitri et al., 2019). During the Islamic Education learning process, teachers strive to monitor the activities and development of each student, including slow learner students. Based on the interview results, when students violate the rules of learning in class, the teacher reprimands or even sanctions them. However, the sanctions given to students are useful ones that educate their character, such as reading the Qur'an, picking up garbage scattered in the school yard, cleaning the blackboard, and so on. Similarly, teachers give awards in the form of praise or gifts to every student who is good, orderly, and obedient during learning. This also applies to slow learner students; teachers do not hesitate to praise them, applaud them, or state publicly in front of the class that they are exemplary figures in the classroom. The awarding of such rewards and punishments is known to have a great influence on students' learning motivation. Nevertheless, Anggraini noted that rewards and punishments are effective only when applied appropriately, because giving them too often can turn them into an unfavorable habit (Anggraini et al., 2019). Evaluation of Islamic Religious Education at SMP Muhammadiyah 1 Godean is conducted in a structured and scheduled manner for all students. Evaluation is carried out to track students' development on both the academic and non-academic sides. Academic evaluations take the form of questions or quizzes, daily tests, midterm exams, and final semester exams to gauge students' ability to understand the material.
There is an adjustment of the evaluation form for students with special needs in the slow learner category at SMP Muhammadiyah 1 Godean. The evaluation adjustment is made in line with the simplification of Islamic education materials for them. Adjustments to the evaluation of Islamic Religious Education learning for students with special needs appear in a narrower scope of evaluated material, a much lower level of difficulty, simpler question wording, and a longer duration of time to work on the questions. Remedial programs are also applied to students with special needs, except that teachers do not directly impose minimum standards on them. Teachers provide grades based not only on the results of the evaluation but also on the process and efforts of the slow learner students during learning. In addition, slow learner students often take evaluations orally or through direct practice, especially for applicative materials. This form of accommodation is in accordance with research on the effect of academic intervention on the developmental skills of slow learners (N.I. et al., 2012), which notes that the cognitive limitations of slow learner students make paper-and-pencil tasks difficult for them, so tasks need to be connected with creative activities to support material achievement. Non-academic evaluation is done by providing monitoring cards that serve as a tool to track students' activities while at home. The monitoring card is a short fill-in card that records daily worship activities, such as fardhu prayer, reading Iqro'/Qur'an, and sunnah fasting, in table form. This kind of monitoring card can help slow learner students address problems related to interpersonal communication, initiative, and poor motivation (N.I. et al., 2012). The monitoring card serves as a liaison medium between teachers and the parents or guardians of students, so that applicative Islamic education materials can be practiced by students and observed and well controlled by parents and guardians at home.

Problems in Implementing the Islamic Religious Education Curriculum in the Inclusive Education Setting at SMP Muhammadiyah 1 Godean

The implementation of a modified regular curriculum in Islamic Religious Education subjects for students with special needs at SMP Muhammadiyah 1 Godean faced several problems, including the absence of a special companion teacher for slow learner students. Special assistance teachers have quite an important role in inclusive schools, because their presence during the learning process can help maximize the understanding of students with special needs. Therefore, to replace the role of a special companion teacher, in each lesson the teacher must condition the class accordingly and spend more time on the individual mentoring of slow learner students. In addition, the conditions and characteristics of students are also among the problems in learning Islamic Religious Education at SMP Muhammadiyah 1 Godean. On some occasions, regular students tend to be difficult to manage, disrupting the concentration of slow learner students, while slow learner students themselves have characteristics that tend to be quiet and shy.
Such conditions make it increasingly difficult for teachers to identify students' level of understanding during the learning process. Several similar problems related to the implementation of Islamic Religious Education in inclusive schools can be overcome by adding capable and competent educators in the field of special education, adding infrastructure for children with special needs, reviewing the curriculum, and establishing cooperation with parents (Khotimah, 2019). In addition, the school can also contribute directly through intense socialization to all members of the school community so that students with special needs are well received at school (Hasyim, 2013). These findings showed that Islamic Education learning was carried out in regular full inclusion classes using a modified regular curriculum for slow learner students. Curriculum adjustment is done by paying attention to the conditions and characteristics of slow learner students and by modifying the material, delivery strategies, and evaluation of learning. In practice, Islamic education learning at SMP Muhammadiyah 1 Godean faced several problems, including the unavailability of a Special Companion Teacher for slow learner students and the differing conditions and characteristics of each student, both regular and with special needs. This research is in line with Wahyuno, Ruminiati, and Sutrisno (2014), who confirmed that in implementing learning in inclusive schools, classroom teachers generally experience obstacles or difficulties primarily associated with the types and characteristics of the students in the class. Inclusive education is aimed at accommodating learning needs from a very broad spectrum in formal and informal education settings, not merely integrating children who are marginalized into mainstream education (Sunarto & Hidayah, 2017). Inclusive education is an approach to changing the education system so that it can accommodate a very diverse range of students. The aim is to enable both teachers and students to feel comfortable with differences and to see them as challenges and enrichment in the learning environment rather than as problems. The implementation of inclusive education is influenced by many factors, including cultural, political, and human resource factors (Cansız & Cansız, 2018; Kwon, 2005). According to Ainscow (2005), the implementation of inclusive education can be evaluated using the index for inclusion. Conceptually, this inclusion index is constructed from three dimensions, namely (1) the cultural dimension (creating inclusive cultures), (2) the policy dimension (producing inclusive policies), and (3) the practice dimension (evolving inclusive practices).

CONCLUSIONS

Islamic Education learning for students with special needs at SMP Muhammadiyah 1 Godean is carried out in regular full inclusion classes using a modified regular curriculum model. Curriculum adjustment is done by paying attention to the conditions and characteristics of slow learner students and by modifying the material, delivery strategies, and evaluation of learning.
In practice, Islamic education learning at SMP Muhammadiyah 1 Godean faced several problems, including the absence of a Special Companion Teacher for slow learner students and the conditions and characteristics of slow learner students, who tend to be quiet and shy, making it increasingly difficult for teachers to identify students' level of understanding during the learning process.
Can Glaucoma Suspect Data Help to Improve the Performance of Glaucoma Diagnosis?

Purpose: The presence of imbalanced datasets in medical applications can negatively affect deep learning methods. This study aims to investigate how the performance of convolutional neural networks (CNNs) for glaucoma diagnosis can be improved by addressing imbalanced learning issues through utilizing glaucoma suspect samples, which are often excluded from studies because they are a mixture of healthy and preperimetric glaucomatous eyes, in a semi-supervised learning approach.

Methods: A baseline 3D CNN was developed and trained on a real-world glaucoma dataset, which is naturally imbalanced (like many other real-world medical datasets). Then, three methods — reweighting samples, data resampling to form balanced batches, and semi-supervised learning on glaucoma suspect data — were applied to practically assess their impacts on the performances of the trained methods.

Results: The proposed method achieved a mean accuracy of 95.24%, an F1 score of 97.42%, and an area under the curve of the receiver operating characteristic (AUC ROC) of 95.64%, whereas the corresponding results for traditional supervised training using weighted cross-entropy loss were 92.88%, 96.12%, and 92.72%, respectively. The obtained results show statistically significant improvements in all metrics.

Conclusions: Exploiting glaucoma suspect eyes in a semi-supervised learning method coupled with resampling can improve glaucoma diagnosis performance by mitigating imbalanced learning issues.

Translational Relevance: Clinical imbalanced datasets may negatively affect medical applications of deep learning. Utilizing data with an uncertain diagnosis, such as glaucoma suspects, through a combination of semi-supervised learning and class-imbalanced learning strategies can partially address the problems of having limited data and learning on imbalanced datasets.

Introduction

Glaucoma is a neuro-degenerative disease and one of the leading causes of blindness worldwide. 1 Deep learning approaches have recently been used to diagnose and assess glaucoma with promising results. Fundus images, 2-5 optical coherence tomography (OCT) volumes, 6-9 or thickness maps 10-14 obtained from OCT volumes are usually utilized for glaucoma assessment through either segmentation-based 10-13 or segmentation-free 6-9,14 methods. However, regardless of the underlying models and imaging types utilized for diagnosis, in a real-world glaucoma dataset the number of samples from one class (the majority class, e.g. glaucoma cases) is much higher than that from the other class (the minority class, e.g. healthy cases), and thus the datasets are imbalanced. 5-9,15 Training models on imbalanced datasets usually leads to inaccurate parameter estimation and generalization failure, because the learning algorithm spends most of its time training on majority-class samples while underestimating minority-class samples. Two common ways to handle imbalanced learning are reweighting samples and re-sampling. 16,17 In addition, it has recently been demonstrated that using unlabeled data alongside labeled data in a semi-supervised learning approach can improve performance in class-imbalanced learning. 18
Here, we hypothesized that utilizing the gray-zone data of glaucoma suspects as a source of unlabeled data, in addition to the typically used fully labeled glaucoma dataset (which includes both healthy and glaucomatous samples), would be beneficial in developing a feature-agnostic 3D convolutional neural network (CNN) for glaucoma diagnosis from OCT volumes. In reality, glaucoma suspect samples are a mix of healthy and preperimetric glaucoma cases with a suspicious optic disc appearance and/or ocular hypertension. Due to the difficulty of accurate early diagnosis, it is common practice to classify patients as glaucoma suspect when not all of the diagnosis criteria are satisfied. 19 Although a few studies have treated glaucoma suspect as a separate class from both healthy and glaucomatous eyes, 20,21 in the majority of studies 2-15,22 glaucoma suspect samples were simply discarded. The former approach retains the distribution of glaucoma and healthy samples, but it cannot help with class-imbalanced learning to improve diagnosis performance. The latter approach discards portions of the data obtained during clinical studies in the hope of avoiding confounding the data. 2-15,22 In contrast, in this paper we show that glaucoma suspect samples can be used to (1) mitigate the limitations of imbalanced learning and (2) increase the overall number of samples. Both of these features are crucial for training deep neural networks. To the best of our knowledge, this is the first study to propose the creative use of glaucoma suspect data in order to maximally exploit all glaucoma-related data in a clinical study for improving the performance of a deep learning-based glaucoma diagnosis system.

Dataset

Optic nerve head (ONH)-centered OCT volumes were captured using spectral-domain OCT devices (Cirrus HD-OCT; Zeiss, Dublin, CA, USA) from patients over multiple visits between 2005 and 2019 at the UPMC Eye Center and the NYU Langone Eye Center. Retinal diseases other than glaucoma and refractive errors greater than +6 or smaller than −6 diopters were excluded. Each OCT volume was obtained by scanning an area of 6 × 6 × 2 mm³ on the retina and storing the result as a 200 × 200 × 1024 (horizontal × vertical × depth) data cube. Data collection was conducted in accordance with the tenets of the Declaration of Helsinki and the Health Insurance Portability and Accountability Act (HIPAA). The Institutional Review Boards of New York University and the University of Pittsburgh approved the study, and all subjects gave written consent before participation. Our initial dataset consisted of 12,863 OCT volumes (signal strength ≥ 6) from the eyes of 794 individuals. These scans were labeled either as healthy or as having primary open-angle glaucoma (POAG). Patients with at least two consecutively abnormal visual field test results were considered glaucomatous by specialists, and healthy scans were captured from individuals without a clinical history of glaucoma. Given this dataset, we first removed repeated scans (taken on the same date from the same individual), keeping the scans with the highest signal strength. In the end, 8476 scans remained, of which 683 were healthy and 7793 were POAG scans. Therefore, the class imbalance ratio (defined as the ratio of the number of samples in the majority class to the number of samples in the minority class 18) is around 12.
The demographic characteristics of the healthy and glaucomatous samples are provided in the second and fourth columns of Table 1. We divided all healthy and glaucoma scans into three groups — training, validation, and test subsets — containing 5434 (442 healthy scans from 113 individuals), 1348 (105 healthy from 23 individuals), and 1694 (136 healthy from 32 individuals) samples, respectively. We ensured that OCT volumes belonging to the same patient were included in only one of the three splits to avoid data leakage, and the class imbalance ratio was kept fixed in each split. We refer to this dataset as the main dataset. In addition to the main dataset, there was an extra set of 8318 OCT volumes from 751 individuals labeled as glaucoma suspect, because the visual field test results were normal but at least one of the following criteria was met: intraocular pressure of 22 to 30 mm Hg; abnormal ONH appearance (including but not limited to increased cupping [diffuse or focal narrowing of the disc rim], asymmetric ONH cupping, recurrent disc hemorrhages, large ONH with thin disc rims, and/or anomalous discs); or an eye that was the contralateral eye of unilateral glaucoma. As with the main dataset of healthy and POAG scans, we removed the repeated scans from the glaucoma suspect samples; the remaining 5648 OCT volumes were kept aside to be used as our additional source of data for semi-supervised training (Proposed Method section). Finally, all OCT volumes were downsampled to 64 × 64 × 256 data cubes using 3D bicubic interpolation, because training a neural network on data samples at their original size is impractical. 6-9,15 The demographic characteristics of the glaucoma suspect samples are presented in the third column of Table 1.

Proposed Method

Following previous work, 6-9,15 we first developed a baseline 3D CNN consisting of 7 convolutional layers followed by a global average pooling layer and a fully connected layer with two neurons (Fig. 1). Each convolution layer has a 3 × 3 convolution kernel with stride 1, batch normalization, and rectified linear unit (ReLU) activation. 3D maximum pooling is used after each convolution layer to gradually reduce the dimensionality of the data and extract more semantically meaningful features. The numbers of feature maps in the convolution operators are 16, 16, 32, 32, 32, 64, and 128, respectively. The output of the fully connected layer is followed by a softmax activation to provide a probability score for each class (a hypothetical sketch of this architecture is given below). To train the 3D CNN architecture in Figure 1 on the main dataset (without glaucoma suspect samples; Dataset section), weighted cross-entropy (WCE) is used, following previous studies. 6-9,15 In WCE, samples from each category are weighted by the reciprocal of their corresponding class proportions; thus, the cost of failures on minority (healthy) samples increases. We refer to this method as "3D CNN + WCE." Next, we construct our second method by exploiting resampling (RE), 16,17,23 a common method for handling imbalanced learning. Specifically, we construct balanced batches during training by uniformly sampling from both classes of the training set. 23 In this way, the same number of samples from each class is present in every training batch, and thus the gradients always carry useful information about both classes. We refer to this method as "3D CNN + RE."
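The following is a hypothetical Keras sketch of the baseline network described above, not the authors' released code. The paper states 3 × 3 kernels; for 3D convolutions we assume 3 × 3 × 3. We also assume pooling of size 2 with 'same' padding, so that the 64-voxel axes do not collapse under seven pooling steps, and a single input channel for the 64 × 64 × 256 cubes.

```python
# Hypothetical sketch of the baseline 3D CNN; kernel depth, pooling padding,
# and channel count are assumptions not stated verbatim in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_baseline_3dcnn(input_shape=(64, 64, 256, 1), n_classes=2):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    # Seven conv blocks with 16, 16, 32, 32, 32, 64, and 128 feature maps,
    # each followed by batch normalization, ReLU, and 3D max pooling.
    for n_filters in (16, 16, 32, 32, 32, 64, 128):
        x = layers.Conv3D(n_filters, kernel_size=3, strides=1, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.MaxPooling3D(pool_size=2, padding='same')(x)
    x = layers.GlobalAveragePooling3D()(x)
    outputs = layers.Dense(n_classes, activation='softmax')(x)
    return models.Model(inputs, outputs)

model = build_baseline_3dcnn()
# The paper trains with SGD at a fixed learning rate of 0.01.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss='categorical_crossentropy', metrics=['accuracy'])
```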
To harness the power of all the glaucoma data usually collected in clinical studies (Dataset section), the third 3D CNN model is trained on a dataset comprising the main dataset (healthy and glaucomatous samples) and the glaucoma suspect samples. We treat glaucoma suspect samples as a source of unlabeled data in a semi-supervised learning (SSL) approach by adding them to the pool of training samples and leaving both the validation and test subsets untouched. Pseudo-labels for the glaucoma suspect samples are generated using our second model (3D CNN + RE). Obviously, we are more confident about the labels in the main dataset than about these pseudo-labels. Therefore, we reduce their weight in the overall loss computation by using L_T = L + αL_U as the total loss function, where L denotes the cross-entropy loss for the samples from the main dataset, L_U denotes the cross-entropy loss for the glaucoma suspect samples, and the weight α controls the contribution of the loss value from the glaucoma suspect samples. We refer to this method as "3D CNN + SSL." In this method, the hyperparameter α was empirically set to 0.7 by checking the performance on the validation set: when it was less than 0.7, the improvement over 3D CNN + RE decreased or became negligible, and bigger values resulted in a performance decline. In addition to the three mentioned methods, we also trained two combined configurations: (1) "3D CNN + SSL + RE," in which a 3D CNN is trained on the main dataset with glaucoma suspect samples included while performing the resampling technique to create balanced batches during training, and (2) "3D CNN + SSL + RE + WCE," in which we also applied weighted cross-entropy along with SSL and resampling by creating balanced batches. As in "3D CNN + WCE," the weight for each sample is computed as the reciprocal of the total number of samples in its corresponding class. We trained all of the mentioned methods using stochastic gradient descent with a fixed learning rate of 0.01 and a batch size of 8. We avoided changing these hyperparameters in our experiments to decrease the chance of interference from other factors in improving the performance. During training, early stopping with a patience of 10 epochs was used and checkpoints were saved to prevent overfitting and to select the best model. The methods were implemented using Python and Keras with a TensorFlow 24 backend on a desktop PC with 16 GB of RAM and an NVIDIA GeForce RTX 2080 Ti GPU. In the upcoming section, the performances of the methods are evaluated using accuracy, balanced accuracy, F1 score, and area under the curve of the receiver operating characteristic (AUC ROC). A minimal sketch of the combined loss and the balanced-batch sampling is given below.
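This sketch is hypothetical and illustrative, not the authors' code: it encodes the total loss L_T = L + αL_U with α = 0.7 and a generator that yields balanced batch indices by uniformly sampling both classes; all function and variable names are ours.

```python
# Minimal sketch of the semi-supervised objective and balanced-batch sampling;
# names and structure are illustrative, not taken from the paper's code.
import numpy as np
import tensorflow as tf

ALPHA = 0.7  # weight on the pseudo-labeled (glaucoma suspect) loss term
cce = tf.keras.losses.CategoricalCrossentropy()

def total_loss(y_true_lab, y_pred_lab, y_pseudo, y_pred_unlab):
    """L_T = L (labeled cross-entropy) + ALPHA * L_U (pseudo-label cross-entropy)."""
    return cce(y_true_lab, y_pred_lab) + ALPHA * cce(y_pseudo, y_pred_unlab)

def balanced_batch_indices(labels, batch_size=8, rng=None):
    """Yield index batches with equal numbers of healthy (0) and glaucoma (1) samples."""
    rng = rng or np.random.default_rng()
    healthy = np.flatnonzero(labels == 0)
    glaucoma = np.flatnonzero(labels == 1)
    half = batch_size // 2
    while True:
        yield np.concatenate([rng.choice(healthy, half),
                              rng.choice(glaucoma, half)])

labels = np.array([1] * 90 + [0] * 10)  # toy imbalanced labels, ratio 9:1
gen = balanced_batch_indices(labels)
print(next(gen))  # first 4 indices are healthy samples, last 4 are glaucoma
```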
Accuracy is defined as the number of correctly classified samples divided by the total number of samples in the test dataset. Balanced accuracy 25 is defined as the arithmetic mean of sensitivity (recall) and specificity, where sensitivity is the ratio of correct glaucoma predictions to the total number of glaucoma samples and specificity is the ratio of correct healthy predictions to the total number of healthy samples. The F1 score is the harmonic mean of precision and recall, that is, 2 × precision × recall/(precision + recall), where precision is the ratio of correct glaucoma predictions to the total number of glaucoma predictions. Both balanced accuracy and the F1 score are usually reported when the dataset is imbalanced. The F1 score favors classifiers that make precise predictions on the positive class (here, the glaucoma class), while balanced accuracy favors classifiers with good recognition rates on both the positive and negative classes. AUC ROC measures the ability of the classifier to distinguish between classes and summarizes the ROC curve.

Results

To evaluate the compared methods, 5-fold cross-validation was used, and the average metric results on the test dataset of 1694 OCT volumes (Dataset section), along with 95% confidence intervals, are reported in Table 2. (In Table 2, the last column, "glaucoma suspect?", indicates whether each method exploited glaucoma suspect samples in its training procedure; the best mean results are shown in bold, and results that are statistically significant in comparison with 3D CNN + RE are indicated by "*".) Among the methods with (w/) or without (w/o) glaucoma suspect samples, the best performance was shown by 3D CNN + SSL + RE. In addition, the Mann-Whitney U test 26 was performed to statistically compare the results of the methods against the best baseline method without glaucoma suspect samples, which is 3D CNN + RE. In this test, a U value less than the critical value at the 0.05 significance level is considered statistically significant. The proposed 3D CNN + SSL + RE method is the only SSL-based method that not only achieved better averaged results but whose results are also statistically significantly better than the baseline. The results in Table 2 show that 3D CNN + SSL was as effective as using weighted cross-entropy without glaucoma suspect samples (3D CNN + WCE), a widely used technique for learning from an imbalanced glaucoma dataset. 6-9,15 However, 3D CNN + RE, which uses data re-sampling, showed better performance than 3D CNN + SSL. Observing the effectiveness of this method, we retained the resampling technique and coupled it with both the use of glaucoma suspect samples (through SSL) and WCE to practically test their contributions. The results in Table 2 show that performance improved when glaucoma suspect samples were exploited along with the resampling technique (denoted by 3D CNN + SSL + RE in Table 2). However, adding WCE (3D CNN + SSL + RE + WCE) had a negative effect, probably because modifying the weights in the loss function became less important once the amount of data was increased by using glaucoma suspect samples and, at the same time, the mini-batches were resampled to contain the same number of samples from each class. It is also worth mentioning that, in our experiments, when no imbalanced learning method was used, the training collapsed and the accuracy of the trained method was simply equal to the proportion of glaucoma (majority class) samples; that is, the trained method classified every input image as glaucoma and was unable to recognize the healthy cases.

Discussion

The superiority of 3D CNN + SSL (semi-supervised learning with the help of glaucoma suspect samples) over the common 3D CNN + WCE (weighted cross-entropy) complies with the findings of Ref. 18, whose reported results showed that SSL was at least as effective as reweighting approaches such as WCE in learning from imbalanced datasets. However, in their experiments, re-sampling gave inferior results in comparison to SSL.
Data resampling 16-18 is a common technique for handling learning on imbalanced datasets, usually applied at the data level by random subsampling. In our experiments, we used a re-sampling technique that has previously been shown to be very effective in facial expression recognition on imbalanced datasets. 23 Specifically, in 3D CNN + RE, we created balanced batches to prevent the training batches from being dominated by samples from one class. 24 Although our experiments practically showed that glaucoma suspect samples can be used to improve performance, it is worth mentioning that this gain was observed because glaucoma suspect samples are indeed a mixture of healthy and preperimetric glaucoma cases. 19 This similarity is important for expecting benefits from using unlabeled data in an SSL approach on an imbalanced dataset. 18 To intuitively show this similarity, we projected the feature vectors obtained from the GAP layer at the end of a trained 3D CNN into 2D using t-SNE 27 (Figure 2) to visually compare how the network understands glaucoma suspect, healthy, and glaucomatous samples. It is clear that the features computed for glaucoma suspect samples are very similar to those of the healthy and glaucomatous ones. Therefore, using glaucoma suspect samples is a way of increasing the amount of similar data, and it partially addresses the problem of limited data in glaucoma diagnosis by exploiting all the data collected in a clinical study. A hypothetical sketch of this feature-projection step follows.
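To make the projection step concrete, the following sketch (not the authors' code) extracts GAP-layer features from a small stand-in network and embeds them with scikit-learn's t-SNE; the input size is reduced and the data are random placeholders so that the snippet runs cheaply.

```python
# Hypothetical sketch of GAP-feature extraction followed by t-SNE; the network
# and data are small stand-ins (the real volumes are 64x64x256 cubes and the
# real features come from the trained CNN sketched earlier).
import numpy as np
import tensorflow as tf
from sklearn.manifold import TSNE

inputs = tf.keras.Input(shape=(16, 16, 32, 1))
x = tf.keras.layers.Conv3D(128, 3, padding='same', activation='relu')(inputs)
gap = tf.keras.layers.GlobalAveragePooling3D(name='gap')(x)
outputs = tf.keras.layers.Dense(2, activation='softmax')(gap)
model = tf.keras.Model(inputs, outputs)

# Second model that stops at the GAP layer, yielding 128-D feature vectors.
feature_extractor = tf.keras.Model(model.input, model.get_layer('gap').output)

volumes = np.random.rand(32, 16, 16, 32, 1).astype('float32')  # placeholders
features = feature_extractor.predict(volumes, batch_size=8)     # shape (32, 128)
embedding = TSNE(n_components=2, perplexity=10).fit_transform(features)
print(embedding.shape)  # (32, 2) points to scatter-plot, as in Figure 2
```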
In conclusion, this paper emphasized the importance of addressing class-imbalanced learning for the performance of a deep learning-based glaucoma diagnosis method operating on OCT volumes. Although many studies discard glaucoma suspect samples, 2-15,22 our experiments showed that the novel usage of glaucoma suspect samples through an SSL approach can partially alleviate imbalanced learning issues by increasing the number of semantically similar training examples. However, the use of glaucoma suspect samples alone is not sufficient; to boost the performance, it is crucial to use resampling through balanced batches to further address imbalanced learning issues and to exploit all relevant data collected in a clinical study for training the method.
The prevalence of brain abnormalities in boys with central precocious puberty may be overestimated

Brain magnetic resonance imaging (MRI) is routinely performed to identify brain lesions in boys with central precocious puberty (CPP). We investigated the prevalence of CPP in Korean boys and the necessity of routine brain MRI examinations. This retrospective cross-sectional study was conducted from April 2003 to December 2016 at a Korean university hospital. Among 151 boys who were diagnosed with CPP, the data of 138 boys who underwent sellar MRI were evaluated. The mean age of the study subjects was 9.51 ± 0.56 years (<8 years [n = 4] and ≥8 years [n = 134]). We excluded patients who had been previously diagnosed with brain tumors and those who did not undergo a sellar MRI because of refusal or the decision of the pediatric endocrinologist. The main outcome measure was the prevalence of intracranial lesions among boys with CPP. Normal sellar MRI findings were observed in 128 of the 138 boys (93%). Mild brain abnormalities were found in 10 boys (7%), while none of the patients had pathological brain lesions. The prevalence (7%) of intracranial lesions among boys who were healthy, had no neurological symptoms, and were diagnosed with CPP differed from that previously reported. None of the identified lesions necessitated treatment. Although this was a single-country study, we found that the previously reported prevalence of brain lesions in boys with CPP is much higher than the prevalence observed in Korea. This study suggests the need to globally reevaluate the prevalence of pathological brain lesions among male pediatric patients with CPP.

Introduction

Central precocious puberty (CPP) is the onset of secondary sexual characteristics before the age of 8 years in girls and 9 years in boys; it is caused by early activation of the hypothalamic-pituitary-gonadal axis [1]. CPP may be a sign of an existing central nervous system pathology. The North American and European Pediatric Endocrinology Societies concluded that girls aged <6 years with CPP should undergo brain magnetic resonance imaging (MRI). However, considering that only 2-7% of girls with an onset of CPP between the ages of 6 and 8 years have unsuspected pathology and only <1% have tumors, it is controversial whether all girls who develop CPP between the ages of 6 and 8 years should undergo brain MRI [2]. In contrast to girls, in whom 90% of CPP is idiopathic, approximately 40-75% of boys with CPP have pathological brain lesions [1,2]. Thus, brain MRI of all boys with CPP, regardless of age, is an essential screening tool to exclude pathological brain lesions. The criteria for brain MRI examinations are strongly influenced by the diagnostic criteria of CPP, such as the age at diagnosis. The mean age of pubertal onset has declined globally over the last two decades [3-5]. Even in healthy children, the age at the onset of puberty is lower than it has ever been. However, there has been no change in the diagnostic criteria for CPP, and the reported prevalence of brain lesions in boys with CPP has remained steady. Therefore, the purpose of this study was to investigate the prevalence of brain lesions in boys with CPP over the past decade in Korea and to compare it to previously reported data.
Study participants

The data of 151 boys who were diagnosed with CPP at the Pediatric Endocrine Unit of the Ajou University Hospital (Gyeonggi-do, Korea) between April 2003 and December 2016 were retrospectively analyzed. The inclusion criteria were a diagnosis of CPP before the age of 10 years and the completion of a sellar MRI. The exclusion criteria were neurofibromatosis, congenital adrenal hyperplasia associated with precocious puberty, a previously identified brain tumor (1 patient with astrocytoma, aged 1 month, and 1 patient with optic glioma, aged 15 months), hydrocephalus, and trauma (n = 8). Moreover, we excluded five patients who refused to undergo an MRI. A total of 138 boys were included in the final analysis. The study protocol was approved by the institutional review board of the Ajou University Hospital (AJIRB-MED-MDB-16-459). The requirement for informed consent was waived owing to the retrospective nature of this study. This research was conducted in accordance with the principles of the Declaration of Helsinki.

Diagnosis of central precocious puberty

The CPP diagnosis was based on a clinical evaluation that included a full clinical history of the patients and caregivers, followed by Tanner staging conducted by a pediatric endocrinologist. It was confirmed on the basis of an increase in testicular volume before the age of 9 years, bone age advancement >2 standard deviations above the chronological age, and a peak luteinizing hormone (LH) level of ≥5 mIU/mL after a gonadotropin-releasing hormone (GnRH) stimulation test [6]. Height was measured to the nearest 0.1 cm using a Harpenden stadiometer (Holtain Ltd., Crymych, Wales, UK), and weight was measured to the nearest 0.1 kg using a digital scale. The body mass index (BMI) was calculated by dividing the weight by the square of the height (kg/m²). The standard deviation scores (SDS) for height, weight, and BMI were calculated based on the 2007 Korean National Growth Charts [7]. Bone age was assessed via radiography of the left hand, according to the method established by Greulich and Pyle [8]. Serum LH and follicle-stimulating hormone (FSH) levels were measured using an immunoradiometric assay (BioSource, Nivelles, Belgium) with detection limits of 0.1 IU/L and 0.2 IU/L, respectively. Testosterone levels were measured using a radioimmunoassay with a detection limit of 0.01 ng/mL. The GnRH stimulation test was performed during the day, and serum LH and FSH levels were determined at 0, 30, 45, 60, and 90 min after the GnRH injection (100 μg Relefact; Sanofi-Aventis, Frankfurt, Germany). All boys diagnosed with CPP underwent sellar MRI, with pre- and post-gadolinium-enhanced T1- and T2-weighted images in axial, coronal, and sagittal sections. The sellar MRI examinations were performed on two kinds of machines: prior to 2016, GE scanners with 1.5 T (HDxt) and 3.0 T (Discovery MR750w) magnets (GE Healthcare, USA), and subsequently a Philips scanner.
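As an illustrative aside — a minimal sketch, not part of the study's analysis pipeline: it encodes the biochemical confirmation criterion described above (peak stimulated LH ≥ 5 mIU/mL across the 0-90 min GnRH-test samples) and the BMI formula; the function names and structure are hypothetical.

```python
# Minimal illustrative check of the biochemical CPP criterion and the BMI
# formula; names and structure are hypothetical, not the study's actual code.
LH_PEAK_THRESHOLD = 5.0  # mIU/mL, per the GnRH stimulation criterion above

def gnrh_test_positive(lh_series_miu_ml):
    """True if the peak LH over the 0/30/45/60/90 min samples is >= 5 mIU/mL."""
    return max(lh_series_miu_ml) >= LH_PEAK_THRESHOLD

def bmi(weight_kg, height_cm):
    """BMI = weight (kg) / height (m)^2."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

print(gnrh_test_positive([1.2, 4.8, 6.1, 5.4, 3.9]))  # True: peak 6.1 >= 5
print(round(bmi(32.0, 135.0), 1))                     # 17.6 kg/m^2
```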
Statistical analyses

Statistical analyses were performed using SPSS version 21.0 (IBM Corp., Armonk, NY, USA). The Mann-Whitney U test was used to compare the characteristics of boys with normal sellar MRI findings and those with abnormal findings. Statistical significance was defined as a P-value <0.05. The results are reported as means ± standard deviations, unless otherwise stated.

Data limitations

When diagnosing boys with CPP, it is difficult to determine the exact start time of the increase in testicular volume. Therefore, at the time of CPP diagnosis, determining the age at pubertal onset can be a challenge in boys. The interval between the first observation of pubertal symptoms by parents and the diagnosis at the first consultation with an endocrinologist is frequently about 1.5 years [9,10]. The same difficulties were encountered in this study; we included boys with CPP who were diagnosed before the age of 10 years.

Results

The brain MRI findings of the 138 boys with CPP according to age group are presented in Table 1. MRI revealed central nervous system abnormalities in 6 boys (4.3%). All MRI findings were considered incidental findings unrelated to early puberty, namely pituitary hyperplasia (n = 3), thickening of the pituitary stalk (n = 1), a Rathke's cleft cyst (n = 1), and pineal cysts (n = 5; Fig 2). The boys with CPP showing incidental findings underwent ≥2 MRI scans during the follow-up period (mean, 6 months). None of the boys showed central nervous system-related symptoms or underwent surgery. Table 2 depicts the differences in clinical and biochemical characteristics between cases with normal MRI findings and those with incidental findings. We found no significant differences in the mean age at diagnosis, height (SDS), weight (SDS), BMI (SDS), Tanner stage, bone age (SDS), basal LH levels, basal FSH levels (mIU/mL), or testosterone levels (ng/mL). The temporal distribution of patients diagnosed with both CPP and brain abnormalities is presented in Table 3. The number of boys diagnosed with CPP and the number of incidental findings have significantly increased since 2012.

Discussion

The known causes of CPP include idiopathic cases as well as organic brain lesions such as hypothalamic hamartoma, brain tumors, and hydrocephalus [1,11-14]. The predictors of organic brain lesions in CPP patients remain unknown; therefore, pediatric endocrinologists routinely recommend brain MRI for all children with CPP. The prevalence of brain lesions is much higher in boys than in girls with CPP [11,12,15,16]; thus, brain MRI is recommended more frequently for boys than for girls. We conducted sellar MRI examinations in boys with CPP over 14 years (2003-2016) and obtained results that are significantly different from those previously reported, which showed that approximately 40-75% of boys with CPP have pathological brain lesions [1,2]. In the present study, the MRI results did not reveal any lesions that required intervention, and all findings were classified as incidental. A possible association between CPP and Rathke's cleft cyst has been reported in previous studies [17,18], although this condition rarely affects children and adolescents [17]. However, in this study, we identified a Rathke's cleft cyst in only 1 of 138 patients with CPP (0.7%). Our case did not exhibit any clinical symptoms and did not require treatment. Pineal cysts are common in the pediatric population, with a higher prevalence among girls and older children [20]. All pineal cysts identified in this study were benign and did not require treatment. All the MRI findings of the present study are also observed in healthy individuals; thus, it remains unclear whether they were causally related to CPP. Pituitary hyperplasia is a frequent incidental finding and may cause precocious puberty; however, the incidence of pituitary hyperplasia in the general population is unknown [21].
In the present study, pituitary hyperplasia was diagnosed via sellar MRI alone and was defined as a bulging contour of the pituitary gland with a height >6 mm [22]. The follow-up period of our study was 52.3 ± 34.7 months (range, 20.0-89.0 months); within this period, no clinical findings of pituitary adenomas were obtained. Therefore, findings indicative of pituitary hyperplasia were considered to suggest normal glands. We were able to compare our results with the MRI data of girls with CPP who were treated at the same hospital during the same period. During the study period, a total of 455 pediatric patients with CPP (317 girls and 138 boys) underwent sellar MRI at the hospital. The mean age of the girls with CPP was 6.89 ± 1.60 years. Normal sellar imaging findings were observed for 308 of the 317 girls (97.2%). CPP-related incidental findings were found in 9 girls (2.8%). The imaging studies revealed only cases of Rathke's cleft cysts (n = 7) and pineal cysts (n = 2). Other findings, such as those of suspected pituitary gland hyperplasia (n = 12), suspected pituitary hypoplasia (n = 2), and suspected microadenoma (n = 3), were considered normal findings. In both girls and boys with CPP, suspected pituitary gland hyperplasia, Rathke's cleft cyst, and pineal cyst were the main incidental findings. However, we were unable to ascertain how these findings are related to CPP. Our data revealed an increasing trend in the number of boys with CPP and in the number of incidental findings since 2012. However, the increasing number of boys with CPP might be related to the increase in the number of sellar MRI examinations during the study period; moreover, since this was a single-institution study, it was difficult to ascertain the annual incidence of incidental findings. According to Kim et al., the annual incidence of CPP in boys in Korea increased from 0.3 to 1.2 per 100,000 boys between 2004 and 2010, although this increase was not significant [5]. In this study, we saw a significant difference in the number of incidental findings before and after 2011. It is likely that the age at the onset of puberty is decreasing in both girls and boys in Korea. Our findings indicate a need for up-to-date nationwide statistics on the annual incidence of CPP in boys in Korea. This study reported that pathologic brain lesions are rare in Korean boys with CPP, in contrast to the previously published prevalence of 40-75% [1,2]. There are two main potential explanations for the significant difference between the previously published prevalence of brain lesions among boys with CPP and the prevalence obtained in this study. First, the most recent studies on this prevalence were conducted about two decades ago. Although the reasons remain unknown, the mean age at pubertal onset has been declining, and the prevalence of CPP has been increasing in many countries [4,5,23]. It is known that 10% of girls and 40-75% of boys with CPP show brain abnormalities [1,11]; however, it is unclear how the increase in the prevalence of CPP is linked to the increase in brain abnormalities. Second, the previously reported prevalence of brain lesions in boys with CPP (of 40-75%) was based on a smaller number of subjects compared with the current study. The prevalence of CPP in Korean boys has not shown a significantly increasing trend [5]; therefore, we should consider the influence of sociocultural factors besides underlying causes such as pathophysiologic mechanisms.
Generally, it is more difficult for parents to identify the beginning of puberty in boys than in girls; therefore, cases of CPP could be underrecognized. In Korea, CPP is a social issue, and many parents are keen on addressing it. In the present study, a large number of boys were able to visit the hospital with only signs of CPP, and the diagnosis was subsequently confirmed. It is important to recognize the relationship between CPP and brain lesions at an early age. As in previous studies, this study also showed that brain lesions manifest at young ages; the affected children were diagnosed with neurological problems before they were diagnosed with CPP. In the present study, most patients with CPP were around 9 years of age, whereas the onset of puberty was estimated to be around 8 years of age. In this age group, there were no brain abnormalities that required intervention. Moreover, no new brain lesions requiring treatment were detected. This is very different from the previously reported prevalence of brain abnormalities (40-75%) in boys with CPP [1,2]. The present study has several limitations. First, the prevalence of brain lesions across ages remains unclear because only a few subjects aged <7 years were included. Second, this study was conducted in a single country and at a single institution; however, the results are meaningful owing to the large number of subjects and the period of data analysis (14 years). Third, we examined only Korean children, and it is possible that the regional prevalence of brain lesions in CPP cases is related to the study population, ethnicity, healthcare system, and social factors (e.g., access to medical care and resources to diagnose CPP).

Conclusion

In conclusion, the present study revealed a significantly lower prevalence of brain lesions among boys with CPP than that previously reported. Although it is possible that there are regional or ethnic differences in the prevalence of CPP, we believe that the prevalence of pathological brain lesions in CPP cases has been overestimated. Since this study was only conducted in a Korean population, the results should be reevaluated in other countries. The findings of this study suggest that the routine use of brain MRI to screen all patients with CPP should be reconsidered, particularly in boys who are healthy, neurologically asymptomatic, and aged ≥8 years.
2018-04-26T19:15:14.571Z
2018-04-03T00:00:00.000
{ "year": 2018, "sha1": "aafbe0f89bb9087b00dc8c8257cda55f60d600ab", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0195209&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aafbe0f89bb9087b00dc8c8257cda55f60d600ab", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
51982253
pes2o/s2orc
v3-fos-license
Multimodal Language Analysis with Recurrent Multistage Fusion

Computational modeling of human multimodal language is an emerging research area in natural language processing spanning the language, visual and acoustic modalities. Comprehending multimodal language requires modeling not only the interactions within each modality (intra-modal interactions) but more importantly the interactions between modalities (cross-modal interactions). In this paper, we propose the Recurrent Multistage Fusion Network (RMFN) which decomposes the fusion problem into multiple stages, each of them focused on a subset of multimodal signals for specialized, effective fusion. Cross-modal interactions are modeled using this multistage fusion approach which builds upon intermediate representations of previous stages. Temporal and intra-modal interactions are modeled by integrating our proposed fusion approach with a system of recurrent neural networks. The RMFN displays state-of-the-art performance in modeling human multimodal language across three public datasets relating to multimodal sentiment analysis, emotion recognition, and speaker traits recognition. We provide visualizations to show that each stage of fusion focuses on a different subset of multimodal signals, learning increasingly discriminative multimodal representations.

Introduction

Computational modeling of human multimodal language is an upcoming research area in natural language processing. This research area focuses on modeling tasks such as multimodal sentiment analysis (Morency et al., 2011), emotion recognition (Busso et al., 2008), and personality traits recognition (Park et al., 2014). The multimodal temporal signals include the language (spoken words), visual (facial expressions, gestures) and acoustic modalities (prosody, vocal expressions).

Figure 1: At each recursive stage, a subset of multimodal signals is highlighted and then fused with previous fusion representations. The first fusion stage selects the neutral word and frowning behaviors which create an intermediate representation reflecting negative emotion when fused together. The second stage selects the loud voice behavior which is locally interpreted as emphasis before being fused with previous stages into a strongly negative representation. Finally, the third stage selects the shrugging and speech elongation behaviors that reflect ambivalence and when fused with previous stages is interpreted as a representation for the disappointed emotion.

At its core, these multimodal signals are highly structured with two prime forms of interactions: intra-modal and cross-modal interactions (Rajagopalan et al., 2016). Intra-modal interactions refer to information within a specific modality, independent of other modalities. For example, the arrangement of words in a sentence according to the generative grammar of a language (Chomsky, 1957) or the sequence of facial muscle activations for the presentation of a frown. Cross-modal interactions refer to interactions between modalities. For example, the simultaneous co-occurrence of a smile with a positive sentence or the delayed occurrence of laughter after the end of a sentence.
Modeling these interactions lies at the heart of human multimodal language analysis and has recently become a central research direction in multimodal natural language processing (Liu et al., 2018; Pham et al., 2018), multimodal speech recognition (Gupta et al., 2017; Harwath and Glass, 2017; Kamper et al., 2017), as well as multimodal machine learning (Tsai et al., 2018; Srivastava and Salakhutdinov, 2012; Ngiam et al., 2011). Recent advances in cognitive neuroscience have demonstrated the existence of multistage aggregation across human cortical networks and functions (Taylor et al., 2015), particularly during the integration of multisensory information (Parisi et al., 2017). At later stages of cognitive processing, higher level semantic meaning is extracted from phrases, facial expressions, and tone of voice, eventually leading to the formation of higher level cross-modal concepts (Parisi et al., 2017; Taylor et al., 2015). Inspired by these discoveries, we hypothesize that the computational modeling of cross-modal interactions also requires a multistage fusion process. In this process, cross-modal representations can build upon the representations learned during earlier stages. This decreases the burden on each stage of multimodal fusion and allows each stage of fusion to be performed in a more specialized and effective manner. In this paper, we propose the Recurrent Multistage Fusion Network (RMFN) which automatically decomposes the multimodal fusion problem into multiple recursive stages. At each stage, a subset of multimodal signals is highlighted and fused with previous fusion representations (see Figure 1). This divide-and-conquer approach decreases the burden on each fusion stage, allowing each stage to be performed in a more specialized and effective way. This is in contrast with conventional fusion approaches which usually model interactions over multimodal signals altogether in one iteration (e.g., early fusion). In RMFN, temporal and intra-modal interactions are modeled by integrating our new multistage fusion process with a system of recurrent neural networks. Overall, RMFN jointly models intra-modal and cross-modal interactions for multimodal language analysis and is differentiable end-to-end. We evaluate RMFN on three different tasks related to human multimodal language: sentiment analysis, emotion recognition, and speaker traits recognition across three public multimodal datasets. RMFN achieves state-of-the-art performance in all three tasks. Through a comprehensive set of ablation experiments and visualizations, we demonstrate the advantages of explicitly defining multiple recursive stages for multimodal fusion.

Related Work

Previous approaches in human multimodal language modeling can be categorized as follows:

Non-temporal Models: These models simplify the problem by using feature-summarizing temporal observations. Each modality is represented by averaging temporal information through time, as shown for language-based sentiment analysis (Iyyer et al., 2015; Chen et al., 2016) and multimodal sentiment analysis (Abburi et al., 2016; Nojavanasghari et al., 2016; Zadeh et al., 2016; Morency et al., 2011). Conventional supervised learning methods are utilized to discover intra-modal and cross-modal interactions without specific model design (Wang et al., 2016; Poria et al., 2016). These approaches have trouble modeling long sequences since the average statistics do not properly capture the temporal intra-modal and cross-modal dynamics (Xu et al., 2013), as the toy example below illustrates.
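To illustrate this last point concretely, the following snippet (our own toy example, not drawn from any of the cited works) shows how mean-pooling over time makes two opposite temporal patterns indistinguishable:

```python
import numpy as np

# Two toy one-dimensional "modalities" observed over six time steps:
# a rising signal (e.g., steadily increasing vocal intensity) and its reverse.
rising = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
falling = rising[::-1]

# A non-temporal model summarizes each sequence by its average...
print(rising.mean(), falling.mean())  # both print 0.5 -- indistinguishable

# ...so any interaction tied to *when* behaviors occur, such as a laugh
# delayed until after a sentence ends, is invisible to such a model.
```

Temporal models, such as the graphical and recurrent approaches below, avoid this collapse by conditioning on the order of observations.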
Multimodal Temporal Graphical Models: The application of graphical models in sequence modeling has been an important research problem. Hidden Markov Models (HMMs) (Baum and Petrie, 1966), Conditional Random Fields (CRFs) (Lafferty et al., 2001), and Hidden Conditional Random Fields (HCRFs) were shown to work well on modeling sequential data from the language (Misawa et al., 2017; Ma and Hovy, 2016; Huang et al., 2015) and acoustic (Yuan and Liberman, 2008) modalities. These temporal graphical models have also been extended for modeling multimodal data. Several methods have been proposed, including multi-view HCRFs where the potentials of the HCRF are designed to model data from multiple views (Song et al., 2012), multi-layered CRFs with latent variables to learn hidden spatiotemporal dynamics from multi-view data (Song et al., 2012), and multi-view Hierarchical Sequence Summarization models that recursively build up hierarchical representations (Song et al., 2013).

Multimodal Temporal Neural Networks: More recently, with the advent of deep learning, Recurrent Neural Networks (Elman, 1990; Jain and Medsker, 1999) have been used extensively for language and speech based sequence modeling (Zilly et al., 2016; Soltau et al., 2016), sentiment analysis (Socher et al., 2013; dos Santos and Gatti, 2014; Glorot et al., 2011; Cambria, 2016), and emotion recognition (Bertero et al., 2016; Lakomkin et al., 2018). Long-short Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997a) have also been extended for multimodal settings (Rajagopalan et al., 2016) and by learning binary gating mechanisms to remove noisy modalities. Recently, more advanced models were proposed to model both intra-modal and cross-modal interactions. These use Bayesian ranking algorithms (Herbrich et al., 2007) to model both person-independent and person-dependent features, generative-discriminative objectives to learn either joint (Pham et al., 2018) or factorized multimodal representations (Tsai et al., 2018), external memory mechanisms to synchronize multimodal data (Zadeh et al., 2018a), or low-rank tensors to approximate expensive tensor products (Liu et al., 2018). All these methods assume that cross-modal interactions should be discovered all at once rather than across multiple stages, where each stage solves a simpler fusion problem. Our empirical evaluations show the advantages of the multistage fusion approach.

Recurrent Multistage Fusion Network

In this section we describe the Recurrent Multistage Fusion Network (RMFN) for multimodal language analysis (Figure 2). Given a set of modalities $M$, each sequence $\mathbf{X}^m$, $m \in M$, is modeled with an intra-modal recurrent neural network (see subsection 3.3 for details). At time $t$, each intra-modal recurrent network will output a unimodal representation $\mathbf{h}^m_t$. The Multistage Fusion Process uses a recursive approach to fuse all unimodal representations $\mathbf{h}^m_t$ into a cross-modal representation $\mathbf{z}_t$ which is then fed back into each intra-modal recurrent network.

Multistage Fusion Process

The Multistage Fusion Process (MFP) is a modular neural approach that performs multistage fusion to model cross-modal interactions. Multistage fusion is a divide-and-conquer approach which decreases the burden on each stage of multimodal fusion, allowing each stage to be performed in a more specialized and effective way. The MFP has three main modules: HIGHLIGHT, FUSE and SUMMARIZE. Two modules are repeated at each stage: HIGHLIGHT and FUSE.
The HIGHLIGHT module identifies a subset of multimodal signals from $\mathbf{h}_t$ that will be used for that stage of fusion. The FUSE module then performs two subtasks simultaneously: a local fusion of the highlighted features and integration with representations from previous stages. Both HIGHLIGHT and FUSE modules are realized using memory-based neural networks which enable coherence between stages and storage of previously modeled cross-modal interactions. As a final step, the SUMMARIZE module takes the multimodal representation of the final stage and translates it into a cross-modal representation $\mathbf{z}_t$. Figure 1 shows an illustrative example for multistage fusion. The HIGHLIGHT module selects "neutral words" and the "frowning" expression for the first stage. The local and integrated fusion at this stage creates a representation reflecting negative emotion. For stage 2, the HIGHLIGHT module identifies the acoustic feature "loud voice". The local fusion at this stage interprets it as an expression of emphasis and is fused with the previous fusion results to represent a strong negative emotion. Finally, the highlighted features of "shrug" and "speech elongation" are selected and are locally interpreted as "ambivalence". The integration with previous stages then gives a representation closer to "disappointed".

Module Descriptions

In this section, we present the details of the three multistage fusion modules: HIGHLIGHT, FUSE and SUMMARIZE. Multistage fusion begins with the concatenation of intra-modal network outputs, $\mathbf{h}_t = \bigoplus_{m \in M} \mathbf{h}^m_t$. We use superscript $[k]$ to denote the indices of each stage $k = 1, \dots, K$ during $K$ total stages of multistage fusion. Let $\Theta$ denote the neural network parameters across all modules.

Figure 2: At each stage, the HIGHLIGHT module identifies a subset of multimodal signals and the FUSE module performs local fusion before integration with previous fusion representations. The SUMMARIZE module translates the representation at the final stage into a cross-modal representation $\mathbf{z}_t$ to be fed back into the intra-modal recurrent networks.

HIGHLIGHT: At each stage $k$, a subset of the multimodal signals represented in $\mathbf{h}_t$ will be automatically highlighted for fusion. Formally, this module is defined by the process function $f_H$,

$$\mathbf{a}^{[k]}_t = f_H\big(\mathbf{h}_t, \mathbf{a}^{[1:k-1]}_t; \Theta\big),$$

where $\mathbf{a}^{[k]}_t$ is a set of attention weights which are inferred based on the previously assigned attention weights $\mathbf{a}^{[1:k-1]}_t$. As a result, the highlights at a specific stage $k$ will be dependent on previous highlights. To fully encapsulate these dependencies, the attention assignment process is performed in a recurrent manner using an LSTM which we call the HIGHLIGHT LSTM. The initial HIGHLIGHT LSTM memory at stage 0, $\mathbf{c}^{[0]}_t$, is initialized from the concatenated intra-modal representations $\mathbf{h}_t$. This allows the memory mechanism of the HIGHLIGHT LSTM to dynamically adjust to the intra-modal representations $\mathbf{h}_t$. The output of the HIGHLIGHT LSTM is softmax activated to produce attention weights $\mathbf{a}^{[k]}_t$ at every stage $k$ of the multistage fusion process, and $\mathbf{a}^{[k]}_t$ is fed as input into the HIGHLIGHT LSTM at stage $k + 1$. Therefore, the HIGHLIGHT LSTM functions as a decoder LSTM (Sutskever et al., 2014; Cho et al., 2014) in order to capture the dependencies on previous attention assignments. Highlighting is performed by element-wise multiplying the attention weights $\mathbf{a}^{[k]}_t$ with the concatenated intra-modal representations $\mathbf{h}_t$:

$$\tilde{\mathbf{h}}^{[k]}_t = \mathbf{a}^{[k]}_t \odot \mathbf{h}_t,$$

where $\odot$ denotes the Hadamard product and $\tilde{\mathbf{h}}^{[k]}_t$ are the attended multimodal signals that will be used for the fusion at stage $k$.
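As a concrete illustration, here is a minimal PyTorch sketch of the HIGHLIGHT step as we read it from the description above. This is our own sketch, not the authors' released code: the module layout, dimension choices and the linear memory initialization (`init_mem`) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class Highlight(nn.Module):
    """Decoder-style LSTM that emits softmax attention weights over the
    concatenated intra-modal representation h_t, one set per stage k."""
    def __init__(self, dim_h, num_stages):
        super().__init__()
        self.num_stages = num_stages
        # The cell consumes the previous stage's attention weights a_t^[k-1].
        self.cell = nn.LSTMCell(input_size=dim_h, hidden_size=dim_h)
        # Assumed detail: initialize the stage-0 memory from h_t itself.
        self.init_mem = nn.Linear(dim_h, dim_h)

    def forward(self, h_t):
        a = torch.zeros_like(h_t)            # a_t^[0]: no prior highlights yet
        hx = torch.zeros_like(h_t)
        cx = torch.tanh(self.init_mem(h_t))  # memory adjusts to h_t
        highlighted = []
        for _ in range(self.num_stages):
            hx, cx = self.cell(a, (hx, cx))  # conditions on earlier a's via state
            a = torch.softmax(hx, dim=-1)    # a_t^[k]
            highlighted.append(a * h_t)      # Hadamard product: h~_t^[k]
        return highlighted                   # one FUSE input per stage

# Example: 3 modalities with 32-dim outputs concatenated into h_t, K = 3 stages.
h_t = torch.randn(8, 3 * 32)
stages = Highlight(dim_h=3 * 32, num_stages=3)(h_t)
```

Each element of `stages` would then be passed to the FUSE module described next, together with the fusion representations of earlier stages.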
FUSE: The highlighted multimodal signals are simultaneously fused in a local fusion and then integrated with fusion representations from previous stages. Formally, this module is defined by the process function $f_F$,

$$\mathbf{s}^{[k]}_t = f_F\big(\tilde{\mathbf{h}}^{[k]}_t, \mathbf{s}^{[1:k-1]}_t; \Theta\big),$$

where $\mathbf{s}^{[k]}_t$ denotes the integrated fusion representations at stage $k$. We employ a FUSE LSTM to simultaneously perform the local fusion and the integration with previous fusion representations. The FUSE LSTM input gate enables a local fusion while the FUSE LSTM forget and output gates enable integration with previous fusion results. The initial FUSE LSTM memory at stage 0, $\mathbf{c}^{[0]}_t$, is initialized analogously to the HIGHLIGHT LSTM memory.

SUMMARIZE: The SUMMARIZE module translates the fusion representation of the final stage $K$ into a cross-modal representation. Formally, this operation is defined as

$$\mathbf{z}_t = f_S\big(\mathbf{s}^{[K]}_t; \Theta\big),$$

where $\mathbf{z}_t$ is the final output of the multistage fusion process and represents all cross-modal interactions discovered at time $t$. The summarized cross-modal representation is then fed into the intra-modal recurrent networks as described in subsection 3.3.

System of Long Short-term Hybrid Memories

To integrate the cross-modal representations $\mathbf{z}_t$ with the temporal intra-modal representations, we employ a system of Long Short-term Hybrid Memories (LSTHMs) (Zadeh et al., 2018b). The LSTHM extends the LSTM formulation to include the cross-modal representation $\mathbf{z}_t$ in a hybrid memory component:

$$\mathbf{i}^m_{t+1} = \sigma\big(W^m_i \mathbf{x}^m_{t+1} + U^m_i \mathbf{h}^m_t + V^m_i \mathbf{z}_t + \mathbf{b}^m_i\big),$$
$$\mathbf{f}^m_{t+1} = \sigma\big(W^m_f \mathbf{x}^m_{t+1} + U^m_f \mathbf{h}^m_t + V^m_f \mathbf{z}_t + \mathbf{b}^m_f\big),$$
$$\mathbf{o}^m_{t+1} = \sigma\big(W^m_o \mathbf{x}^m_{t+1} + U^m_o \mathbf{h}^m_t + V^m_o \mathbf{z}_t + \mathbf{b}^m_o\big),$$
$$\bar{\mathbf{c}}^m_{t+1} = W^m_c \mathbf{x}^m_{t+1} + U^m_c \mathbf{h}^m_t + V^m_c \mathbf{z}_t + \mathbf{b}^m_c,$$
$$\mathbf{c}^m_{t+1} = \mathbf{f}^m_{t+1} \odot \mathbf{c}^m_t + \mathbf{i}^m_{t+1} \odot \tanh\big(\bar{\mathbf{c}}^m_{t+1}\big),$$
$$\mathbf{h}^m_{t+1} = \mathbf{o}^m_{t+1} \odot \tanh\big(\mathbf{c}^m_{t+1}\big),$$

where $\sigma$ is the (hard-)sigmoid activation function, $\tanh$ is the tangent hyperbolic activation function, and $\odot$ denotes the Hadamard product. $\mathbf{i}$, $\mathbf{f}$ and $\mathbf{o}$ are the input, forget and output gates respectively, $\bar{\mathbf{c}}^m_{t+1}$ is the proposed update to the hybrid memory $\mathbf{c}^m_t$ at time $t + 1$, and $\mathbf{h}^m_t$ is the time distributed output of each modality. The cross-modal representation $\mathbf{z}_t$ is modeled by the Multistage Fusion Process as discussed in subsection 3.2. The hybrid memory $\mathbf{c}^m_t$ contains both intra-modal interactions from individual modalities $\mathbf{x}^m_t$ as well as the cross-modal interactions captured in $\mathbf{z}_t$.

Optimization

The multimodal prediction task is performed using a final representation $E$ which integrates (1) the last outputs from the LSTHMs and (2) the last cross-modal representation $\mathbf{z}_T$. Formally, $E$ is defined as

$$E = \Big(\bigoplus_{m \in M} \mathbf{h}^m_T\Big) \oplus \mathbf{z}_T,$$

where $\oplus$ denotes vector concatenation. $E$ can then be used as a multimodal representation for supervised or unsupervised analysis of multimodal language. It summarizes all modeled intra-modal and cross-modal representations from the multimodal sequences. RMFN is differentiable end-to-end, which allows the network parameters $\Theta$ to be learned using gradient descent approaches.

Experimental Setup

To evaluate the performance and generalization of RMFN, three domains of human multimodal language were selected: multimodal sentiment analysis, emotion recognition, and speaker traits recognition.

Datasets

All datasets consist of monologue videos. The speaker's intentions are conveyed through three modalities: language, visual and acoustic.

Multimodal Features and Alignment

GloVe word embeddings (Pennington et al., 2014), Facet (iMotions, 2017) and COVAREP (Degottex et al., 2014) features are extracted for the language, visual and acoustic modalities respectively. Forced alignment is performed using P2FA (Yuan and Liberman, 2008) to obtain the exact utterance times.

Baseline Models

We compare to the following models for multimodal machine learning: MFN (Zadeh et al., 2018a) synchronizes multimodal sequences using a multi-view gated memory. It is the current state of the art on CMU-MOSI and POM. MARN (Zadeh et al., 2018b) models intra-modal and cross-modal interactions using multiple attention coefficients and hybrid LSTM memory components.
GME-LSTM(A) learns binary gating mechanisms to remove noisy modalities that are contradictory or redundant for prediction. TFN models unimodal, bimodal and trimodal interactions using tensor products.

Evaluation Metrics

For classification, we report accuracy $A^c$, where $c$ denotes the number of classes, and F1 score. For regression, we report Mean Absolute Error (MAE) and Pearson's correlation $r$. For MAE, lower values indicate stronger performance; for all remaining metrics, higher values indicate stronger performance.

Performance on Multimodal Language

Results on CMU-MOSI, IEMOCAP and POM are presented in Tables 1, 2 and 3 respectively. We achieve state-of-the-art or competitive results for all domains, highlighting RMFN's capability in human multimodal language analysis. We observe that RMFN does not improve results on IEMOCAP neutral emotion, and the model outperforming RMFN is a memory-based fusion baseline (Zadeh et al., 2018a). We believe that this is because neutral expressions are quite idiosyncratic. Some people may always look angry given their facial configuration (e.g., the natural eyebrow raises of actor Jack Nicholson). In these situations, it becomes useful to compare the current image with a memorized or aggregated representation of the speaker's face. Our proposed multistage fusion approach can easily be extended to memory-based fusion methods.

From Table 4, we observe that increasing the number of stages $K$ increases the model's capability to model cross-modal interactions up to a certain point ($K = 3$ in our experiments). Further increases led to decreases in performance, and we hypothesize this is due to overfitting on the dataset.

Q3: To compare multistage against independent modeling of cross-modal interactions, we pay close attention to the performance comparison with respect to MARN, which models multiple cross-modal interactions all at once (see Table 5). RMFN shows improved performance, indicating that multistage fusion is both effective and efficient for human multimodal language modeling.

Q4: RMFN (no MFP) represents a system of LSTHMs without the integration of $\mathbf{z}_t$ from the MFP to model cross-modal interactions. From Table 5, RMFN (no MFP) is outperformed by RMFN, confirming that modeling cross-modal interactions is crucial in analyzing human multimodal language.

Q5: RMFN (no HIGHLIGHT) removes the HIGHLIGHT module from the MFP during multistage fusion. From Table 5, RMFN (no HIGHLIGHT) underperforms, indicating that highlighting multimodal representations using attention weights is important for modeling cross-modal interactions.

Visualizations

Using an attention assignment mechanism during the HIGHLIGHT process gives more interpretability to the model since it allows us to visualize the attended multimodal signals at each stage and time step (see Figure 3). Using RMFN trained on the CMU-MOSI dataset, we plot the attention weights across the multistage fusion process for three videos in CMU-MOSI. Based on these visualizations, we first draw the following general observations on multistage fusion:

Across stages: Attention weights change their behaviors across the multiple stages of fusion. Some features are highlighted by earlier stages while other features are used in later stages. This supports our hypothesis that RMFN learns to specialize in different stages of the fusion process.

Across time: Attention weights vary over time and adapt to the multimodal inputs. We observe that the attention weights are similar if the input contains no new information.
As soon as new multimodal information comes in, the highlighting mechanism in RMFN adapts to these new inputs.

Priors: Based on the distribution of attention weights, we observe that the language and acoustic modalities seem the most commonly highlighted. This represents a prior over the expression of sentiment in human multimodal language and is closely related to the strong connections between language and speech in human communication (Kuhl, 2000).

Inactivity: Some attention coefficients are not active (always orange) throughout time. We hypothesize that these corresponding dimensions carry only intra-modal dynamics and are not involved in the formation of cross-modal interactions.

Qualitative Analysis

In addition to the general observations above, Figure 3 shows three examples where multistage fusion learns cross-modal representations across three different scenarios.

Synchronized Interactions: In Figure 3(a), the language features are highlighted corresponding to the utterance of the word "fun" that is highly indicative of sentiment (t = 5). This sudden change is also accompanied by a synchronized highlighting of the acoustic features. We also notice that the highlighting of the acoustic features lasts longer across the 3 stages since it may take multiple stages to interpret all the new acoustic behaviors (elongated tone of voice and phonological emphasis).

Asynchronous Trimodal Interactions: In Figure 3(b), the language modality displays ambiguous sentiment: "delivers a lot of intensity" can be inferred as either positive or negative. We observe that the circled attention units in the visual and acoustic features correspond to the asynchronous presence of a smile (t = 2:5) and phonological emphasis (t = 3) respectively. These nonverbal behaviors resolve ambiguity in language and result in an overall display of positive sentiment. We further note the coupling of attention weights that highlight the language, visual and acoustic features across stages (t = 3:5), further emphasizing the coordination of all three modalities during multistage fusion despite their asynchronous occurrences.

Bimodal Interactions: In Figure 3(c), the language modality is better interpreted in the context of acoustic behaviors. The disappointed tone and soft voice provide the nonverbal information useful for sentiment inference. This example highlights the bimodal interactions (t = 4:7) in alternating stages: the acoustic features are highlighted more in earlier stages while the language features are highlighted increasingly in later stages.

Conclusion

This paper proposed the Recurrent Multistage Fusion Network (RMFN) which decomposes the multimodal fusion problem into multiple stages, each focused on a subset of multimodal signals. Extensive experiments across three publicly available datasets reveal that RMFN is highly effective in modeling human multimodal language. In addition to achieving state-of-the-art performance on all datasets, our comparisons and visualizations reveal that the multiple stages coordinate to capture both synchronous and asynchronous multimodal interactions. In future work, we are interested in merging our model with memory-based fusion methods since they have complementary strengths as discussed in subsection 5.1.
2018-08-12T10:04:45.000Z
2018-08-12T00:00:00.000
{ "year": 2018, "sha1": "a81a8cf811540be14f7840ef6939a3d7b901a8e3", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/D18-1014.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "a81a8cf811540be14f7840ef6939a3d7b901a8e3", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
15376979
pes2o/s2orc
v3-fos-license
Relativistic Landau problem at finite temperature

We study the zero temperature Casimir energy and fermion number for Dirac fields in a 2+1-dimensional Minkowski space-time, in the presence of a uniform magnetic field perpendicular to the spatial manifold. Then, we go to the finite-temperature problem with a chemical potential, introduced as a uniform zero component of the gauge potential. By performing a Lorentz boost, we obtain Hall's conductivity in the case of crossed electric and magnetic fields.

Introduction

The quantization of the Hall conductivity [1] is a remarkable quantum phenomenon, which occurs in two-dimensional electron systems, at low temperatures and strong perpendicular magnetic fields. Most proposed explanations for this phenomenon [2] rely on Schrödinger's one-particle theory and the introduction of a filling fraction of the Landau levels, which must be assumed to be integer to reproduce the observed behavior. It is the aim of this paper to show that, in the context of relativistic field theory, such behavior naturally arises, as a consequence of the spin-statistics theorem. In section 2 we present the theory of Dirac fields in 2+1 Minkowski space-time, interacting with a magnetic background field perpendicular to the spatial plane, and evaluate the vacuum expectation values of the energy and fermion density. Section 3 contains the generalities of the same theory in Euclidean 3-dimensional space, and presents the eigenvalues of the corresponding Dirac operator. From such eigenvalues, the partition function is evaluated in section 4. Section 5 contains the resulting free energy and mean particle density at finite temperature. Finally, in section 6 we perform an adequate Lorentz boost in order to consider the problem of fermions interacting with crossed electric and magnetic fields, and obtain the Hall conductivity.

Zero-temperature problem

We study a 2+1-dimensional theory of Dirac fields, in the presence of a uniform background magnetic field perpendicular to the spatial plane. We choose the metric $(-,+,+)$, natural units $\hbar = c = 1$, and adopt the following representation for the Dirac matrices: $\gamma^0_M = i\sigma_3$, $\gamma^1_M = \sigma_2$ and $\gamma^2_M = \sigma_1$. The Hamiltonian can be determined from the solutions of the Dirac equation $(i\gamma^\mu \partial_\mu - e\gamma^\mu A_\mu)\Psi = 0$, where $-e$ is the negative charge of the electron. We work in the Landau gauge, $A = (0, 0, Bx)$ with $B > 0$. Thus, after setting $\Psi(t, x, y) = e^{-iEt}\psi(x, y)$, we get the eigenvalue problem $H\psi = E\psi$. In order to solve the eigenvalue problem for this Hamiltonian, we take a plane-wave ansatz in the $y$ direction, $\psi(x, y) = e^{iky}\big(\varphi(x), \chi(x)\big)^T$. This leads to a system of coupled first-order equations for the two spinor components. After imposing that the eigenfunctions be well-behaved for $x \to \pm\infty$, we find two types of solutions to our problem:

1. A definite-chirality zero mode ($E_0 = 0$).

2. A set of eigenfunctions corresponding to the symmetric spectrum $E_n = \pm\sqrt{2neB}$, $n = 1, \dots, \infty$.

In all cases, the eigenfunctions can be written in terms of Hermite polynomials, and all eigenvalues exhibit the well known Landau degeneracy per unit area,

$$\Delta_L = \frac{eB}{2\pi}.$$

The vacuum expectation value of the energy per unit area, defined through a zeta function regularization (see, for example, [3] and references therein), is given by

$$E_{vac} = -\frac{1}{2}\sum_k |E_k| \;\longrightarrow\; -\frac{1}{2}\,\alpha \sum_k \left(\frac{|E_k|}{\alpha}\right)^{-s}\Bigg|_{s=-1}$$

(α is an arbitrary parameter with mass dimension, introduced to render the complex powers dimensionless), where the sum runs over the full spectrum, including the Landau degeneracy per unit area. In the present case, we have

$$E_{vac} = -\Delta_L\,\sqrt{2eB}\;\zeta_R\!\left(-\tfrac{1}{2}\right),$$

where $\zeta_R$ is the Riemann zeta function. Still within the zeta-function regularization framework, the fermion number is [4]

$$N = -\frac{1}{2}\,\eta(0) \pm \frac{1}{2}\,N_0,$$

where $\eta(0)$ measures the spectral asymmetry of the nonzero modes and $N_0$ is the contribution coming from zero modes. In our case, the nonvanishing spectrum is symmetric.
So, only the zero mode, which is charge self-conjugate, contributes. This gives as a result [4] a fermion number per unit area

$$N = \pm\frac{\Delta_L}{2} \qquad (5)$$

or, equivalently, for the vacuum expectation value of the charge density,

$$\langle j^0 \rangle = \mp\frac{e\,\Delta_L}{2}.$$

For the other non-equivalent representation of the gamma matrices in 2+1 dimensions, the spectrum doesn't change.

The theory at finite temperature with chemical potential

In order to study the effect of temperature, we go to Euclidean space, with the metric $(+,+,+)$. To this end, we take a corresponding representation of the Euclidean gamma matrices, and we will follow [5] in introducing the chemical potential as an imaginary $A_0 = -i\frac{\mu}{e}$ in Euclidean space. Thus, the partition function in the grand-canonical ensemble is given by the determinant of the Euclidean Dirac operator $D$. In order to evaluate it in the zeta regularization approach [6], we first determine the eigenfunctions, and the corresponding eigenvalues, of the Dirac operator, in the same gauge used in the previous section, i.e., we solve $D\psi = \omega\psi$. To satisfy antiperiodic boundary conditions in the $\tau$ direction, we propose an expansion in Matsubara modes proportional to $e^{i\lambda_l \tau}$, with $\lambda_l = \frac{(2l+1)\pi}{\beta}$, where $\beta = \frac{1}{T}$ is the inverse temperature. After doing so, and writing $\tilde{\lambda}_l = \lambda_l - i\mu$, we have, for each $k$ and $l$, two types of eigenvalues:

1. $\omega_l = \tilde{\lambda}_l$, with $l = -\infty, \dots, \infty$.

2. $\omega_{l,n} = \pm\sqrt{\tilde{\lambda}_l^{\,2} + 2neB}$, with $l = -\infty, \dots, \infty$ and $n = 1, \dots, \infty$.

Evaluation of the partition function at finite temperature and chemical potential

The partition function, in the zeta regularization scheme [3], is given by

$$\log Z = -\left.\frac{d}{ds}\right|_{s=0}\zeta(s), \qquad \zeta(s) = \sum_k \left(\frac{\omega_k}{\alpha}\right)^{-s}.$$

As in the previous section, α is a parameter with mass dimension, introduced to render the ζ-function dimensionless. We must consider two contributions to $\log Z$, $\Delta_1(\mu)$ and $\Delta_2(\mu, B)$, respectively coming from eigenvalues of type 1 and 2 in the previous section. In the rest of this section, we sketch the main steps in the analytic extension of both zeta functions and in the calculation of their $s$-derivatives (for a detailed presentation, see [7]).

The contribution $\Delta_1(\mu)$ can be evaluated at once for the whole µ-range. The analytic extension of $\zeta_1(s, \mu)$ can be achieved by splitting the sum over eigenvalues with positive and negative real parts (for a similar calculation, see [8]). Now, in order to write the second term as a Hurwitz zeta, we must relate the eigenvalues with negative real part to those with positive one without, in so doing, going through zeros in the argument of the power. Otherwise stated, we must select a cut in the complex ω plane [9]. This requirement determines a definite value of $(-1)^{-s}$, i.e., $(-1)^{-s} = e^{i\pi\,\mathrm{sign}(\mu)\,s}$. Taking this into account, we finally obtain the analytic extension of $\zeta_1(s, \mu)$, from which the contribution $\Delta_1(\mu)$ to $\log Z$ can be obtained (equation (16)).

The analytic extension of $\zeta_2(s, \mu, B)$ requires a separate consideration of different µ ranges. We study in detail two of these ranges.

4.1. $\mu^2 \leq 2eB$

Making use of the Mellin transform, $\zeta_2$ can be written in terms of the Jacobi theta function

$$\Theta_3(z, x) = \sum_{l=-\infty}^{\infty} e^{-\pi x l^2} e^{2\pi z l}.$$

To proceed, we use the inversion formula for the Jacobi function,

$$\Theta_3(z, x) = \frac{1}{\sqrt{x}}\, e^{\frac{\pi z^2}{x}}\, \Theta_3\!\left(-\frac{iz}{x}, \frac{1}{x}\right),$$

and perform the integration over $t$. From the resulting expression, the contribution $\Delta_2$ to the partition function can be readily obtained, since the factor accompanying $s$ is finite at $s = 0$. After making use of the series expansion in powers of $e^{-x}$ and performing the resulting sum over $l$, we obtain equation (20). Finally, adding the contributions given by equations (16) and (20), we get the partition function in the range $\mu^2 \leq 2eB$ (equation (21)).

4.2. $2eB < \mu^2 < 4eB$

As before, the contribution $\Delta_1(\mu)$ is unchanged. However, in this range of µ, the contribution to the zeta function due to $n = 1$ must be analytically extended in a different way. In fact, the expression cannot be written in terms of a unique Mellin transform, since its real part is not always positive (note, in connection with this, that, for $n = 1$, eq. (19) diverges).
Instead, it can be written as a product of two Mellin transforms or, after changing variables according to $t' = t - z$, $z' = t + z$, performing one of the integrals and the sum over $l$, as a single integral for $\zeta_{n=1}$. Now, the integral in this expression diverges at $z = 0$. In order to isolate such divergence, we add and subtract the first term in the series expansion of the Bessel function, thus getting the two pieces (25) and (26). The contribution of equation (25) to the partition function can be easily evaluated by noticing that the factor multiplying $s$ is finite at $s = 0$; this gives equation (27). In order to get the contribution coming from (26), the integral can be evaluated for $\Re s > 1$, which gives a result in terms of the Hurwitz zeta function $\zeta_H(s, x)$. Its contribution to the partition function can now be evaluated by using that

$$\zeta_H\!\left(0, \tfrac{1}{2}\Big(1 - \tfrac{i\mu\beta}{\pi}\Big)\right) + \zeta_H\!\left(0, \tfrac{1}{2}\Big(1 + \tfrac{i\mu\beta}{\pi}\Big)\right) = 0$$

and the well known value of $-\frac{d}{ds}\big|_{s=0}\,\zeta_H(s, x)$ [10], to obtain equation (29). Summing up the contributions in equations (16), (27) and (29), as well as the contribution coming from $n \geq 2$, evaluated as in the previous subsection, one gets for the partition function

$$\log Z = \Delta_L\left[\log\!\left(2\cosh\frac{\mu\beta}{2}\right) + \frac{|\mu|\beta}{2} + \dots\right]. \qquad (30)$$

At first sight, this result looks different from the one corresponding to $\mu^2 < 2eB$ (equation (21)). However, it is easy to see that both expressions coincide. The advantage of using expression (30) for this range of µ is that the zero-temperature limit is explicitly isolated from the finite-temperature corrections.

Free energy and particle number

From equations (21) and (30), the free energy per unit area ($F = -\frac{1}{\beta}\log Z$) can be obtained¶. Moreover, the free energy is continuous at $\mu^2 = 2neB$, $n = 0, \dots, \infty$. In the low-temperature limit, one recovers the Casimir energy obtained in section 2, even for $\mu \neq 0$, as long as µ stays in this range, i.e., for µ less than the first Landau level, if positive, or greater than minus the first Landau level, if negative.

The mean particle density, on the other hand, can be obtained as $N = \frac{1}{\beta}\frac{d}{d\mu}\log Z$. For nonzero temperature and arbitrary µ (not coinciding with an energy level⁺), this derivative can be taken term by term. It is interesting to note that, for $\mu = 0$, one has $N(\mu = 0) = \pm\frac{\Delta_L}{2}$, which shows that, in the absence of chemical potential, the fermion number obtained in equation (5) remains unaltered as the temperature grows. On the other hand, for nonvanishing µ, the low-temperature limit differs, depending on the µ-range considered:

$$N\big(2eBn < \mu^2 < 2eB(n+1)\big) \;\xrightarrow[\beta\to\infty]{}\; n\,\Delta_L\,\mathrm{sign}(\mu), \qquad n = \left\lfloor \frac{\mu^2}{2eB} \right\rfloor.$$

¶ Consistently with the footnote in section 2, all the results in this section are independent from the representation of the gamma matrices chosen.

⁺ Note that, for instance, if $\mu = \sqrt{2neB}$, the series in equation (19) converges only conditionally, and its term-by-term derivative leads to a divergent series.

This result is nothing but the expected one for particles with the statistics of fermions, since relativistic field theory naturally leads to the spin-statistics theorem. At zero temperature, µ is nothing but the Fermi energy; for example, for $\mu > 0$, as µ grows past a Landau level, such level becomes entirely filled.

Final comments

From the previous result, the mean value of the particle density at zero temperature can be obtained. After recovering units, one has

$$j^0\big(2ec^2\hbar Bn < \mu^2 < 2ec^2\hbar B(n+1)\big) = -\frac{n\,c\,e^2 B}{h}\,\mathrm{sign}(\mu),$$

the other two components of the current density tri-vector being equal to zero in the absence of an electric field.
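To make the zero-temperature filling pattern concrete, here is a small numerical illustration (ours, not code from the paper): it tabulates the number of filled Landau levels $n = \lfloor \mu^2/2eB \rfloor$ together with the particle density and Hall conductivity plateaus quoted above. The unit choice $2eB = 1$ and $\Delta_L = 1$ is an assumption made purely for display.

```python
import numpy as np

TWO_EB = 1.0    # illustrative units: 2eB = 1, so level n sits at |E_n| = sqrt(n)
DELTA_L = 1.0   # Landau degeneracy per unit area, set to 1 for display

def filled_levels(mu):
    """n = floor(mu^2 / 2eB): completely filled Landau levels at T = 0.
    (Undefined exactly at mu^2 = 2neB, as noted in the footnote above.)"""
    return int(np.floor(mu**2 / TWO_EB))

def particle_density(mu):
    """Zero-temperature limit N -> n * Delta_L * sign(mu)."""
    return filled_levels(mu) * DELTA_L * np.sign(mu)

def hall_conductivity(mu):
    """sigma_xy = -n e^2/h sign(mu), returned in units of e^2/h."""
    return -filled_levels(mu) * np.sign(mu)

for mu in [-2.1, -1.2, -0.5, 0.5, 1.2, 2.1]:
    print(f"mu = {mu:+.1f}: n = {filled_levels(mu)}, "
          f"N/Delta_L = {particle_density(mu):+.0f}, "
          f"sigma_xy = {hall_conductivity(mu):+.0f} e^2/h")
```

The output steps through the plateaus n = 0, 1, 4 as |µ| crosses successive levels, reproducing integer quantization without any filling-fraction assumption.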
Now, the zero-temperature limit of the same tri-vector in the presence of crossed homogeneous electric ($F'$) and magnetic ($B'$) fields can be retrieved, for $F' < cB'$, by performing a Lorentz boost with velocity of absolute value $\frac{F'}{B'}$. Suppose, for definiteness, that the homogeneous electric field points along the positive $y$ axis. Then, the velocity of the Lorentz boost must point along the negative $x$ axis, and the transformation gives as a result

$$j'^{\,0} = \gamma\, j^{\,0}, \qquad j'^{\,x} = -\frac{n e^2}{h}\, F'\,\mathrm{sign}(\mu), \qquad j'^{\,y} = 0,$$

with $\gamma = \left(1 - \frac{F'^2}{c^2 B'^2}\right)^{-1/2}$. As a consequence, the quantized zero-temperature Hall conductivity is

$$\sigma_{xy} = -\frac{n e^2}{h}\,\mathrm{sign}(\mu).$$

Finally, we mention that the more realistic case of massive fermions is at present under study [11].
2014-10-01T00:00:00.000Z
2005-11-09T00:00:00.000
{ "year": 2005, "sha1": "6fd38816a3c0e87c383e58fe3c68f7cbe6a7401e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0511108", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "15c4b3728676375bad7d96f71227c9cde021fb91", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
256832033
pes2o/s2orc
v3-fos-license
Spatio-temporal dynamics enhance cellular diversity, neuronal function and further maturation of human cerebral organoids

Bioengineered and fully matured human brain organoids stand as highly valuable three-dimensional in vitro brain-mimetic models to recapitulate in vivo brain development as well as neurodevelopmental and neurodegenerative diseases. Various instructive signals affecting multiple biological processes, including morphogenesis, developmental stages, cell fate transitions, cell migration, stem cell function and immune responses, have been employed for the generation of physiologically functional cerebral organoids. However, the current approaches for maturation require improvement to yield highly harvestable and functional cerebral organoids with reduced batch-to-batch variability. Here, we demonstrate two different engineering approaches, the rotating cell culture system (RCCS) microgravity bioreactor and a newly designed microfluidic platform (µ-platform), to improve harvestability, reproducibility and the survival of high-quality cerebral organoids, and compare them with traditional spinner and shaker systems. RCCS and µ-platform organoids reached ideal sizes, approximately 95% harvestability and prolonged culture times with Ki-67+/CD31+/β-catenin+ proliferative, adhesive and endothelial-like cells, and exhibited enriched cellular diversity (abundant neural/glial/endothelial cell populations), structural brain morphogenesis, further functional neuronal identities (glutamate-secreting glutamatergic, GABAergic and hippocampal neurons) and synaptogenesis (presynaptic-postsynaptic interaction) during whole human brain development. Both organoid types expressed CD11b+/IBA1+ microglia and MBP+/OLIG2+ oligodendrocytes at high levels as of day 60. RCCS and µ-platform organoids showing a high level of physiological fidelity can serve as functional preclinical models to test new therapeutic regimens for neurological diseases and benefit from multiplexing.

1. Cell composition variability is known to be a limitation of some human brain organoid protocols. In this study, the authors refer to reproducibility in terms of organoid size, which does not reveal much about the quality of the differentiation and cell composition. Therefore, conclusions about reproducibility should take other parameters into consideration, such as cellular composition and cellular organization, instead of just organoid size.

2. Overall presentation and description of data should be significantly improved. There is insufficient characterization that limits interpretation and conclusions. Moreover, the proportions of cell types and protein expression patterns presented in the study are inconsistent with what is known in the field. These data should be revised and validated more carefully. Below are some examples that require careful revision:

a) Brightfield images (Fig. 2) and immunostainings throughout the manuscript raise concerns about the organoid differentiation quality. It is very hard to assess the quality of differentiation by these images and it is not very clear what information the authors want to convey with the H&E staining (Fig. 2). Additionally, the transparent regions observed in organoids from most conditions resemble cystic formation associated with off-target differentiation. Therefore, the authors should demonstrate that the organoids are indeed cortical organoids. This can be shown by high-quality images demonstrating a high proportion of FOXG1+ cells.
FOXG1 is a forebrain marker that is expressed early during cortical development (PMID: 18983967, PMID: 24277810, PMID: 23995685). Moreover, the authors should include high-quality images of other standard markers to confirm successful cortical differentiation (e.g. EMX1, PAX6, SOX2, TBR1, CTIP2, SATB2), as well as verify the presence of rosette and neural tube-like structures at multiple timepoints.

b) The authors claim that both astrocytes (GFAP+) and neurons (NEUN+) are present since day 60 of organoid development. Previous studies with human brain organoids have demonstrated that differentiation in these in vitro 3D models recapitulates endogenous development, where gliogenesis happens after neurogenesis. In one of these papers, astroglia are not seen before 6 months of development in vitro (PMID: 28445462). It is possible that modifications done by the authors to the protocol change this pattern, but it is recommended that higher magnification images, including more markers per cell type, are included to better illustrate the presence of these cells in their models.

c) The authors claim presence of cell types such as microglia and oligodendrocytes that are usually not present in most protocols. It is also recommended to include higher magnification images of these cells with more markers (microglia: IBA1, P2RY12, TMEM119; oligodendrocytes: MBP, OLIG2, PLP) in order to validate their presence in the organoids.

d) Staining for the postsynaptic marker, PSD95, seems unspecific in the representative images and would require co-staining with a presynaptic marker and imaging using a confocal microscope.

e) KI67 staining should be nuclear; however, in Fig. 6a the staining seems to be outside of the nucleus. Moreover, specifically in this panel, representative organoids depicted in the RCCS and µ-platform raise concern about the quality of differentiation, as the broad morphology seems cystic and irregular.

f) Font size in Fig. 1 should be increased in order to be legible. Fig. 3: i. typos (GFAB, migrolglia); ii. qRT-PCR heatmap: it is not clear how the different markers vary over time. The way it is presented makes it look like some markers that are expected later in development (S100B) are expressed early and downregulated later on; iii. Specific cell marker (fold change plot): it is unclear what data are being plotted in this graph and what the final conclusion related to these data is.

g) Other examples within.

3. There are multiple overstatements and generalizations throughout the text that need to be revised by the authors. See some examples below:

a) "Conventional cerebral organoid maturation approaches using spinners10 and shakers11, promote self-neuronal organization and formation of cerebral ventricular spaces by providing agitation. However, these methods have major limitations, including altered nutrient diffusion and apoptotic zones due to lack of vascularization12, immune cell deficiency, matrigel dependency, low reproducibility, scalability and high variability of the induced brain components and cells13." This statement should be rephrased as some of the limitations mentioned (e.g. apoptotic zones, lack of reproducibility) have already been addressed and overcome in prior studies (including the ones cited in this sentence).

b) The authors should be particularly careful about claiming "molecular and functional maturation of cerebral brain organoids". Not enough data is provided to make these claims and some of the available data is not of sufficient quality.
For instance, colocalization of the postsynaptic marker, PSD95, and a presynaptic marker, demonstrated by confocal microscopy, would suggest that there are functional synapses present in the organoids, but would still be insufficient for concluding functional maturation of these organoids.

c) In the Discussion section there is a series of overstatements: i. "(1) enriched cellular diversity (abundant neural/glial/ endothelial cell population)" > deeper characterization with higher quality images and more markers should be done to conclude this; ii. "(2) structural brain morphogenesis (complex cortical layers, radia-glial organization, preplate splitting)" > no clear demonstration of brain morphogenesis; indeed, the lack of complex structure in brain organoids is one of the limitations of the field. iii. "further functional neuronal identities (glutamatergic and hippocampal neurons) and synaptogenesis (postsynaptic interaction) during whole human brain development" > further analysis using more sophisticated technologies would be necessary to validate this claim. iv. "GFAP+/S100B+ astrocytes and NEUN+/MAP2+ mature neurons were abundant on day 60 in RCCS organoids richest in cell diversity, both RCCS and µ-platform organoids expressed CD11b+/IBA1+ microglia and MBP+/OLIG2+ oligodendrocytes at high levels as of day 60." > further characterization with more markers and higher quality images is necessary to support this claim. v. "RCCS organoids with enormous cell architectures, exhibited excellent cortical plate and SVZ architecture at day 120, where TUJ+ cells located at the basal side of the ventricle-like structures with TBR1+ cells located in the deep layer of cortical-like plates as well as well-defined progenitor zone organization, neural identity and further neuronal differentiation with NESTIN+/TBR2+/PAX6+ and SOX1+/SOX2+ cells." > it is not clear what the authors are referring to. vi. "On the other hand, N-CAD+ apical epithelial and TTR+ choroid plexus epithelial cells were populated in RCCS organoids as of day 60, while both FOXG1+ forebrain, KROX20+ hindbrain and PROX1+ hippocampus related to midbrain and hindbrain identities were observed in µ-platform organoids at day 120." > further characterization with more markers and higher quality images is necessary to support this claim.

Minor concerns:

1. The authors raise a valid point that hemodynamic forces are underexplored in the brain organoid field, despite the known biological relevance of mechanosensitive signals associated with cerebrospinal fluid flow during development. However, the authors do not explore this aspect in their data or discussion. It would be interesting to tie their data back to what is known about this in the neurodevelopmental field.

2. The authors hypothesize that microgravity and gravity-driven laminar flows enhance maturation of cerebral organoids recapitulating features of human embryonic cortical development. It would be interesting to discuss the findings that support this hypothesis.

3. "In terms of tissue structure, while the most dispersed organoid structure was observed in static culture and the most uniform distribution was observed in the shaker, the well-defined multipatterned organoid morphologies with self-organization and desired cellular architectures in the innerouter regions6,13,28 were observed in the RCCS microgravity bioreactor, followed by µ-platform and spinner systems (Fig. 2b,c,d,e,f)." It is not clear which parameters the authors used to make the latter claim. How was this quantified?
What does "neurons at different polarized states" mean? 5. Authors mentioned the presence of "cortical plate" in their organoids, but this is not clear from the images provided. 6. "Paired box protein 6 (PAX6) is a transcription factor expressed in neural stem cells and ventral forebrain radial glial progenitor cells. Although expressed in all brain regions, the highest expression level is reported at the cerebellum, which is involved in glial cell differentiation55." This sentence is confusing: it is unclear the relationship between glial cell differentiation and the cerebellum. 7. "CTIP2, SATB2, and TBR1 were upregulated at protein and RNA levels which indicated cortical neuronal differentiation in both RCCS and µ-platform organoids mainly on day 120 (Fig. 4, Supp. Fig. 3), supporting the spatial distribution and functional organization of cortical neurons61." It has been demonstrated in 3D brain organoid cultures that generation of the different cortical neuron subtypes is not necessarily accompanied by structural/functional organization in the stereotypical layers of the human cerebral cortex (PMID: 24277810). Therefore, this sentence should be revised. 8. In the methods, the authors mentioned that some modifications were done to the organoid protocol, but they do not mention what was done. Reviewer #2 (Remarks to the Author): Saglam-Metiner et al. reported that RCCS microgravity bioreactor and μ-platform presented improvement of the harvestability, scalability, reproducibility, and survival of cerebral organoids, as well as functional maturation. It is assumed that comparing these various 3D culture methods with each other required a lot of effort, and the results should provide important information to the scientific research field. However, their findings are just observations, and quantitative data are lacking. Statistical analysis of quantitative data is a basic requirement for scientific consideration. Furthermore, there are no data to allow the evaluation of physiological function, although the authors mentioned that their organoids have achieved functional maturity. There remain many issues to be resolved before reaching the conclusion the authors proposed. Major points: 1. The authors should add the quantitative data to explain the improvement of harvestability, scalability, reproducibility, and survival of cerebral organoids to compare the respective culture conditions, and they should present the results of statistical analysis. 2. Although the authors described organoid-maturation, they only presented immunostaining and western blot data to show protein expression related to synapses such as PSD95. The authors should present the data of physiological analysis to show that their organoids have acquired functional maturation as neurons. 3. Although the authors mentioned that a layered structure of brain has formed, the figures showed no such structure but only a scattering of cells expressing markers for the cortical layers. The authors should correct the description. 4. The authors should provide the details of the equipment, and especially the μ-platform that was reported as being newly constructed in this paper. 5. The authors should compare the cell-type characteristics and maturation status in single-cell resolution methods like single-cell RNA sequence or detailed IFC investigation because organoid contains various types of brain cells. Quantification of these data will be helpful for understanding the beneficial points of the presented platform. 6. 
6. The authors attempted to characterize the different culture platforms by hydrodynamic analysis. This analysis clarifies how the medium moves and can be considered useful data. However, what this analysis reveals is limited to phenomena outside of the organoids, such as the distribution of shear stress. This analysis unfortunately does not help us to understand what physical effects occur inside individual organoids and, as a result, affect differentiation and maturation. Care should be taken so as not to overstate this point, and to improve the description to avoid misunderstandings.

Minor points:

1. Signal/background ratios of the images are not sufficient. The settings used when acquiring images should be revised.

2. The shape and marginal border of the organoids are irregular, and these organoids appear to be dying in Figures 3-6. The authors should investigate and describe what is happening, e.g. cellular migration, cell death, unique differentiation, etc.

3. In Figure 5b, PROX1 appears to highlight the thalamus and cerebellum. Is that correct?

Journal: Communications Biology
Manuscript #: COMMSBIO-22-1668-T
Title of Paper: Spatio-temporal dynamics enhance cellular diversity, neuronal function and further maturation of human cerebral organoids

Thank you for giving us the opportunity to submit our revised manuscript titled "Spatio-temporal dynamics enhance cellular diversity, neuronal function and further maturation of human cerebral organoids" to Communications Biology. We appreciate the valuable time and effort the editor and reviewers have spent on our manuscript. We have revised our paper based on the insightful comments of the reviewers and highlighted the changes in the manuscript. We hope that the revised manuscript meets the journal's publication requirements.

Reviewer #1 (Remarks to the Author):

In this manuscript, Saglam-Metiner et al. describe two novel engineering approaches (RCCS and u-platform) with the goal to improve harvestability, scalability, reproducibility and survival of human brain organoids during long term cultures. The authors start by elegantly demonstrating that the proposed models (RCCS and u-platform) exhibit less shear stress when compared to traditional cultures on shakers or spinner flasks. Moreover, the shear stress distribution in RCCS and u-platform is a lot more homogeneous than in the currently available systems (shakers and spinners). These novel methods offer a great potential to further improve the quality and durability of human brain organoids in vitro, which have been negatively impacted by the limitations associated with current culture conditions. Despite the great potential, the manuscript needs significant improvement in the characterization of differentiation and maturation of cells within the human brain organoids to conclusively assess the value of these approaches to advance the field.

Thanks for the insightful comments, which helped to increase the quality of the manuscript. Please find below our point-by-point responses to each of the comments raised.

Major concerns:

1. Cell composition variability is known to be a limitation of some human brain organoid protocols. In this study, the authors refer to reproducibility in terms of organoid size, which does not reveal much about the quality of the differentiation and cell composition. Therefore, conclusions about reproducibility should take other parameters into consideration, such as cellular composition and cellular organization, instead of just organoid size.
Therefore, conclusions about reproducibility should take other parameters into consideration, such as cellular composition and cellular organization, instead of just organoid size. We appreciate the reviewer's suggestion and agree that there should be additional data to strengthen the conclusions of the study. Therefore, we performed fluorescence-activated cell sorting (FACS) analysis of microglia and astrocytes from the RCCS organoids to support the cellular composition in the revised text as Fig. 3d. Method details are given as follows: Initially, for single-cell suspension from RCCS organoids, three organoids were freshly harvested from vessels, collected into a tube, washed with 1x PBS and immersed in 1 mL DMEM/F12 containing papain (18.6 U/mL, P3125) and DNase I (337 U/mL, EN0521, Thermo Fisher) at 37°C for 30 min on a shaker. It was mechanically pipetted and vortexed at ten-minute intervals under sterile conditions. Next, 2% FBS was added to stop the enzymatic reaction and the resulting single-cell suspension was centrifuged at 400 rcf for 5 min. An additional incubation was made in PBS buffer (containing 2 mM EDTA, 1% FBS and 337 U/mL DNase I, pH 7.4) at RT for 15 min, and single cells were passed through a 70 µm cell-strainer. Then, single cells were labeled with CD45 and CD11b primary antibodies for microglia sorting and were labeled with GFAP primary antibody for astrocyte cell sorting (Supp. Additionally, we added a graph to highlight the harvestability results in Fig. 2b and detailed all of the results in the text under the heading "Laminar flow increases reliable production of high-quality organoids with the highest harvestability, reproducibility and reduced batch-to-batch variabilities" as follows: On the 15th day of culture, generated organoids (derived from two different passage numbers of iPSCs at two different times) were transferred to RCCS, µ-platform, spinner and shaker, along with static culture as the control group (considered as day 0 of maturation), in order to determine the best dynamic system for physical and functional maturation of cerebral organoids, which were cultured for 120 days. Organoid size distribution variability is known to be a limitation of traditional human brain organoid protocols, as well as cellular composition and organization. Therefore, organoid reproducibility and batch-to-batch variability were first examined in terms of harvestability, macroscopic-microscopic observations and organoid size distribution (Fig. 2b,c,d,e,f,g). As such, cellular composition and cellular organization were assessed with cell sorting, immunostaining, western blot, qRT-PCR, TUNEL and glutamate secretion analyses for organoids sampled on various days (days 30, 60 and 120). 2. Overall presentation and description of data should be significantly improved. There is insufficient characterization, which limits interpretation and conclusions. Moreover, the proportions of cell types and protein expression patterns present in the study are inconsistent with what is known in the field. These data should be revised and validated more carefully. Below are some examples that require careful revision: a) Brightfield images (Fig. 2) and immunostainings throughout the manuscript raise concerns about the organoid differentiation quality. It is very hard to assess the quality of differentiation by these images and it is not very clear what information the authors want to convey with the H&E staining (Fig. 2).
Additionally, the transparent regions observed in organoids from most conditions resemble cystic formation associated with off-target differentiation. Therefore, the authors should demonstrate that the organoids are indeed cortical organoids. This can be shown by high-quality images demonstrating a high proportion of FOXG1+ cells. FOXG1 is a forebrain marker that is expressed early during cortical development (PMID: 18983967, PMID: 24277810, PMID: 23995685). Moreover, the authors should include high-quality images including other standard markers to confirm successful cortical differentiation (e.g. EMX1, PAX6, SOX2, TBR1, CTIP2, SATB2), as well as verify the presence of rosette and neural tube-like structures at multiple timepoints. The quality of all figures has been increased and revised in the illustration program. Clearer information about the H&E staining (images at small magnification were removed to avoid misunderstanding) and the transparent regions was added to the text, with white arrows indicating organized cell nuclei and neural rosette-like structures in Fig. 2c,d,e,f,g. As well as the N-CAD/FOXG1 image replacement for RCCS organoids in Fig. 4a, a new staining was carried out and high-quality images demonstrating a high proportion of FOXG1+ and PAX6+ cells were added to assess the quality of differentiation in Fig. 6a. Also, white arrows were added to indicate TBR1+/PAX6+/SOX2+/FOXG1+ and DAPI+ neural rosette and neural tube-like structures in Fig. 4a, 6a,b and Supp. Fig. 2. b) The authors claim that both astrocytes (GFAP+) and neurons (NEUN+) are present since day 60 of organoid development. Previous studies with human brain organoids have demonstrated that differentiation in these in vitro 3D models recapitulates endogenous development, where gliogenesis happens after neurogenesis. In one of these papers, astroglia are not seen before 6 months of development in vitro (PMID: 28445462). It is possible that modifications done by the authors to the protocol change this pattern, but it is recommended that higher magnification images, including more markers per cell type, are included to better illustrate the presence of these cells in their models. Thanks for pointing this out. The quality of all figures has been increased and revised in the illustration program. The GFAP+/S100B+ astrocytes and NEUN+/MAP2+ mature neurons were examined in detail by immunofluorescence, western blot or qRT-PCR analysis. Additionally, we performed FACS analysis of GFAP+ astrocytes from the RCCS organoids to further support the cellular composition and added the results to the revised text as Fig. 3d, with a 1.3%±0.4 GFAP+ cell sorting rate. c) The authors claim presence of cell types such as microglia and oligodendrocytes that are usually not present in most protocols. It is also recommended to include higher magnification images of these cells with more markers (microglia: IBA1, P2RY12, TMEM119; oligodendrocytes: MBP, OLIG2, PLP) in order to validate their presence in the organoids. We added the following paragraph to the text: To further interrogate the presence of microglial cells, the ratio of CD45/CD11b double positive cells in RCCS organoids was found to be 1.3%±0.4 with FACS analysis, indicating the positive effect of microgravity on mesodermal-microglial cell differentiation (Fig. 3d). d) Staining for the postsynaptic marker, PSD95, seems unspecific in the representative images and would require co-staining with a presynaptic marker (GABA-A) and imaging using a confocal microscope. Thanks for the suggestion.
Besides postsynaptic PSD95 and presynaptic vGLUT1 staining, we additionally performed immunofluorescence staining with the presynaptic marker GABA-A to indicate GABAergic interneurons in organoids, and obtained high-quality images under the confocal microscope. GABA-A immunofluorescence staining is given in the results section in Fig. 5a, and details about GABA-A are provided in the results section as follows. Moreover, western blot analysis was performed to support the results in Fig. 5c. During early corticogenesis, bursts of action potentials cause spreading of giant waves of calcium influxes through the developing cortex, which is described as giant depolarizing potentials (GDPs), depending both on excitatory glutamate and gamma-aminobutyric acid (GABA) inputs. Glutamate-dependent GDP-like events were reported in neuronal organoids. Reduced GDPs in >40-day organoids with the GABA polarity switch were regarded as indicators of progressive neuronal network maturation 78. On the other hand, the glutamate level secreted from microglia was measured at around 20 µM in healthy in vitro neuronal cultures 79,80. Given that, the change in the uptake, and consequently the release rate, of extracellular glutamate in 60-day RCCS and µ-platform organoids and the growth in organoid sizes might be associated with more mature states. One of the distinguishing features of neuronal circuit formation and maturation is the switch of excitatory to inhibitory GABAergic neurotransmission. GABA-A, a presynaptic GABAergic interneuron receptor that modulates neurotransmitter release in both peripheral and central synapses 7,78,81-83, was more expressed in both RCCS and µ-platform organoids with CTIP2+ deep-layer cortical neurons as of day 60 (Fig. 5a,b,c). Thus, the neuronal network complexity was validated in RCCS and µ-platform organoids with the presence of both presynaptic glutamatergic VGLUT1+ and GABAergic GABA-A+ neurons, which are critically involved in neuronal network oscillations, as well as postsynaptic PSD95+ neurons, GFAP+/S100B+ astrocytes and MBP+/OLIG2+ oligodendrocytes. e) KI67 staining should be nuclear; however in Fig. 6a the staining seems to be outside of the nucleus. Moreover, specifically in this panel, representative organoids depicted in the RCCS and u-platform raise concern about the quality of differentiation, as broad morphology seems cystic and irregular. Thanks for pointing this out, we agree that the Ki-67 staining should be nuclear. Therefore, we repeated Ki-67 immunofluorescence staining for all groups and obtained images that resulted in more nuclear staining than the previous stainings. We revised the Ki-67 images in Fig. 6a, and the quality of all figures has been increased and revised in the illustration program. Additionally, to support the immunofluorescence staining results, western blot and qRT-PCR analyses were performed for the Ki-67 marker, which can be found in Fig. 6b,c and were added to the text. f) Font size in Fig. 1 should be increased in order to be seen. The font size in Fig. 1 was increased so that it can be seen. g) Other examples within Fig. 3 i. typos (GFAB, migrolglia) Thanks, we corrected all typos throughout the text. ii. qRT-PCR heatmap: it is not clear how the different markers vary over time. As presented, it looks as though some markers expected later in development (S100B) are instead expressed early and downregulated later on. The qRT-PCR heatmap illustrations were revised with system-based grouping to make their changes over time explicit.
The heatmaps were also colored with a monochromatic gradient that increases in intensity from low to high expression to illustrate the expression changes more clearly. iii. Specific cell marker (fold change plot): it is unclear what data is being plotted in this graph and what is the final conclusion related to this data. Thanks, the mentioned graph was removed to avoid misunderstanding. 3. There are multiple overstatements and generalizations throughout the text that need to be revised by the authors. See some examples below: a) "Conventional cerebral organoid maturation approaches using spinners10 and shakers11, promote self-neuronal organization and formation of cerebral ventricular spaces by providing agitation. However, these methods have major limitations, including altered nutrient diffusion and apoptotic zones due to lack of vascularization12, immune cell deficiency, matrigel dependency, low reproducibility, scalability and high variability of the induced brain components and cells13." This statement should be rephrased as some of the limitations mentioned (e.g. apoptotic zones, lack of reproducibility) have already been addressed and overcome in prior studies (including the ones cited in this sentence). Thank you for the suggestion; the sentence was rephrased as follows: However, these methods have major limitations, including altered nutrient diffusion and apoptotic zones due to lack of vascularization, immune cell deficiency, matrigel dependency, low reproducibility, scalability and high variability of the induced brain components and cells 12. Different strategies have been developed to prevent these limitations, such as co-culturing with human umbilical vascular endothelial cells (HUVECs) 13 and human iPSC-derived endothelial cells, not only to enhance oxygen/nutrient diffusion but also to leverage neural differentiation, migration and circuit formation during development 14. b) The authors should be particularly careful about claiming "molecular and functional maturation of cerebral brain organoids". Not enough data is provided to make these claims and some of the available data is not of sufficient quality. For instance, colocalization of the postsynaptic marker, PSD95, and a presynaptic marker, demonstrated by confocal microscopy, would suggest that there are functional synapses present in the organoids, but would still be insufficient for concluding functional maturation of these organoids. We appreciate the reviewer's insightful suggestion and agree with this. Thus, we performed additional glutamate analysis to examine the molecular and functional maturation of RCCS and µ-platform organoids, as depicted in Fig. 5d; details are given in the methods and results sections as follows. To ascertain advanced maturation of RCCS and µ-platform organoids, we examined the release of glutamate. For this, samples of the medium (used for one week) were collected on day 60 of the maturation process and analyzed with a colorimetric glutamate assay kit (MAK330-1K, Sigma-Aldrich), according to the manufacturer's instructions. Samples of fresh cerebral organoid differentiation medium were also run as negative controls. VGLUTs are responsible for the vesicular accumulation of glutamate, the major excitatory neurotransmitter that plays critical roles in neuronal signaling and cortical development, which is loaded into synaptic vesicles within presynaptic terminals before undergoing regulated release at the synaptic cleft 76,77.
Therefore, we decided to examine the cumulative glutamate levels in maturation medium used for one week by RCCS and µ-platform organoids at days 30 and 60 to evaluate further maturation. Significant time-dependent increases were noted in glutamate levels for both groups (p<0.0001 and p<0.01, respectively), and even more so in RCCS organoids, at 165±2.8 µM (p<0.001) (Fig. 5d). During early corticogenesis, bursts of action potentials cause spreading of giant waves of calcium influxes through the developing cortex, which is described as giant depolarizing potentials (GDPs), depending both on excitatory glutamate and gamma-aminobutyric acid (GABA) inputs. Glutamate-dependent GDP-like events were reported in neuronal organoids. Reduced GDPs in >40-day organoids with the GABA polarity switch were regarded as indicators of progressive neuronal network maturation 78. On the other hand, the glutamate level secreted from microglia was measured at around 20 µM in healthy in vitro neuronal cultures 79,80. Given that, the change in the uptake, and consequently the release rate, of extracellular glutamate in 60-day RCCS and µ-platform organoids and the growth in organoid sizes might be associated with more mature states. Besides the postsynaptic PSD95 and presynaptic vGLUT1 staining, we newly performed immunofluorescence staining with the presynaptic marker GABA-A to indicate GABAergic interneurons in organoids, and obtained high-quality images under the confocal microscope, which are depicted in Fig. 5a. Additionally, western blot analysis was performed to support the results, which can be seen in Fig. 5c. c) In the Discussion section there is a series of overstatements: i. "(1) enriched cellular diversity (abundant neural/glial/endothelial cell population)" > deeper characterization with higher quality images and more markers should be done to conclude this; Thanks for your comments. Besides rephrasing the sentences in the discussion section, more in-depth characterization was provided in the revised text with high-quality images and FACS, IF, WB and qRT-PCR analyses, as mentioned above. ii. "(2) structural brain morphogenesis (complex cortical layers, radial-glial organization, preplate splitting)" > no clear demonstration of brain morphogenesis; indeed, the lack of complex structure in brain organoids is one of the limitations of the field. A clear demonstration was provided with a schematic illustration of the cellular organization of matured cerebral organoids in Fig. 4b. Additionally, high-quality confocal images, neural rosette tube-like structures indicated by white arrows and CP layer borders indicated by yellow dashed lines were added to the text. iii. "further functional neuronal identities (glutamatergic and hippocampal neurons) and synaptogenesis (postsynaptic interaction) during whole human brain development" > further analysis using more sophisticated technologies would be necessary to validate this claim. For further analysis, glutamate measurements and additional IF and WB analyses for the GABA-A marker were carried out to validate the functional maturation of organoids. iv. "GFAP+/S100B+ astrocytes and NEUN+/MAP2+ mature neurons were abundant on day 60 in RCCS organoids richest in cell diversity, both RCCS and µ-platform organoids expressed CD11b+/IBA1+ microglia and MBP+/OLIG2+ oligodendrocytes at high levels as of day 60." > further characterization with more markers and higher quality images are necessary to support this claim.
In addition to IF, WB and qRT-PCR analysis, we performed further characterization of cells with FACS analysis. New results have been added to the relevant sections in the revised text. v. "RCCS organoids with enormous cell architectures, exhibited excellent cortical plate and SVZ architecture at day 120, where TUJ+ cells located at the basal side of the ventricle-like structures with TBR1+ cells located in the deep layer of cortical-like plates as well as well-defined progenitor zone organization, neural identity and further neuronal differentiation with NESTIN+/TBR2+/PAX6+ and SOX1+/SOX2+ cells." > it is not clear what the authors are referring to. The sentences were rephrased in the discussion section, and a clear demonstration was provided with the schematic illustration of the cellular organization of matured cerebral organoids in Fig. 4b, as mentioned above. vi. "On the other hand, N-CAD+ apical epithelial and TTR+ choroid plexus epithelial cells were populated in RCCS organoids as of day 60, while both FOXG1+ forebrain, KROX20+ hindbrain and PROX1+ hippocampus related to midbrain and hindbrain identities were observed in µ-platform organoids at day 120." > further characterization with more markers and higher quality images are necessary to support this claim. Thank you, we performed further characterization with higher quality revised images and more markers (β-catenin) and new IF, WB and qRT-PCR analyses (for FOXG1). Furthermore, new β-catenin marker results have been added to the revised text as follows: Vascularization in cerebral organoids was reported to up-regulate Wnt/β-catenin signaling and increase the Ki-67 cell proliferation marker. Endothelial-like cells in vascularized organoids signal to neural stem cells, regulate their self-renewal and differentiation into neurons during CNS development 87. Additionally, adherens junctions control vRG cells' self-renewal, proliferation, differentiation and survival via active Wnt/β-catenin/N-cadherin signaling 88. We also found that both protein and RNA expression levels of β-catenin were more elevated in microgravity-driven RCCS organoids and low shear stress-induced µ-platform organoids on day 120 (Fig. 6a,b). Minor concerns: 1. The authors raise a valid point that hemodynamic forces are underexplored in the brain organoid field, despite the known biological relevance of mechanosensitive signals associated with cerebrospinal fluid flow during development. However, the authors do not explore this aspect in their data or discussion. It would be interesting to tie their data back to what is known about this in the neurodevelopmental field. Following the reviewer's suggestion, we have made the results and discussion sections more explanatory. Minor concerns 1 and 2 are considered together and explained below. 2. The authors hypothesize that microgravity and gravity-driven laminar flows enhance maturation of cerebral organoids recapitulating features of human embryonic cortical development. It would be interesting to discuss the findings that support this hypothesis. Recently, the transcriptomic analyses of cerebral organoids showed that mechanotransduction-associated genes including integrins, β-catenin, Wnt, and Delta-like pathway genes change under physical stress conditions 98. Thus, organoids exposed to varying magnitudes of hydrodynamic forces are expected to display differences in structural and functional features during development.
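To give a rough sense of the magnitudes involved, a minimal Couette-flow sketch is shown below. This is not the authors' CFD model; the viscosity, gap and relative-velocity values are assumptions chosen only to illustrate why near solid-body rotation in an RCCS implies far lower shear than a stirred or shaken vessel.

```python
# Order-of-magnitude sketch of wall shear stress via a Couette approximation
# (tau = mu * dU / h for a linear velocity profile). All parameter values are
# illustrative assumptions, not measurements from the study.

mu = 0.78e-3   # dynamic viscosity of culture medium at 37 C, Pa*s (assumed)

def couette_shear(relative_velocity, gap):
    """Shear stress (Pa) for fluid sliding past a surface across a thin gap (m)."""
    return mu * relative_velocity / gap

# Spinner/shaker: medium sweeps past a quasi-stationary organoid surface.
tau_stirred = couette_shear(relative_velocity=0.05, gap=1e-3)   # 5 cm/s, 1 mm

# RCCS: vessel and medium rotate together (near solid-body rotation), so the
# fluid velocity relative to the slowly settling organoid is tiny.
tau_rccs = couette_shear(relative_velocity=0.001, gap=1e-3)     # 1 mm/s, 1 mm

print(f"stirred-like shear ~ {tau_stirred * 10:.2f} dyn/cm^2")  # 1 Pa = 10 dyn/cm^2
print(f"RCCS-like shear    ~ {tau_rccs * 10:.3f} dyn/cm^2")
```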
As such, cell proliferation of hPSCs has been shown to increase under simulated microgravity culture conditions with significantly higher expression levels of Ki-67 compared to the 1g culture condition, leading to enhanced self-renewal, as revealed by the increased protein levels of the core set of pluripotent transcription factors 99. In another study, human embryonic stem cell (hESC)-derived forebrain-specific neural-cortical organoids were shown to have the highest expression of Ki-67 when cultured under RCCS conditions after day 14 of the generation process, compared to organoids cultured in static conditions and at other days of generation in RCCS 100. These results are in agreement with our study and support the increased expression levels of Ki-67 in RCCS organoids. Also, the use of RCCS under microgravity conditions in embryoid body (EB) formation from ESCs has been shown to promote more homogeneous EB formation and endoderm differentiation by modulating the Wnt/β-catenin pathway 101. On the other hand, the most severe subtype of spina bifida, which is one of the neural tube defects that starts in the 4th week of pregnancy, occurs when the spinal cord is exposed to shear stress from amniotic fluid 102,103. This negative effect of shear stress on brain and spinal cord formation during embryonic development highlights the importance of shear stress during organoid maturation. 3. "In terms of tissue structure, while the most dispersed organoid structure was observed in static culture and the most uniform distribution was observed in the shaker, the well-defined multi-patterned organoid morphologies with self-organization and desired cellular architectures in the inner-outer regions 6,13,28 were observed in the RCCS microgravity bioreactor, followed by µ-platform and spinner systems (Fig. 2b,c,d,e,f)." It is not clear which parameters the authors used to say the latter. How was this quantified? Clearer information about the H&E staining was added to the text as follows, and white arrows were added to Fig. 2c,d,e,f,g to allow better understanding (images at small magnification were removed to avoid complexity): In terms of tissue structure, while the most dispersed organoid structure was observed in static culture and the most uniform cell distribution (unorganized) was observed in the shaker, the well-defined multi-patterned organoid morphologies with self-organization and desired cellular architectures in the inner-outer regions 6,12,28 were more observed in the RCCS microgravity bioreactor (sequentially organized cell nuclei and neural rosette-like structures indicated by white arrows), followed by µ-platform and spinner systems (Fig. 2c,d,e,f,g). A harvestability graph was added in Fig. 2b and organoid size graphs in Fig. 2c,d,e,f,g. 4. What does "neurons at different polarized states" mean? This statement is detailed as follows: During human embryonic brain development, neurons and glial cells are derived from neural stem/progenitor cells that generate specific brain regions and cortical layers. The derivation is achieved by an asymmetric spatial organization of cells and different components, explained as cell polarity. In the polarization process, the ventricular zone (VZ) formed by neuroepithelial cells is followed by the subventricular zone (SVZ), where apical progenitors form basal progenitors, and then the cortical plate (CP), where apical and basal progenitors form neurons 50. The degree of efficient differentiation is determined by the expression of neurons at different polarized states and progenitor-specific markers. 5.
Authors mentioned the presence of "cortical plate" in their organoids, but this is not clear from the images provided. As well as the N-CAD/FOXG1 image replacement for RCCS organoids in Fig. 4a, a new staining was carried out and high-quality images demonstrating a high proportion of FOXG1+ and PAX6+ cells were added to assess the quality of neuronal cell differentiation in Fig. 6a. Also, white arrows were added to indicate TBR1+/PAX6+/SOX2+/FOXG1+ and DAPI+ neural rosette and neural tube-like structures in Fig. 4a, 6a,b and Supp. Fig. 2. Then, yellow dashed lines were added to indicate cortical plate layer borders in Fig. 4a, 5a, between the early-born neurons expressing CTIP2 in the deep layer and the late-born neurons expressing SATB2 in the superficial upper layer. Next, a schematic illustration and new literature information were added to the text to show the cellular organization of matured cerebral organoids, as in Fig. 4b. 6. "Paired box protein 6 (PAX6) is a transcription factor expressed in neural stem cells and ventral forebrain radial glial progenitor cells. Although expressed in all brain regions, the highest expression level is reported at the cerebellum, which is involved in glial cell differentiation55." This sentence is confusing: it is unclear what the relationship is between glial cell differentiation and the cerebellum. Paired box protein 6 (PAX6) is a transcription factor expressed in neural stem cells along with ventral forebrain radial glial progenitor cells and has an important role in the differentiation of radial glial cells located in different regions of the CNS, such as the cerebral cortex, cerebellum, forebrain and hindbrain 59. 7. "CTIP2, SATB2, and TBR1 were upregulated at protein and RNA levels which indicated cortical neuronal differentiation in both RCCS and µ-platform organoids mainly on day 120 (Fig. 4, Supp. Fig. 3), supporting the spatial distribution and functional organization of cortical neurons61." It has been demonstrated in 3D brain organoid cultures that generation of the different cortical neuron subtypes is not necessarily accompanied by structural/functional organization in the stereotypical layers of the human cerebral cortex (PMID: 24277810). Therefore, this sentence should be revised. The sentence is revised as follows: CTIP2, SATB2, and TBR1 were upregulated at protein and RNA expression levels in both RCCS and µ-platform organoids mainly on day 120 (Fig. 4, Supp. Fig. 3, Supp. Table 5), showing an inside-out pattern of the CP (yellow dashed lines indicate CP layer borders). Early-born neurons expressed CTIP2 and TBR1 in the deep layer, whereas late-born neurons expressed SATB2 in the superficial upper layer with obvious spatial zone separation 65,66. 8. In the methods, the authors mentioned that some modifications were made to the organoid protocol, but they do not mention what was done. The sentence is revised as follows to avoid any confusion: The cerebral organoid generation steps were based on the detailed method of Lancaster and Knoblich 116 with some minor modifications regarding the durations in consecutive steps starting from maturation, in which dynamic systems and flow rates are different from the applied method. Reviewer #2 (Remarks to the Author): Saglam-Metiner et al. reported that the RCCS microgravity bioreactor and μ-platform improved the harvestability, scalability, reproducibility, and survival of cerebral organoids, as well as functional maturation.
It is assumed that comparing these various 3D culture methods with each other required a lot of effort, and the results should provide important information to the scientific research field. However, their findings are just observations, and quantitative data are lacking. Statistical analysis of quantitative data is a basic requirement for scientific consideration. Furthermore, there are no data to allow the evaluation of physiological function, although the authors mentioned that their organoids have achieved functional maturity. There remain many issues to be resolved before reaching the conclusion the authors proposed. Thanks for the comments and suggestions, which helped to increase the quality of the manuscript. Please find our point-by-point responses to each of the comments raised. Major points: 1. The authors should add the quantitative data to explain the improvement of harvestability, scalability, reproducibility, and survival of cerebral organoids to compare the respective culture conditions, and they should present the results of statistical analysis. Thank you for the comment. We added a graph and statistical analysis to highlight the harvestability results in Fig. 2b. The sentences were revised in the text as follows: On the 15th day of culture, generated organoids (derived from two different passage numbers of iPSCs at two different times) were transferred to RCCS, µ-platform, spinner and shaker, along with static culture as the control group (considered as day 0 of maturation), in order to determine the best dynamic system for physical and functional maturation of cerebral organoids, which were cultured for 120 days. Organoid size distribution variability is known to be a limitation of traditional human brain organoid protocols, as well as cellular composition and organization. Therefore, organoid reproducibility and batch-to-batch variability were first examined in terms of harvestability, macroscopic-microscopic observations and organoid size distribution (Fig. 2b,c,d,e,f,g). As such, cellular composition and cellular organization were assessed with cell sorting, immunostaining, western blot, qRT-PCR, TUNEL and glutamate secretion analyses for organoids sampled on various days (days 30, 60 and 120). We used the ImageJ program to quantify the apoptotic zone vs. nucleus ratio; the data were then analyzed with GraphPad Prism 8.3.0. The quantitative results of apoptotic zone % in Fig. 6d, with statistical analysis, were added to the revised text. With respect to the scalability, the sentences in the discussion section were corrected as follows to eliminate the ambiguity: Although scale-up was not planned for this study, we presume that about 1000 matured organoids can be harvested in one batch if we scale up to 500 mL RCCS STLVs; likewise, the µ-platform can be multiplied by connecting units in a parallel or serial manner for scale-up purposes. 2. Although the authors described organoid maturation, they only presented immunostaining and western blot data to show protein expression related to synapses such as PSD95. The authors should present the data of physiological analysis to show that their organoids have acquired functional maturation as neurons. Thanks for the suggestion. Besides postsynaptic PSD95 and presynaptic vGLUT1 staining, we additionally performed immunofluorescence staining with the presynaptic marker GABA-A to indicate GABAergic interneurons in organoids, and obtained high-quality images under the confocal microscope. GABA-A immunofluorescence staining is given in Fig. 5a,
and details about GABA-A are provided in the results section as follows. Moreover, western blot analysis was performed to support the results in Fig. 5c. During early corticogenesis, bursts of action potentials cause spreading of giant waves of calcium influxes through the developing cortex, which is described as giant depolarizing potentials (GDPs), depending both on excitatory glutamate and gamma-aminobutyric acid (GABA) inputs. Glutamate-dependent GDP-like events were reported in neuronal organoids. Reduced GDPs in >40-day organoids with the GABA polarity switch were regarded as indicators of progressive neuronal network maturation 78. On the other hand, the glutamate level secreted from microglia was measured at around 20 µM in healthy in vitro neuronal cultures 79,80. Given that, the change in the uptake, and consequently the release rate, of extracellular glutamate in 60-day RCCS and µ-platform organoids and the growth in organoid sizes might be associated with more mature states. One of the distinguishing features of neuronal circuit formation and maturation is the switch of excitatory to inhibitory GABAergic neurotransmission. GABA-A, a presynaptic GABAergic interneuron receptor that modulates neurotransmitter release in both peripheral and central synapses 7,78,81-83, was more expressed in both RCCS and µ-platform organoids with CTIP2+ deep-layer cortical neurons as of day 60 (Fig. 5a,b,c). Thus, the neuronal network complexity was validated in RCCS and µ-platform organoids with the presence of both presynaptic glutamatergic VGLUT1+ and GABAergic GABA-A+ neurons, which are critically involved in neuronal network oscillations, as well as postsynaptic PSD95+ neurons, GFAP+/S100B+ astrocytes and MBP+/OLIG2+ oligodendrocytes. Additionally, we performed glutamate analysis to examine the molecular and functional maturation of RCCS and µ-platform organoids, as depicted in Fig. 5d; details are given in the methods and results sections, as mentioned before. 3. Although the authors mentioned that a layered brain structure has formed, the figures showed no such structure but only a scattering of cells expressing markers for the cortical layers. The authors should correct the description. Following the reviewer's valuable suggestion, a new staining of FOXG1+ and PAX6+ cells was added to assess the quality of neuronal differentiation in Fig. 6a, and white arrows were added to indicate TBR1+/PAX6+/SOX2+/FOXG1+ and DAPI+ neural rosette and neural tube-like structures in Fig. 4a, 6a,b and Supp. Fig. 2. Also, a schematic illustration and new literature information were added to the text to show the cellular organization of matured cerebral organoids with respect to the MZ, CP, oSVZ, SVZ and VZ zones, as depicted in Fig. 4b. Yellow dashed lines were added to indicate cortical plate layer borders in Fig. 4a, 5a, between the early-born neurons expressing CTIP2 in the deep layer and the late-born neurons expressing SATB2 in the superficial upper layer. 4. The authors should provide the details of the equipment, and especially the μ-platform that was reported as being newly constructed in this paper. Thanks, "Supp. Fig. 1" has been revised by adding all system illustrations and detailed with the size/volume information used in the simulation. 5. The authors should compare the cell-type characteristics and maturation status in single-cell resolution methods like single-cell RNA sequencing or detailed IFC investigation because organoids contain various types of brain cells.
Quantification of these data will be helpful for understanding the beneficial points of the presented platform. We appreciate the reviewer's insightful suggestion. Besides detailed IF, WB and qRT-PCR analyses, the manuscript has been much improved with the newly performed quantitative FACS analysis to examine important types of brain cells, such as CD45+/CD11b+ microglia and GFAP+ astrocytes, in RCCS organoids, as presented in Fig. 3d. 6. The authors attempted to characterize different culture platforms by hydrodynamic analysis. This analysis clarifies the characteristics of how the medium moves and can be considered to be useful data. However, what this analysis reveals is limited to phenomena outside of organoids, such as the distribution of shear stress. This analysis unfortunately does not help us to understand what physical effects occur inside individual organoids and, as a result, affect differentiation and maturation. Care should be taken so as not to overstate this point, and to improve the description to avoid misunderstandings. Thanks for pointing this out, we agree that the flow simulation just characterizes the hydrodynamics (velocity, shear stress, flow distribution, etc.) acting on the organoids. However, the purpose of this work was not related to clarifying the underlying mechanisms of mechanotransduction signaling. Herein, the effects of different external fluid dynamics on maturation in cerebral organoids were examined with gene and protein expression levels. Furthermore, we believe that this work will shed light on future studies that will show, through transcriptomic analyses, which pathways responsible for mechanotransduction are activated. Following this comment, we avoided overstatements in the text and added a more in-depth discussion as follows: Physical forces, such as shear stress, gravity and cyclic stretch, affect mechanosensitive pathways or change expression levels 22. Recently, the transcriptomic analyses of cerebral organoids showed that mechanotransduction-associated genes including integrins, β-catenin, Wnt, and Delta-like pathway genes change under physical stress conditions 98. Thus, organoids exposed to varying magnitudes of hydrodynamic forces are expected to display differences in structural and functional features during development. As such, cell proliferation of hPSCs has been shown to increase under simulated microgravity culture conditions with significantly higher expression levels of Ki-67 compared to the 1g culture condition, leading to enhanced self-renewal, as revealed by the increased protein levels of the core set of pluripotent transcription factors 99. In another study, human embryonic stem cell (hESC)-derived forebrain-specific neural-cortical organoids were shown to have the highest expression of Ki-67 when cultured under RCCS conditions after day 14 of the generation process, compared to organoids cultured in static conditions and at other days of generation in RCCS 31. These results are in agreement with our study and support the increased expression levels of Ki-67 in RCCS organoids. Also, the use of RCCS under microgravity conditions in embryoid body (EB) formation from ESCs has been shown to promote more homogeneous EB formation and endoderm differentiation by modulating the Wnt/β-catenin pathway 100. On the other hand, the most severe subtype of spina bifida, which is one of the neural tube defects that starts in the 4th week of pregnancy, occurs when the spinal cord is exposed to shear stress from amniotic fluid 101,102.
This negative effect of shear stress on brain and spinal cord formation during embryonic development highlights the importance of shear stress during organoid maturation. Minor points: 1. Signal/background ratios of images are not enough. Revise the setting when acquiring images. Thanks for the comment. The quality and resolution of all figures have been increased and revised in the illustration program. If required, all figures can be uploaded in tiff format, separately from the manuscript. 2. The shape and marginal border of organoids are irregular, and these organoids appeared to be dying in Figures 3-6. The authors can investigate and describe what happens, e.g. cellular migration, cell death, unique differentiation, etc.

Final Revision Instructions
To the Author: Please review the editorial comments and requests below and confirm that changes have been made in the manuscript in the right-hand column. This document must be uploaded as a related manuscript file. Please see our final file submission checklist for information about submitting your revised documents. Main manuscript file must be in Microsoft Word or LaTeX format. LaTeX and TeX article source files must be accompanied by the compiled PDF for reference. The bibliography must be submitted separately (as a .bib file) or contained within the .tex file. All figures, tables, and supplementary items must be cited in the manuscript and numbered in the order in which they appear. Please ensure that all equations are supplied in an editable format upon resubmission. Equations must be numbered sequentially. Please check whether your manuscript contains third-party images, such as figures from the literature, stock photos, clip art or commercial satellite and map data. We strongly discourage the use or adaptation of previously published images, but if this is unavoidable, please request the necessary rights documentation to re-use such material from the relevant copyright holders and return this to us when you submit your revised manuscript. An appropriate permissions statement must be present in the relevant figure caption for any third-party images. Please check that you have not copied any text directly from published work (even your own) without clear attribution, including one or more references. We run plagiarism detection software and may need to request additional changes if we identify large blocks of identical text. An updated editorial policy checklist that verifies compliance with all required editorial policies must be completed and uploaded with the revised manuscript. All points on the policy checklist must be addressed; if needed, please revise your manuscript in response to these points. https://www.nature.com/documents/nr-editorial-policy-checklist.pdf. Please note that this form is a dynamic 'smart pdf' and must therefore be downloaded and completed in Adobe Reader. This file will not open in an internet browser. The reporting summary will be published alongside your manuscript; therefore, it needs to accurately represent your work. In this case, please take a closer look at the reporting summary and make sure things are completed correctly. If an item does not apply, for example human participants, I need you to check the NA box next to that item. No section should be left blank. Also, please make sure to include your name and date at the top of the document. If you require a new Reporting Summary form, please download it here: https://www.nature.com/documents/nr-reporting-summary.pdf.
Please note that this form is a dynamic 'smart pdf' and must therefore be downloaded and completed in Adobe Reader. This file will not open in an internet browser. Your paper will be accompanied by a brief editor's summary when it is published on our homepage. Please approve the draft summary below or provide us with a suitably edited version (no more than 250 characters including spaces). Two engineering approaches, a rotating cell culture system (RCCS) and microfluidic platform (µ-platform), are presented with the goal of improving harvestability, scalability, reproducibility and survival of human brain organoids in long-term culture.

ORCID
Communications Biology is committed to improving transparency in authorship. As part of our efforts in this direction, we are now requesting that all authors identified as 'corresponding author' create and link their Open Researcher and Contributor Identifier (ORCID) with their account on the Manuscript Tracking System (MTS) prior to acceptance. ORCID helps the scientific community achieve unambiguous attribution of all scholarly contributions. For more information please visit http://www.springernature.com/orcid. For all corresponding authors listed on the manuscript, please follow the instructions in the link below to link your ORCID to your account on our MTS before submitting the final version of the manuscript. If you do not yet have an ORCID you will be able to create one in minutes. https://www.springernature.com/gp/researchers/orcid/orcid-for-natureresearch IMPORTANT: All authors identified as 'corresponding author' on the manuscript must follow these instructions. Non-corresponding authors do not have to link their ORCIDs but are encouraged to do so. Please note that it will not be possible to add/modify ORCIDs at proof. Thus, if they wish to have their ORCID added to the paper they must also follow the above procedure prior to acceptance. To support ORCID's aims, we only allow a single ORCID identifier to be attached to one account. If you have any issues attaching an ORCID identifier to your MTS account, please contact the Platform Support Helpdesk at http://platformsupport.nature.com/ We regularly highlight papers published in Communications Biology on the journal's Twitter account (@CommsBio). If you would like us to mention authors, institutions, or lab groups in these tweets, please provide the relevant Twitter handles in the right-hand column. We would welcome the submission of material for the 'Featured Image' section on the Communications Biology home page. Images should relate to the content of your manuscript but need not be contained within the paper. Photographs and aesthetically interesting images are preferred; diagrams are generally not used. Suggestions should be uploaded as a Related Manuscript file. Please provide 1200x675-pixel RGB images. You will also need to submit a completed Image License to Publish. Unfortunately, we cannot promise that your suggestions will be used.
• If you include a title page, please check that the title and author list match the main manuscript.
• All Supplementary items must be referred to in the manuscript, and items must be mentioned in numerical order. Please do not include general references to "Supplementary Material"; instead refer to specific items. Supplementary Information files will be uploaded with the published article as they are submitted with the final version of your manuscript. Any highlighting or tracked changes should be removed from the file.
All supplementary files have been overhauled, and the title and author list have been added. The excel files that were uploaded previously as "Supp. Table 2-3" have been revised and labelled as "Supp. Data 2-3". Finally, each file is provided as an excel file. It's mandatory to provide access to the numerical source data for graphs and charts: We strongly recommend depositing these to suitable repositories (such as Figshare, Dryad, or a data type-specific repository if one exists). Otherwise, all source data underlying the graphs and charts presented in the main figures must be uploaded as Supplementary Data (in Excel or text format). Note that only the data used directly for generating the charts needs to be supplied. The numerical source data for

Title Page
Please ensure that the author list provided in our manuscript tracking system matches the author list in the main manuscript. Please check that your author list and affiliations comply with the following:
• Where relevant, "present address" must be provided separately as the final affiliation.
• At least one corresponding author must be designated, and an e-mail address must be provided for each corresponding author (with a limit of one e-mail address per author).

Manuscript title
Please ensure the title clearly describes the central finding of the paper. We recommend writing the title as a declarative statement of approximately 15 words or fewer. Please include exact p-values where possible. We ask that you also include the name of the statistical test and the estimated effect size. If applicable, please also include the confidence interval. Avoid the use of the word "significant" unless referring to the results of a statistical test. Please check that all gene and mRNA names are in italics. Protein names should not be in italics. Please confirm that only official gene/protein symbols are used and that species names are in italics. All data that support the conclusions drawn must be presented in the manuscript unless they are published elsewhere. We do not allow statements of "data not shown". We removed this part of the manuscript as suggested. Please avoid abbreviating terms unless they are used five or more times. We ask that you avoid all non-standard two-letter abbreviations. Use of speech marks around words or phrases should be avoided; if a phrase is non-standard, please explain the meaning instead; otherwise they are usually unnecessary. Axis and panel labels will be published as received. We recommend using a sans-serif font such as Arial or Helvetica.

Data presentation in bar graphs and line graphs
For all graphs depicting a single point value (e.g., mean) with error bars, you must add individual data points or convert the graph to a boxplot or dotplot. You may wish to refer to this blog post about representing data distribution in plots (particularly for small datasets). We strongly encourage the same for plots with multiple time courses depicted. See the June 24, 2019 CommsBio editorial for more details about this policy. Example plots are shown here (figure caption: Examples of plots showing data distribution; Figure 2 from the editorial linked to above). We converted the Fig. 2b, Fig. 5d and Fig. 6d graphs to dot plot-like graphs as suggested. When choosing a color scheme, please consider how it will display in black and white (if printed), and to users with color blindness.
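For instance, a dot plot that exposes every data point can be produced with a few lines of matplotlib; the sketch below is only an illustration of the requested conversion (the group labels and values are invented, not data from the manuscript), drawn in a single colorblind-safe hue:

```python
import matplotlib.pyplot as plt
import numpy as np

# Illustrative conversion: instead of a bar showing mean +/- SD, plot each
# individual data point with the group mean marked. Values are made up.
rng = np.random.default_rng(0)
groups = ["Static", "Shaker", "Spinner", "u-platform", "RCCS"]
data = [rng.normal(loc=m, scale=5, size=8) for m in (40, 55, 60, 70, 80)]

fig, ax = plt.subplots(figsize=(5, 3))
for i, values in enumerate(data):
    # jittered individual points in a colorblind-safe hue (Okabe-Ito blue)
    x = np.full(values.size, i) + rng.uniform(-0.12, 0.12, values.size)
    ax.scatter(x, values, s=18, color="#0072B2", alpha=0.8)
    ax.hlines(values.mean(), i - 0.25, i + 0.25, color="black")  # group mean

ax.set_xticks(range(len(groups)), groups)
ax.set_ylabel("Harvestability (%)")
fig.tight_layout()
fig.savefig("dotplot_example.png", dpi=300)
```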
Please consider distinguishing data series using line patterns rather than colors, or using optimized color palettes such as those found at https://www.nature.com/articles/nmeth.1618. The use of colored axes and labels should be avoided. Please avoid the use of red/green color contrasts, as these may be difficult to interpret for colorblind readers.

Blots and gels
All blots/gels must be accompanied by size markers in every figure panel. Uncropped and unedited blot/gel images must be included as a Supplementary Figure. Please pay close attention to our Digital Image Integrity Guidelines and to the following points below:
• that unprocessed scans are clearly labelled and match the gels and western blots presented in figures. Unprocessed scans must be included in a supplementary figure.
• that control panels for gels and western blots are appropriately described as loading or sample processing controls
• all images in the paper are checked for duplication of panels and for splicing of gel lanes.
Finally, please ensure that you retain unprocessed data and metadata files after publication, ideally archiving data in perpetuity, as these may be requested during the peer review and production process or after publication if any issues arise. Within the comments, the original blot images of the WB analysis of these protein groups are given as Supp. Fig. 2, and the manuscript is revised according to the changes. The following should be considered when viewing blot images: *The protein standard (Bio-Rad, 161-0394) used during gel loading in the WB analysis could not be visualized effectively together with the protein bands in the chemiluminescence imaging system (Bio-Rad, ChemiDoc MP Imaging System containing a CCD camera) at a wavelength of 428 nm. Therefore, the kDa data of the protein bands were indicated by arrows. *As a result, although precisely the same amounts of protein were loaded onto the gels for each sample group, the proteins freshly isolated from different organoids harvested in different batches at different times were sometimes loaded onto different gels at different times and in different orders. *And some results are from experimental groups run on gels loaded with proteins of static organoids and/or day 30 organoids whose results are not shown in the study. For that, the orders of samples given in some membranes are also different. *On the other hand, most of the time, PVDF membranes were cut at places close to the recommended kDa sizes for targeted proteins, and effective labeling was performed with small amounts of antibodies in order to work repetitively and economically. *Also sometimes, proteins of around the same kDa are labeled on the same cut membrane after effective blocking.

Tables in the main text
Please check that your Tables comply with the following:
• Do not include shading or colors. All Tables must contain black and white text only.
• Any bold/italic formatting must be either removed or defined clearly in a Table footnote.
• Where Tables contain images, each image should appear in its own cell in the absence of any text.
• All Tables must have a brief title.

Methods
Please ensure that all information present in the Reporting Summary is also in the manuscript. This information is usually most appropriate in the Methods section. We allow unlimited space for Methods. The Methods must contain sufficient detail such that the work could be repeated. It is preferable that all key methods be included in the main manuscript, rather than in the Supplementary Information.
Please avoid use of "as described previously" or similar, and instead detail the specific methods used with appropriate attribution. The Methods should include a separate section titled "Statistics and Reproducibility" with general information on how the statistical analyses of the data were conducted, and general information on the reproducibility of experiments (also those lacking statistical analysis), including the sample sizes and number of replicates and how replicates were defined. We revised the "Statistical analysis" section as "Statistics and Reproducibility" and detailed it in the manuscript as recommended. We encourage you to include the following statement about the use of human brain organoids in your studies: All experiments involving hiPSCs from human subjects were performed in compliance with [please provide ethical review committee and any reference numbers]. Please also consider including one of the following statements: [] The authors concluded [in consultation with, or not in consultation with] ethicists that no testing for sentience or sentience-like capacities was necessary for this experiment; or [] The authors implemented the following measures to test or address any potential questions about sentience or sentience-like capacities in this experiment. For this study, we used an iPSC line that was previously derived from fibroblasts of a healthy donor and characterized by our project partners during their previous study that was published in Stem Cell Reports Journal (https://doi.org/10.1016/j.stemcr.2019.08.007). Therefore, there were no ethical review committee requirements for this study, which was revised as follows: "Maintenance of human induced pluripotent stem cells (iPSCs). iPSC lines that were previously reprogrammed from human dermal fibroblasts of healthy donors and characterized in terms of pluripotency markers and mycoplasma purity, were obtained from Izmir Biomedicine and Genome Center, Stem Cell and Organoid Technologies Laboratory 115. So, there were no ethical review committee requirements for this study." If applicable, all oligo sequences, concentrations of antibodies, and sources of cell lines must be included in the Methods (these can also be provided in a main Table and cited in the Methods). Nature Portfolio journals encourage authors to share their step-by-step experimental protocols on a protocol sharing platform of their choice. The Nature Portfolio's Protocol Exchange is a free-to-use and open resource for protocols; protocols deposited in Protocol Exchange are citable and can be linked from the published article. More details can be found at https://protocolexchange.researchsquare.com/

Data Policies
The Data Availability statement must include:
• Access details for deposited data, including repository name and unique data ID.
• How source data can be obtained.
• A statement that all other data are available from the corresponding author (or other sources, as applicable) on reasonable request.
Note that 'available upon request' is only appropriate if immediate data access has not been mandated by our policies or by the editors. See here for more information about formatting your Data Availability Statement: http://www.springernature.com/gp/authors/research-datapolicy/data-availability-statements/12330880 The Data Availability section was revised as follows: Source data for the presented figures are provided as Supp. Data 1-2-3 with this paper.
Further simulation data generated and analyzed during the current study are available from the corresponding author on reasonable request. Besides that, the normalized RNA expression levels of the specific neuronal/glial cell markers in different cell types were downloaded from the RNA single cell type data of the Human Protein Atlas database (https://www.proteinatlas.org/about/download; rna_single_cell_type.tsv.zip). Also, Biological Process (Gene Ontology) and Tissue expression (TISSUES) enrichments of
Ambiguity and Unintended Inferences About Risk Messages for COVID-19
The World Health Organization established that the risk of suffering severe symptoms from coronavirus disease (COVID-19) is higher for some groups, but this does not mean their chances of infection are higher. However, public health messages often highlight the "increased risk" for these groups such that the risk could be interpreted as being about contracting an infection rather than suffering severe symptoms from the illness (as intended). Stressing the risk for vulnerable groups may also prompt inferences that individuals not highlighted in the message have lower risk than previously believed. In five studies, we investigated how U.K. residents interpreted such risk messages about COVID-19 (n = 396, n = 399, n = 432, n = 474) and a hypothetical new virus (n = 454). Participants recognized that the risk was about experiencing severe symptoms, but over half also believed that the risk was about infection, and had a corresponding heightened perception that vulnerable people were more likely to be infected. Risk messages that clarified the risk event reduced misinterpretations for a hypothetical new virus, but existing misinterpretations of coronavirus risks were resistant to correction. We discuss the need for greater clarity in public health messaging by distinguishing between the two risk events. vulnerable group leads to unintended inferences about the risks faced by the rest of the population.

Ambiguity and Possible Misinterpretation of What is at Risk for Vulnerable Groups
Risk messages are often ambiguous, meaning that it is unclear what exactly is risky (Gigerenzer & Edwards, 2003; Gigerenzer & Galesic, 2012; Harris & Corner, 2011; Nygren et al., 1996; Slovic & Lichtenstein, 1968). For COVID-19, "at risk" could refer to the probability of contracting the disease (i.e., becoming infected), the probability of suffering severe symptoms from the disease (Bi et al., 2020; Jones et al., 2020; Onder et al., 2020; Verity et al., 2020), or other health consequences, such as the probability of long-term COVID-19 symptoms (Nabavi, 2020). Current evidence suggests that social and behavioral factors (e.g., the motivation and ability to practice social distancing), rather than intrinsic characteristics (e.g., age, gender, or ethnicity), influence people's likelihood of contracting the disease (e.g., Jones et al., 2020). In contrast, there is strong evidence that certain intrinsic characteristics mean some people are more likely to suffer severe symptoms from COVID-19, such as being over 70 years of age or having underlying medical conditions (Onder et al., 2020). The different probabilities for contracting a disease and for suffering severe symptoms from it are not unique to COVID-19. For example, the probability of getting the flu is about 3%-11% and comparable across age groups (Tokars et al., 2018). However, the probability of hospitalization is higher for adults over 65 years of age than for younger adults. In the 2018-2019 U.S. flu season, for example, the national authority estimated that only 1%-2% of flu cases in the overall population resulted in hospitalization, but this rate of hospitalization was 5-10 times greater among over-65s, at 9%-10% of flu cases in this group (Centers for Disease Control & Prevention, 2020a).
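The arithmetic behind this distinction can be made explicit by separating the probability of infection from the conditional probability of hospitalization given infection. The sketch below is our illustration only, using assumed midpoints of the ranges reported above:

```python
# Decomposition of the two risks:
# P(hospitalization) = P(infection) * P(hospitalization | infection).
# The numbers are assumed midpoints of the 2018-2019 U.S. flu season ranges cited above.

p_infection = 0.07                # ~3%-11% chance of getting the flu, similar across age groups

p_hosp_given_flu_overall = 0.015  # ~1%-2% of flu cases hospitalized in the overall population
p_hosp_given_flu_over65 = 0.095   # ~9%-10% of flu cases hospitalized among over-65s

p_hosp_overall = p_infection * p_hosp_given_flu_overall   # ~0.001
p_hosp_over65 = p_infection * p_hosp_given_flu_over65     # ~0.007

print(f"P(hospitalization), overall: {p_hosp_overall:.4f}")
print(f"P(hospitalization), over-65: {p_hosp_over65:.4f}")
# The infection term is identical for both groups; only the severity term
# differs, which is precisely the distinction that "at increased risk" blurs.
```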
For COVID-19, the World Health Organization communicated both risks clearly, stating on its website in April 2020: "Evidence to date suggests that children and young adults are less likely to get severe disease" and "children and adolescents are just as likely to become infected as any other age group and can spread the disease" (World Health Organization, 2020c). However, as shown in Table 1, other messages are less clear about what is riskier for groups identified as "at increased risk."
[Table 1 note: The U.S. Centers for Disease Control and Prevention message is clear on what is "at increased risk" whereas the Australian version is more ambiguous, with the U.K. NHS message being in between. Information gathered as of April 30, 2020 (Australian Government Department of Health, 2020; Centers for Disease Control & Prevention, 2020b; UK National Health Service, 2020). Public authorities may have changed the wording on their websites since. COVID-19 = coronavirus disease.]
It is notably unclear that this "risk" does not refer to the chance that one may become infected. When one reads that people over 70 years old are "at higher risk from coronavirus," they may consider (as the message intends) that the elderly are more likely to be severely affected by COVID-19. They may also consider that the elderly are more likely to contract the disease (and become contagious), which is not what the message intends. These risk messages could therefore be misunderstood as meaning that vulnerable people are more likely to be infected by the virus, in addition to, or instead of, being more likely to suffer severe symptoms because of it. We could therefore expect that the "at increased risk" messages create a difference in the perceived infection probabilities for vulnerable and nonvulnerable individuals. This could be because people simply raise their perceptions of the probability that a vulnerable individual would be infected. However, an alternative, nonexclusive, possibility is that people might lower their perceived probability that nonvulnerable individuals would be infected. Inferences of Lower-Than-Usual Risks to Nonvulnerable Groups When risk communicators state that certain groups are "at increased risk," the message is intended to mean that the disease is more dangerous for some individuals (because they are more likely to suffer severe symptoms from it). Semantically, the statement says nothing about the change of risk levels to nonvulnerable groups, and is presumably not intended to be interpreted as such. However, people often infer, pragmatically, meanings that go beyond the semantics of what communicators have explicitly said (Horn, 2006). Inferences drawn from speakers' choices of words regularly shape the interpretation of language in general (e.g., Hilton, 2008;Ingram et al., 2014;Keren, 2007;Sher & McKenzie, 2006) and risk quantifiers in particular (Juanchich et al., 2020;Sirota & Juanchich, 2012). Evidence suggests that when a particular risk is emphasized, people can infer that other, independent, risks are less likely to occur (Park et al., 2021;Windschitl et al., 2017). Therefore, when health authorities repeatedly communicate regarding the risks for vulnerable groups, this message could be taken as implicitly meaning: "the risk for other (nonvulnerable) people is lower than what one might expect," rather than that risk simply being lower compared to the vulnerable group. At the start of the pandemic, when data was limited, any individual might have assumed that everyone faced
equal risks from the virus. However, when faced with national health messages highlighting that "people who are 70 or older are at increased risk from coronavirus," instead of simply adjusting upward the perceived risks to the vulnerable group described in the messages, people may also have inferred that younger adults were at less risk than initially perceived. Because people regularly make pragmatic inferences, we hypothesized that exposure to risk messages that highlight higher risk to vulnerable groups would cause people to lower their perception of the risks to nonvulnerable individuals (instead of simply heightening their perceptions for vulnerable individuals). Lower perception of risks to nonvulnerable individuals could come in the form of lower perceived probability of severe symptoms or lower perceived probability of infection, depending on how the term "risk" is interpreted. We now know that the probability of severe symptoms for "nonvulnerable" individuals (the majority of people) is indeed lower than first expected (Verity et al., 2020), so this is not an inaccurate perception. However, believing that nonvulnerable individuals are less likely to contract COVID-19, whether compared to vulnerable individuals or compared to prior beliefs, is problematic because there is no consistent evidence that this is the case (e.g., Bi et al., 2020;Jing et al., 2020;Li et al., 2020). Earlier in the pandemic, some evidence suggested that COVID-19 case rates were lower in children (Stokes et al., 2020;Williams et al., 2021), which might suggest this was a nonvulnerable group for infection. However, evidence later emerged that children can be, and are just as likely as adults to be, infected, even if they are less likely to develop symptoms (Zimmermann & Curtis, 2020). If frequent exposure to ambiguous "at risk" messages lowers perceived infection probability for nonvulnerable groups like children, this could lead to inaccurate risk perception because it would hinder people from adjusting their probability estimates upward to account for new knowledge. Further, the misinterpretation that one is less likely to be infected may reduce support for protective measures (Beale et al., 2021;Lewnard & Lo, 2020). The majority of the population can be classed as nonvulnerable, and it is important that they are able to accurately interpret risk messages and make appropriate inferences about their likelihood of contracting the virus so that they take appropriate protective measures to reduce transmission. Objectives of Research We posited that the terminology "at increased risk from coronavirus" raises two concerns. First, people may be confused about whether the higher risk is referring to contracting the disease or to developing severe symptoms. Second, people could draw incorrect inferences about the relative magnitudes of each risk across groups compared to their prior expectation. We report five studies evaluating the ambiguity of risk messages that describe specific groups as being "at increased risk" from coronavirus and the inferences that people draw from these risk messages. We expected that some people would believe that being "at increased risk" meant being more likely to contract coronavirus and that this interpretation would affect their probability perceptions for vulnerable people to become infected by coronavirus. Furthermore, we expected that exposure to the risk message (vs. no exposure) would lead people to lower their estimated probabilities that nonvulnerable individuals would be infected.
Finally, to provide an evidence-based solution to resolve the ambiguity of the term "risk," we tested whether a clearer message that specifically mentioned exactly what risk is higher for vulnerable individuals would improve interpretations and probability perceptions (Experiments 4-5). Open Science Statement The five studies were preregistered. The preregistrations, along with materials and data for all the studies, are shared on the Open Science Framework (https://osf.io/q78ax/). All studies received approval from the University of Essex's research ethics committee prior to data collection. Study 1 Study 1 was conducted with U.K. residents recruited on Prolific on a single day (April 7, 2020). This was after the U.K. government had announced and enforced additional measures to control the COVID-19 pandemic (as of March 24, 2020): A stay-home order (including a ban on visiting other dwellings) with limited exemptions, closure of all except specified businesses and venues, and a ban on gatherings of more than two people in public spaces. In Study 1, participants were first randomly allocated to see a risk message regarding vulnerable groups or not. They then provided probability estimates of infection and severe symptoms for both vulnerable and nonvulnerable individuals before providing their interpretation of the risk message. We hypothesized that people would interpret the term "risk" as referring to the chance of developing severe symptoms (as intended by the message) but also believe that "risk" referred to the possibility of contracting the infection (which the message does not intend; H1.1). As a result, we expected that in addition to perceiving that vulnerable older adults had higher chances of hospitalization, participants would believe that they also had higher chances of infection compared to nonvulnerable others, and that this would be especially the case for people who believed that the term "risk" referred to probability of infection (H2.1). We also expected that exposure to the risk message (compared to no exposure) would increase the difference in estimated infection probability for vulnerable and nonvulnerable individuals because the message would lower the perceived probability that nonvulnerable individuals could become infected by the virus (H3.1). Finally, we expected that probability perceptions would be related to health recommendations participants would give to others (H4.1). Participants We recruited 396 participants (after excluding eight participants who failed an attention check, a preregistered exclusion criterion described below in the procedure). Participants were 56% female (43% male, 1% other or did not disclose), 79% White, and ages ranged from 18 to 79 years (M = 43.4, SD = 15.3 years). Further sociodemographic characteristics are reported in Table 2. Design, Materials, and Procedure We used a mixed design where we manipulated exposure to a risk message at the onset of the study between-subjects and vulnerability within-subjects. Participants were either exposed to a risk message or not (n = 198 each) before assessing the probability that someone would contract coronavirus and the probability that they would suffer severe symptoms from such an infection. In the vulnerability manipulation, the probability judgments focused on three individuals: An older adult over 70 (vulnerable), a healthy younger adult, and a healthy child (both nonvulnerable). The risk message was presented on a separate page from other questions as an image.
The message was an image taken from the U.K. National Health Service (NHS) website and it identified the groups considered "at increased risk from COVID-19" (see Figure 1). In the experimental condition, participants read the message in Figure 1 and then proceeded to answer the questions about their probability perceptions, risk interpretation, and health recommendations. In the control condition, participants went straight to answering these questions. Probability Perception Questions On separate pages, participants evaluated the probability that three different individuals would be infected by the COVID-19 virus over the next 30 days and the probability that each of these individuals would require hospitalization if they were infected by the virus (see exact wording of the questions in Table 3). The three individuals were: A vulnerable individual who belonged to the group at increased risk (aged over 70 years) and two "nonvulnerable" individuals who did not (children, defined as younger than 18 years old, and adults aged between 18 and 50 years). Participants provided their estimates as a numerical probability (between 0 and 100%). Because the probability of hospitalization for coronavirus infection was conditional on already having coronavirus, the question about the chance of contracting coronavirus was always presented before the question about the chance of needing hospitalization after contracting coronavirus.
[Table 2 note: -= Were not measured in a particular study. Sociodemographic variables for Study 5 (n = 454) are reported in the text. COVID-19 = coronavirus disease. a It is not possible to confirm whether someone has COVID-19 without a medical test; these tests are not widely offered in the U.K., so participants were only able to self-report symptoms.]
Participants provided each estimate on a separate page, with order of presentation of the age groups randomized. 2 Health Recommendations After completing the probability perception questions, participants reported whether they would advise people from each of the three age groups to stay at home 24/7 over the next 14 days, using a 4-point scale (0: not at all, 4: yes, completely). This question was presented in a matrix table with all the age groups presented simultaneously. Risk Message Interpretation Participants then provided their interpretation of what the National Health Service meant when they advised that "some people are at increased risk of severe illness from coronavirus (COVID-19)." Participants could answer "Yes", "No", or "I do not know" for each of three interpretations (presented simultaneously in a matrix table in a set order, as in Table 4): Being in the higher risk group means having a greater chance of...
• carrying the virus.
• being infected by the virus.
• being hospitalized because of the infection.
The first option was listed as a filler item so that participants would be less likely to perceive that the answers to the question were exclusive. 3 Finally, participants provided sociodemographic information. Participants completed the online study at the end of a separate study asking about beliefs in conspiracy theories and health protective behaviors (Juanchich et al., 2021). The study included a preregistered attention check question ("Please select the option 'definitely not true' to show that you are reading the questions") to detect poor response quality.
2 We also checked for two possible order effects. First, we tested whether presenting the infection probability question first or second (after the hospitalization question) had an effect on probability perceptions for each age group. There were no order effects on infection or hospitalization estimates for younger adults: M infection = 33% versus 36%, t infection(394) = 0.96, p = .337; M hospitalization = 16% versus 20%, t hospitalization(259.71) = 1.83, p = .068. Second, we tested the effect of presentation order of the two vulnerable versus nonvulnerable groups on probability perceptions. Participants who gave estimates for children first (compared to giving them later, after another age group) gave higher infection and hospitalization estimates: For infection, M = 32% versus M = 22%, t(210.96) = −2.64, p = .009; for hospitalization, M = 17% versus 12%, t(216.19) = −2.12, p = .035. Participants who gave estimates for older adults first (compared to giving them later, after another age group) gave lower infection estimates (M = 33% vs. M = 45%), t(394) = 3.49, p = .001, but no order effects were observed for hospitalization estimates, M = 44% versus 47%, t(394) = 0.87, p = .385. Note that these figures always reflect a higher probability estimate for older adults than younger adults and children, irrespective of which estimate was given first.
3 21% of participants considered that the risk referred to carrying the virus, but all of them believed this was in addition to one of the other risk interpretations. We include a breakdown of participants who said yes to the "carrying the virus" option and an analysis of its effect on probability perception in the Supplementary Material on the OSF (https://osf.io/q78ax/).
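Before the analyses described next, the design above implies two preprocessing steps: applying the preregistered attention-check exclusion and averaging the two nonvulnerable targets into one composite. A minimal sketch in Python follows; the file name and column names are hypothetical placeholders, not the study's actual variable names:

```python
# Minimal data-preparation sketch for Study 1, under assumed column names.
import pandas as pd

df = pd.read_csv("study1_raw.csv")  # hypothetical file, one row per participant

# Preregistered exclusion: keep only participants who answered the attention
# check correctly ("Please select the option 'definitely not true' ...").
df = df[df["attention_check"] == "definitely not true"]

# Collapse the two nonvulnerable targets (child, younger adult) into a single
# composite by averaging, mirroring the vulnerable/nonvulnerable factor.
df["p_infect_nonvulnerable"] = df[["p_infect_child", "p_infect_adult"]].mean(axis=1)
df["p_hosp_nonvulnerable"] = df[["p_hosp_child", "p_hosp_adult"]].mean(axis=1)
```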
Statistical Analyses We tested preregistered hypotheses about interpretations and probability perceptions (H1.1-H3.1) using planned analyses: 4 a multivariate analysis of variance (MANOVA) including vulnerability (vulnerable or nonvulnerable [children and younger adults together]), participants' interpretation of the term "risk" (whether it meant chance of infection: "yes" vs. "no" and "do not know" combined), and message exposure condition (control or message) as fixed factors, and probability perceptions for hospitalization and infection as dependent variables. The connection between probability perception and health recommendation (H4.1) was tested using correlational analyses. Statistical significance was determined at α = 0.05. For all effects involving variance analyses, we report partial η² effect sizes. 5 We report Cohen's d effect sizes for pairwise group comparisons.
4 These hypotheses were preregistered, but numbered in a different way. The preregistered analyses also included mediation tests for the effect of the message on recommendations to stay home via probability perceptions using PROCESS in SPSS (Hayes, 2013). The mediation analysis is reported in full in the Appendix and showed that there were effects of the message condition on the perceived probability of infection to children and that these perceptions were positively related to health recommendations, but no mediation was found.
5 In an analysis of variance, partial eta-square (ηp²) describes the proportion of variability associated with an effect after partialing out the variability associated with other effects in the analysis (Fritz et al., 2012).
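As a notational restatement of footnote 5 (these are the standard definitions, not additional analyses from the study), the two effect sizes reported throughout are computed as:

$$\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}, \qquad d = \frac{M_1 - M_2}{s_{\text{pooled}}}$$

where $SS$ denotes sums of squares from the analysis of variance and $s_{\text{pooled}}$ is the pooled standard deviation of the two groups being compared.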
Interpretation of the Term Risk and Associated Probability Perception Most participants (95%) recognized that being at increased risk characterized the chance of being hospitalized (see Table 4). Correspondingly, the MANOVA found a main effect of vulnerability on probability perceptions: Participants perceived that older adults had higher chances of being hospitalized if they contracted coronavirus (M = 45.84%, SD = 30.82), compared to children and younger adults (the nonvulnerable individuals) on average (M = 16.58%, SD = 19.37), F(1, 392) = 601.88, p < .001, ηp² = 0.61. Interestingly, viewing the risk message did not raise the perceived probability that an older adult would be hospitalized in case of infection (compared to not seeing the message), M message = 45.82%, SD = 30.97; M control = 45.85%, SD = 30.75; F(1, 391) = 0.05, p = .832, ηp² < .001. Supporting H1.1, over half our participants (56%) also interpreted "risk" to mean the chance of being infected with coronavirus (i.e., catching the disease), 95% CI [51%, 61%]. In line with this interpretation and supporting H2.1, participants perceived that older adults were more likely to become infected by the new coronavirus (M = 40.81, SD = 34.74) compared to younger adults and children taken together (M = 33.10, SD = 29.81), F(1, 392) = 85.03, p < .001, ηp² = 0.18. People who interpreted that "risk" meant the chance of infection were especially likely to believe that older adults had a higher chance of contracting the virus (M = 47.04, SD = 34.46) compared to people who did not interpret "risk" as chance of infection (M = 32.86, SD = 33.53), interaction effect: F(1, 392) = 35.19, p < .001, ηp² = 0.08. Participants also found it difficult to disentangle the probability of severe symptoms and the probability of infection, as indicated by the correlation between the two for both children and younger adults together (nonvulnerable) and older adults (vulnerable), r nonvulnerable = 0.50, p < .001; r vulnerable = 0.55, p < .001. Effect of Exposure to Risk Message on Inferences About Probability of Infection for Nonvulnerable Individuals As shown in Figure 2, the effect of exposure to the risk message on the perceived probability of infection varied as a function of the level of vulnerability of the individual for whom the judgment was made and as a function of participants' interpretation of "risk" as chance of infection. The MANOVA supported our hypothesis that exposure to the message would affect probability perceptions (H3.1), with an interaction effect between vulnerability and message exposure, F(1, 392) = 5.95, p = .015, ηp² = 0.02, and a significant three-way interaction between vulnerability, exposure to the risk message, and participants' interpretation of "risk" as infection, F(1, 392) = 3.95, p = .047, ηp² = 0.01. People who did not endorse the "infection" interpretation (n = 174) had a similar probability perception for infection among both children and younger adults (nonvulnerable) and older adults (vulnerable) whether they saw the message or not (shown in the left panels of Figure 2, with the blue and red lines close and in a similar location for the top and bottom panels). However, participants who endorsed the "infection" interpretation (n = 222) believed that older individuals were more likely than the other individuals to be infected, and this tendency was stronger in the experimental condition: The estimated difference in the probability of infection between older adults and the other nonvulnerable individuals (as a group) was greater after exposure to the risk message, as indicated by the larger gap between the red and blue lines in the top right panel compared to the bottom right panel of Figure 2. Among participants who interpreted the "risk" as infection, a pairwise comparison showed that probability perceptions for nonvulnerable individuals (children and younger adults) were, as predicted, lower with exposure to the message than no exposure (M message = 27.20, SD = 23.62; M control = 34.64, SD = 33.59), but the effect was not statistically significant, t(188.81) = 1.90, p = .060, d = −0.26.
[Table 3 note: Participants were also given the following instruction for how to provide their answer: "Your answer can range from 0% to 100% and can include up to three decimal places (e.g., enter 0.01% for a chance of 1 in 10,000)." Individuals and ages in the square brackets were either a child < 18 years, a healthy adult aged 18-50 years, or an older adult aged > 70 years, depending on the condition in the study. For Study 1, the two younger age groups were grouped and averaged to represent "nonvulnerable" individuals.]
Probability Perceptions and Health Recommendations Supporting H4.1, participants' probability perceptions were significantly positively correlated with how much people advised younger individuals to stay home, for infection probability: r = 0.12, p = .012 (child), r = 0.12, p = .014 (younger adult); and for hospitalization probability: r = 0.43, p < .001 (child), r = 0.13, p = .010 (younger adult). Notably for children, the correlation was much larger for perceived hospitalization probability than perceived infection probability, showing that severity had more influence than likelihood on recommendations. However, probability perceptions for older adults were not correlated with advice to this group to stay home, infection probability: r = 0.09, p = .091; hospitalization probability: r = 0.01, p = .824. Here, there was possibly a ceiling effect for advice to stay home (M = 3.81, SD = 0.50 for a 4-point scale). Mean recommendations for children and younger adults were M = 3.07 (SD = 0.80) and M = 3.09 (SD = 0.71), respectively. Interim Discussion Participants understood that "being at increased risk" meant being more likely to be hospitalized, but half also believed that the risk referred to the possibility of being infected with the new coronavirus. Participants perceived that nonvulnerable individuals (e.g., younger adults and children) were less likely to be infected than vulnerable ones, and especially so when they misinterpreted "at increased risk from coronavirus" to refer to the chance of coronavirus infection (not just severe symptoms, as was intended). We also found that exposure to the risk message did not increase participants' perception that vulnerable individuals could suffer severe symptoms from the illness. Instead, it affected participants' perception of the probability of infection.
For people who interpreted the "risk" as the chance of being infected, exposure to the risk message lowered their estimated probability of infection for nonvulnerable individuals and raised their estimated probability of infection for vulnerable ones (three-way interaction effect). This meant that participants who misinterpreted the "risk" as chance of infection had a larger gap in their probability perception of infection for vulnerable and nonvulnerable individuals. Study 2 In Study 2, we sought to replicate the findings of Study 1: The misinterpretation of the risk terminology and the effect of an ambiguous risk message on probability perceptions. We extended Study 1 by testing whether the effects would still hold when participants were assessing risks for only one group (vulnerable or nonvulnerable) instead of both as was the case in Study 1. Repeating Study 1 with a between-subjects design thus allowed us to rule out the possibility that participants estimated different probabilities for different age groups simply because they were asked to repeat the estimates (as evaluations can change depending on whether they are made jointly or separately; Hsee, 1996). Our hypotheses were the same as Study 1 (here numbered H1.2-H4.2). The study was conducted with U.K. residents from Prolific on a single day (April 30, 2020). At this point, the U.K. still had in place the same measures to limit the spread of COVID-19 as in early April 2020 when Study 1 was conducted. By this time, the NHS had also updated its "at increased risk" message to highlight three vulnerable groups of people: 70 years or older, pregnant, or with a condition that might increase risk. Participants We recruited 399 participants (after excluding seven participants who failed an attention check). Participants were 62% female (37% male, 1% other or did not disclose), 88% White. Ages ranged from 18 to 71 years (M = 35.1, SD = 12.6 years). Further sociodemographic characteristics are reported in Table 2. Design, Materials, and Procedure The materials and procedure were identical to Study 1, except participants only provided one set of probability perception judgments and recommendations, either for children or for older adults. Participants were randomly assigned to one of four between-subjects conditions, which came from crossing the message manipulation from Study 1 (exposure to the risk message, n = 199, or not, n = 200) and the age of the individual for whom participants provided their probability perceptions and recommendations (children [nonvulnerable], n = 200, or older adults [vulnerable], n = 199). We focused on children for the nonvulnerable group because this was where the effect of the message was largest in Study 1, thus affording us more power to detect it while reducing the number of possible comparisons in the analysis. 6 The risk message was identical to Study 1. Study 2 was also completed online, at the end of a separate study similar to that in Study 1. Statistical Analyses We had the same analytical approach to test our hypotheses as in Study 1, except that vulnerability (vulnerable vs. nonvulnerable) was now entered as a between-subjects factor in the MANOVA. Interpretation of the Term Risk and Associated Probability Perception Most participants (96%) recognized that being at increased risk characterized the chance of being hospitalized (see Table 4).
Consistently, participants perceived that older adults had a higher probability of severe symptoms due to COVID-19 (M = 46.03%, SD = 28.44%) than children (M = 13.41%, SD = 20.23%), F(1, 391) = 170.71, p < .001, ηp² = 0.30. Exposure to the risk message did not affect participants' perception of hospitalization probability or the difference in hospitalization probability perception across the two groups, F(1, 391) = 0.68, p = .411, ηp² < 0.01 (main effect); F(1, 391) = 0.18, p = .672, ηp² < .01 (interaction effect). Again, supporting H1.2, a majority of participants (55%) believed that the term "risk" referred to the chance of being infected with coronavirus, 95% CI [50%, 60%]. Overall, participants perceived different probabilities of infection for vulnerable and nonvulnerable individuals (as shown by the gap between the blue and red vertical lines in Figure 3). Supporting H2.2, participants judged that older adults were more likely to contract coronavirus than children, F(1, 391) = 13.99, p < .001, ηp² = 0.04. Probability perception was also shaped by participants' risk interpretation (left vs. right panels of Figure 3): Participants who responded that "risk" referred to the chance of infection perceived a higher probability of infection for both groups (compared to participants who did not interpret risk that way), F(1, 391) = 13.20, p < .001, ηp² = 0.03. The difference in probability perception for the two groups (children [nonvulnerable] vs. older adults [vulnerable]) was slightly larger in people who believed risk referred to chance of infection (compared to people who did not), but this interaction term was not statistically significant, F(1, 391) = 1.84, p = .176, ηp² = 0.01. Effect of Exposure to Risk Messaging on Inferences About Probability of Infection for Nonvulnerable Individuals We expected to replicate Study 1, where exposure to the risk message increased the gap between the perceived probability of infection for children and older adults by increasing probability perception for older adults and decreasing it for children (H3.2). However, the trends in Figure 3 showed that participants exposed to the message (compared to those who were not) lowered their probability perceptions for both groups. The analyses also did not support our expectation, as we did not find that exposure to the risk message interacted significantly with the interpretation of the risk message or the vulnerability of the person to predict infection probability perception, respectively: F(1, 391) = 0.92, p = .338, F(1, 391) = 0.43, p = .512. Exposure to the risk message did not have a main effect on risk perception either, F(1, 391) = 1.09, p = .609. For comparison purposes with Study 1, we conducted an independent samples t-test evaluating the effect of exposure to the risk message on how participants who believed that "risk" referred to the chance of infection judged children's probability of infection. This tested our hypothesis about unintended inferences more directly and showed at the descriptive level that the average difference was similar to Study 1 in direction and magnitude: Participants who were exposed to the risk message felt that children were 7% less likely to be infected by the virus compared to participants in the control group.
However, this difference was not statistically significant. Probability Perceptions and Health Recommendations Advice to stay home was still high on average, especially for older adults (M older adults = 3.43, SD = 0.66; M children = 2.85, SD = 0.90). Supporting H4.2, participants' advice for children to stay home was positively correlated with probability perceptions of infection and hospitalization, with a larger correlation with hospitalization probability, r = 0.19, p = .006; r = 0.25, p < .001. There was also a positive correlation between advice for older individuals to stay at home and probability perception of infection and hospitalization, again larger, and only statistically significant, for hospitalization probability, r = 0.08, p = .248; r = 0.17, p = .017. Interim Discussion Overall, Study 2 showed that the majority of our sample misinterpreted the "increased risk from coronavirus" as referring to the chance of infection in addition to (rather than only) the chance of severe symptoms. This was similar to the finding in Study 1. As with Study 1, Study 2 (with the vulnerability group manipulation conducted between-subjects) also found that participants estimated that older adults had higher chances of infection than children, which indicated that the difference in probability perception previously observed was not simply due to participants repeating estimates in the within-subjects design of Study 1. However, Study 2 did not replicate the interaction effect found in Study 1, where exposure to a risk message increased participants' perceived probability of infection for vulnerable adults and, critically, reduced it for nonvulnerable individuals among participants who interpreted "risk" as chance of infection. The effect sizes for exposure to the message on these participants' infection probabilities for children were both small (d = −0.27 in Study 1 and d = −0.20 in Study 2) and we had a lower chance of detecting the effect in Study 2, where the vulnerability manipulation was between-subjects, reducing statistical power. 7 In Study 3, therefore, we aimed to replicate the test of this hypothesis while scaling up the statistical power by using a larger sample and the original within-subjects design for age groups. Study 3 In this study, we hypothesized that people would misinterpret what is at increased risk in an ambiguous version of the risk message (focusing on the statement "some people are at increased risk from COVID-19"; H1.3). We also hypothesized that this misinterpretation would lead people to infer that nonvulnerable individuals were less likely than vulnerable ones to be infected (H2.3), and that seeing the message (compared to not seeing it) would lead people to believe nonvulnerable individuals were less likely to be infected (H3.3). In line with the previous studies, we expected that probability perceptions would be related to health recommendations (H4.3). We used a within-subject design to have greater statistical power to detect the effect of the message found in Study 1. Study 3 was conducted with U.K. residents from Prolific on a single day (July 28, 2020). At this point, the U.K. government had lifted the lockdown measures set in March 2020: Outdoor gatherings were allowed for up to six different households (from June 13, 2020) and indoor ones for six people from up to two different households (from July 4, 2020).
The government had also announced that from August 1, 2020, it would no longer provide support (e.g., deliveries of essential supplies) for vulnerable individuals to self-isolate. Participants We powered our sample based on our smallest hypothesized effect: The effect of exposure to the message (vs. no message) on probability perceptions of infection for children. We recruited 432 participants, which gave 90% power to detect a small effect size between two independent groups (Cohen's d = 0.28, α = .05). Participants were 67% female (32% male, 1% other or did not disclose), 84% White. Ages ranged from 18 to 71 years (M = 33.2, SD = 12.0 years). Further characteristics are reported in Table 2. Design, Materials, and Procedure Participants completed the study online. Participants provided the probability of infection from coronavirus for a child and for an adult over 70 years old (see exact question wording in Table 3). Participants evaluated these probabilities for the child and the older adult on separate pages, in a counterbalanced order for each participant. We manipulated whether participants saw a risk message before completing the probability perception questions (n = 216 each group). We simplified the message and presented it as text, shown on the same page as the questions: 8
REMINDER: Some people are at increased risk from COVID-19
People who are over 70 years of age or people with a preexisting medical condition are at higher risk from COVID-19.
Participants also completed the same risk interpretation question from Study 1, but did so either before or after the probability perception questions (random allocation). This allowed us to check whether their risk interpretation might have been a function of having seen the risk message or not. 9 Participants then completed a health recommendation task. They evaluated whether they would advise a healthy 15-year-old child and an older adult who was 75 years old (presented in random order on the same page) to take three protective measures: Self-isolate at home, social distance at all times, wear a face mask whenever on any outing. Participants gave their recommendations on a 5-point scale anchored at "not at all" and "yes completely." This scale was expanded to five points to mitigate the ceiling effect observed in Studies 1 and 2, where participants very largely agreed that older adults should "stay at home." It included two additional protective measures not included in Studies 1 and 2, to reflect changes in the U.K. government's guidance at the time of Study 3: "social distance at all times" and "wear a face mask whenever on any outing." This accounted for the fact that by the time of Study 3, the stay at home order had been lifted and replaced by this advice. The scale had satisfactory reliability for both individuals (.61 and .72). Finally, participants provided sociodemographic information. Statistical Analyses We analyzed the proportion of participants believing that the increased risk referred to the chance of infection (H1.3). To replicate the analyses from Studies 1-2, we ran analyses of variance (ANOVAs) on participants' probability estimates, including as fixed factors vulnerability (vulnerable vs. nonvulnerable), exposure to the risk message (vs. no message), and risk interpretation (interpreted as meaning infection vs. not), and their interactions.
8 We simplified the message and presented it on the same page as the questions to increase its salience while participants made their judgments.
9 The presentation order had no significant effect on the proportion of participants' responses for each of the three interpretations of "at increased risk": chance of infection, χ²(2) = 3.51, p = .173; chance of hospitalization, χ²(2) = 1.13, p = .570; chance of carrying the virus, χ²(2) = 2.59, p = .274. We also checked whether the presentation order affected probability perceptions; there was no effect on probability perceptions for either group, or on the effects of the other independent variables, all ps > .10.
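For readers who want to reproduce the kind of a priori power analysis described in the Participants section above, here is a minimal sketch using statsmodels. The test direction and allocation ratio are assumptions stated in the code (the article does not report them), so the returned n need not match the study's exactly:

```python
# Minimal sketch of an a priori power analysis for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.28,          # Cohen's d reported in the text
    alpha=0.05,                # significance level
    power=0.90,                # desired power (1 - beta)
    ratio=1.0,                 # assumed equal group sizes
    alternative="two-sided",   # assumed two-sided test
)
print(f"Participants needed per group: {n_per_group:.0f}")
```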
We also ran preregistered group comparisons to specifically test H2.3 (the effect of risk interpretation on probability estimates for older adults) and H3.3 (the effect of the risk message on probability estimates for children). We tested H2.3 with an independent samples t-test comparing participants who interpreted "risk" as referring to the probability of infection by the virus versus those who did not. We tested H3.3 with an independent samples t-test comparing the perceived probability of children being infected for participants exposed to the risk message (compared to those who were not). Finally, we tested the link between probability perception and health recommendations (H4.3) with a correlational analysis as in the previous studies. Interpretation of the Term Risk and Associated Probability Perception As expected in H1.3, around 95% of participants interpreted that risk referred to the chance of severe symptoms requiring hospitalization (95% CI [93%, 97%]; see breakdown in Table 4). More than half the sample believed that being "at increased risk" meant having an increased chance of being infected with coronavirus, 60%, 95% CI [56%, 65%]. The ANOVA found that overall, participants perceived older adults were more likely to be infected than children (as shown by the gap between the blue and red vertical lines in Figure 4), F(1, 425) = 152.99, p < .001, ηp² = 0.27. Risk interpretation also affected infection probability perception, with participants who thought there was an increased risk of infection (vs. those who did not) estimating higher probabilities of infection overall, F(1, 425) = 12.82, p < .001, ηp² = 0.03. The difference in infection probability perception for children versus for older adults was larger in people who believed risk referred to probability of infection (vs. people who did not), with a significant interaction between these variables, F(1, 425) = 41.73, p < .001, ηp² = 0.09. The independent samples t-test found that as hypothesized (H2.3), participants who interpreted "risk" as referring to the chance of infection (compared to those who did not) perceived a greater likelihood that older adults would be infected (M risk is infection = 32.07, SD = 28.72; M risk is not infection = 17.70, SD = 25.41), t(388.92) = 5.43, p < .001, d = 0.52. Effect of Exposure to Risk Message on Inferences About Probability of Infection for Nonvulnerable Individuals We expected that participants who saw the risk message would perceive that children were less likely to be infected compared to a no-message control condition, especially when they interpreted that "risk" referred to the chance of infection. However, as shown in Figure 4, this was not the case.
The ANOVA did not find that exposure to the risk message had a significant main effect, nor any significant interaction with vulnerability, nor a three-way interaction with risk interpretation and vulnerability, F(1, 425) = 1.32, p = .252, ηp² < 0.01; F(1, 425) = 0.05, p = .831, ηp² < 0.01; F(1, 425) = 0.93, p = .336, ηp² < 0.01, respectively. The preregistered t-test of the effect of message on probability estimates for children was also not statistically significant (M message = 13.24, SD = 21.08; M control = 10.49, SD = 16.89), t(410.44) = −1.50, p = .135, d = 0.14. Probability Perceptions and Health Recommendations Participants' probability perceptions for infection were significantly positively correlated with protective health recommendations for children and older adults, r = .23, p < .001 and r = .22, p < .001. Interim Discussion Overall, Study 3 showed that, consistent with findings from Studies 1 and 2, more than half of the people surveyed misinterpreted the "increased risk from coronavirus" as the chance of infection rather than just the chance of severe symptoms. Across three studies, this interpretation was connected with a probability perception gap: The perception that nonvulnerable individuals (e.g., children) were less likely to contract COVID-19 than vulnerable individuals (e.g., adults over 70 years old). Study 4 proposed solutions to reduce this gap. However, in Study 3, we did not find evidence that participants who were exposed to the risk message (vs. those who were not) inferred that nonvulnerable individuals were less likely to become infected by the virus. This was in contrast with Study 1, where the message widened the gap in probability perception for the different groups, but consistent with Study 2. We suspected this might be related to high exposure to the same message outside of our studies and addressed this issue in Study 5. Study 4 Studies 1-3 pointed out the pitfalls of current communication strategies. In Study 4, we sought to provide a solution that addressed these pitfalls. We crafted an improved message that specifically mentioned what the increased risk referred to (i.e., "at increased risk of developing severe symptoms"). We expected that this would improve clarity as previous work indicated that risk messages were better understood when they were specific about risk events (Gigerenzer & Edwards, 2003;Gigerenzer et al., 2005;Gigerenzer & Galesic, 2012). We adapted the original risk message from Studies 1 and 2 to indicate the risk was for developing severe symptoms. We compared the new message to the original to test the hypothesis that the new message would counteract the misinterpretation that the risk was the probability of infection (H1.4). We also hypothesized that the new message (compared to the original) would reduce the discrepancy in probability perceptions that a vulnerable and nonvulnerable individual would become infected (H2.4). Study 4 was conducted with U.K. participants using Prolific on a single day (February 22, 2021). At this time, the U.K. had entered its third period of lockdown (since January 6, 2021), with all people to stay home except for limited reasons. Materials, Procedure, and Design Participants completed the study online. They were randomly allocated to view the original risk message from Studies 1 and 2 (n = 238) or an improved message that clarified the risk event (n = 236; see Figure 5).
Participants read the text in the message on a separate page and then proceeded to the probability perception task, where the message always remained at the top of the page above two questions about the probability of infection and probability of hospitalization. We included both probabilities to check that the improved new message did not affect the probability perception for hospitalization. Participants estimated the probability of infection and hospitalization, always presented in this order on the same page. They did these estimations for a child and for an adult over 70 on separate pages, with the order of presentation counterbalanced for each participant. Participants then proceeded to the risk interpretation question, in which they saw the risk message corresponding to their experimental condition and indicated their interpretation of the risk in the same way as in Studies 1-3. Finally, participants provided sociodemographic information. Statistical Analyses We ran three confirmatory analyses to test our hypotheses about the effect of the improved risk message compared to the original message. First, to test H1.4, we used a χ² test. Second, to test H2.4, we used an independent-samples t-test on the difference in probability perception between vulnerable and nonvulnerable individuals between message conditions. We also directly assessed the effect of the message on probability estimates using a mixed ANOVA on infection probability perception with vulnerability (within-subject), message (between-subjects), and their interaction as fixed factors. Results and Discussion Does the Improved Message Reduce the Ambiguity? As shown in Table 5, fewer participants interpreted the "risk" as being about the chance of infection based on the improved message compared to the original message. 10 However, H1.4 was not supported as this reduction was not significant, χ²(2, N = 474) = 4.94, p = .085.
[Table 5: Percentage of participants answering "Yes," "No," and "I do not know" to three different interpretations of what "risk" means in the risk messages in Study 4 (% that also selected the hospitalization interpretation: 90%, 88%). Note. Participants responded to each of the three interpretations presented in a matrix table. a Intended interpretation.]
Probability Perception of Contracting the COVID-19 Infection for Vulnerable and Nonvulnerable Individuals as a Function of Message Condition As shown in Figure 6, on average, participants judged adults over 70 to be more likely to contract the virus than children. The pattern was similar in the original risk message condition as well as in the improved risk message condition. Older adults were on average perceived as 15% more likely to be infected than children based on both the original and the improved message (M original message = 15.60%, SD = 26.79%; M improved message = 14.79%, SD = 25.90%), t(472) = 0.33, p = .739, d = −0.03. We expected that the improved message compared to the original would increase the perceived probability of infection of children and would reduce that of older adults. This was the case for children but, contrary to H2.4, the same was also true for older adults; in the ANOVA, neither the main effect of the message nor its interaction with vulnerability was statistically significant, F(1, 472) = 3.04, p = .082, ηp² = 0.01, F(1, 472) = 0.11, p = .739, ηp² < .001. The new message also had no detrimental effect on the perceived probability of hospitalization compared to the original, as older adults were overall still perceived to have a higher chance than children of being hospitalized, F(1, 472) = 989.97, p < .001, ηp² = 0.68, with the message having no effect on this perception difference, F(1, 472) = 1.89, p = .169, ηp² = 0.004, nor a main effect on hospitalization probability perception, F(1, 472) = 3.19, p = .075, ηp² = 0.01.
[Figure 6: Participants' perceived probability of infection from COVID-19 in Study 4. The figure shows perceived COVID-19 infection probabilities for vulnerable and nonvulnerable individuals as a function of participants' interpretation that risk referred to the probability of infection (no/do not know, left panels, n = 240 vs. yes, right panels, n = 234) and as a function of the experimental message condition (original risk message, top panels, n = 238 vs. improved risk message, lower panels, n = 236). The shaded area shows the frequency density of responses. Solid vertical lines give the mean probability estimate in each condition. COVID-19 = coronavirus disease. See the online article for the color version of this figure.]
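The H1.4 test above compares, between the two message conditions, how many participants endorsed each interpretation. A minimal sketch of such a test with SciPy follows; the counts are hypothetical placeholders, not the study data:

```python
# Minimal sketch of a chi-square test of independence for H1.4-style data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: original vs. improved message condition.
# Columns: "yes", "no", "do not know" responses to the infection interpretation.
counts = np.array([
    [126, 98, 14],   # hypothetical counts, original message
    [108, 116, 12],  # hypothetical counts, improved message
])
chi2, p, dof, _expected = chi2_contingency(counts, correction=False)
print(f"chi2({dof}, N = {counts.sum()}) = {chi2:.2f}, p = {p:.3f}")
```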
Infection Probability Perception as a Function of Risk Interpretation As shown in Figure 6, we found that people who interpreted "risk" as chance of infection (compared to those who did not) showed a much wider gap in probability perception between children and older adults, which was supported by a significant two-way interaction between risk interpretation and vulnerability, F(1, 470) = 88.75, p < .001, ηp² = 0.16. Compared to people who did not endorse the infection interpretation, those who endorsed it believed that older adults were significantly more likely to be infected, but there was only a nonsignificant numerical difference in the belief that children were less likely to be infected, t(453.29) = 6.48, p < .001, d = 0.60 and t(472) = 1.84, p = .067, d = −0.17. The improved message did not have a significant effect on changing the gap in probability perceptions between older adults and children, F(1, 470) = 3.76, p = .053, ηp² = 0.01. The interaction effect of message and risk interpretation was also not significant, F(1, 470) = 2.99, p = .085, ηp² = 0.01. Interim Discussion Study 4's results replicated the finding that people who misinterpreted "risk" as chances of infection exhibited a wider gap in infection probability perception for children and older adults. The improved message that explicitly stated the "risk" was of severe symptoms showed, descriptively, more intended interpretations that this risk referred to the probability of severe illness and not the probability of infection, but this improvement was not statistically significant. The new message also did not significantly affect probability perceptions. These results may indicate that the message was still not sufficiently improved, or that even when a message explicitly explained what is at risk, the term "risk" remained ambiguous, or that people had internalized the unintended interpretation that "risk" in the context of COVID-19 could refer to infection due to the repeated use of this ambiguous risk message over the past year. This possibility was tested in Study 5 by focusing on a new hypothetical context. Study 5 In the four studies reported above, we showed that U.K. residents misunderstood the health authorities' "at increased risk" message as meaning a higher chance of being infected, not just of developing severe symptoms. However, these studies brought mixed evidence about whether messages focusing on the risk to vulnerable individuals could decrease the perceived risk to others, the nonvulnerable majority. While participants perceived nonvulnerable individuals were less likely than vulnerable individuals to be infected, we found no causal evidence that this was because of the ambiguity of the risk message. Being exposed to the risk message did not significantly reduce people's perception of how likely children were to be infected compared to a no-message (control) condition in Studies 2 and 3, although it did in Study 1. This inconsistency possibly occurred because of repeated exposure to this risk message throughout the pandemic. Aligned with this interpretation, we noted that the average perceived probability that children would be infected in the control conditions decreased over time from 29% in Study 1 (7 April), to 23% in Study 2 (30 April), to 10% in Study 3 (28 July). In Study 5, therefore, we introduced a new hypothetical epidemic context.
We hypothesized that in this novel context, an "increased risk" message would lead participants to misinterpret the risk, but improving the risk message could decrease the ambiguity in interpretation (H1.5). We also hypothesized that an "at increased risk" message compared to no message would lead participants to perceive a higher probability of infection for vulnerable individuals and a lower probability for nonvulnerable individuals, thereby causing a probability perception gap about the risk of infection (H2.5). However, we hypothesized that an improved risk message compared to an ambiguous message could reduce this gap (H3.5). Participants We recruited 454 participants; the sample size was determined by an a priori power analysis to detect a medium effect size of d = 0.34 (assuming α = .05, 1−β = 0.90) in a two-group comparison between a control and an experimental condition (approximately n = 151 per group). Participants' ages ranged from 17 to 71 years (M = 34.39, SD = 12.35 years). Participants were 69% female (30% male, 1% other), 82% White, and 54% had a university degree. Design, Materials, and Procedure Participants were randomly allocated to one of three conditions: A control condition or two experimental conditions, depicted in Figure 7. Participants read a basic scenario about a hypothetical new "virus Xora." In the control condition (n = 152), participants were not exposed to any risk communication message with the scenario. In the two experimental conditions, the basic scenario was accompanied by a risk communication message that described men as being "at increased risk" from this new virus, which was either an ambiguous message (n = 151) or an improved message that specified the risk of severe symptoms (n = 151). These messages are shown in Figure 7. After reading this information, participants assessed the probability that a man or a woman would contract the hypothetical new "Virus Xora" on the same page as the scenario (and risk message in the experimental conditions) using the response scales shown in Figure 7. Participants in all conditions subsequently provided their interpretation of what was more likely to happen when people were
"at increased risk," as described in the risk message (shown in Figure 7). This risk interpretation measure was the same as Studies 1-4. Finally, participants completed sociodemographic information. Participants completed the online study at the end of a separate study with other scenarios (e.g., estimating the likely costs of a road project, judging a hypothetical GP visit). Statistical Analyses We used a χ² test for H1.5, that fewer people would misinterpret "risk" as chance of infection when exposed to the improved message compared to the ambiguous message and control conditions. To test the role of the risk messages on participants' infection probability estimates (H2.5-H3.5), we first tested in the control group whether participants perceived men and women as equally likely to contract the new virus using a paired-samples t-test. To test our hypotheses that the risk message would impact probability perception, we first conducted an ANOVA using message condition (between-subjects), vulnerability (within-subjects), and their interaction as fixed factors. Then, to more specifically compare the different vulnerable groups, we conducted independent samples t-tests comparing participants' infection probability estimates for men and women between the three message conditions.
[Figure 7: The hypothetical scenario, control and experimental conditions, and exact questions used in Study 5.]
In the control (no message) and the ambiguous risk message conditions, 66%-74% of participants believed the risk referred to the probability of infection (see Table 6). In contrast, based on the improved risk message, participants' interpretations were more consistent and only a minority (19%) endorsed the interpretation that being at "increased risk" meant having a greater probability of being infected by the virus (see Table 6). Indeed, supporting H1.5, the participants exposed to the improved message endorsed the "infection" interpretation significantly less often than those who saw no message or an ambiguous message, χ²(N = 303, df = 2) = 78.71, p < .001 and χ²(N = 302, df = 2) = 99.61, p < .001, respectively. Table 7 summarizes the differences in participants' perception of the probability that men (vulnerable) and women (nonvulnerable) would become infected, as a function of the risk message. Participants perceived men to be more likely than women to be infected by the virus across all conditions, but this was more pronounced in the two experimental conditions that described men as "more at risk." Indeed, the ANOVA showed a significant main effect of gender and an interaction effect between gender and message condition, F(1, 451) = 122.21, p < .001, ηp² = 0.21; F(2, 451) = 40.10, p < .001, ηp² = 0.15, respectively. Infection Probability Estimates as a Function of Risk Message and Gender Our t-tests of the key comparisons showed that based on the ambiguous message (middle panel of Figure 8), participants perceived men were more likely to be infected than women (+16%), t(150) = 10.02, p < .001, d = 0.45. Based on the improved message (rightmost panel of Figure 8), participants still believed that men were more likely to be infected, but the difference was smaller and similar to that in the control condition (leftmost panel of Figure 8, +3%), t(150) = 4.43, p < .001, d = 0.12.
As expected in H2.5, compared to the control condition, the ambiguous message showed, descriptively, that participants believed that men were more likely to be infected by the virus (+6%) and that women were less likely to be infected (−4%), but these differences were not statistically significant, t(301) = 1.91, p = .057, d = 0.22 and t(294) = −1.31, p = .191, d = −0.15. Finally, as expected in H3.5, the improved (compared to the ambiguous) message showed, descriptively, a decrease in the perception that men would be infected by the new virus (−5%) and an increase in the perception that women would be infected (+4%); however, these differences were not statistically significant, t(300) = −1.48, p = .139, d = −0.17, and t(300) = 1.47, p = .142, d = 0.17.

Interim Discussion

In Study 5, we used a hypothetical new virus with arbitrarily assigned vulnerable groups to test whether, at the start of a pandemic, ambiguous risk messages would affect people's interpretations of what was at risk and their subsequent probability perceptions of infection. We found the expected effect on interpretations: 74% of participants who viewed the ambiguous message (similar to the ones for COVID-19) interpreted "risk" as the chance of infection, but this was reduced to 19% among those who saw an improved message that specified that the risk was about suffering severe symptoms. Compared to the control and improved message conditions, exposure to the ambiguous message (like those used by various authorities at the beginning of the COVID-19 pandemic) also led to a larger difference in perceived infection probability between a vulnerable and a nonvulnerable individual: participants believed vulnerable individuals had a higher chance of infection than nonvulnerable ones. However, the pairwise comparisons only found small and nonsignificant evidence that participants perceived a nonvulnerable individual's infection probability to be lower after seeing an ambiguous message relative to the control (d = −0.15) and improved message (d = −0.17).

General Discussion

In five studies, we investigated two issues concerning the terminology "at increased risk," which is often used to describe epidemic risks, and tested how to remedy these issues. We tested if people were confused about whether the risk referred to becoming infected or developing severe symptoms from an infection, and if this confusion led to the perception that vulnerable groups were more likely to be infected. We also tested whether "at increased risk" messages unintentionally lowered (compared to one's baseline perception) the perceived probability that individuals not classed as vulnerable would be infected.

Confusion About What Risk Is "Increased"

Two probabilities are critical in responding to a pandemic: the probability of becoming infected (related to how easily the virus spreads) and the probability of suffering severe symptoms because of the infection (related to how consequential the virus is). With the coronavirus pandemic, some groups are more likely to suffer severe symptoms, but there are no intrinsic characteristics that predispose groups to contracting the infection, and therefore everyone needs to adopt appropriate behaviors to avoid contracting and spreading the infection (WHO, 2020b).
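To make this distinction concrete, consider a toy calculation with entirely invented numbers: if everyone shares the same infection probability, but groups differ in the probability of severe symptoms given infection, then only the severe-outcome risk is "increased" for the vulnerable group.

```python
# Illustrative (made-up) numbers showing why the two risks must be kept apart.
p_infection = 0.10  # identical for everyone in this toy example
p_severe_given_infection = {"vulnerable": 0.20, "nonvulnerable": 0.01}

for group, p_sev in p_severe_given_infection.items():
    # P(severe outcome) = P(infection) * P(severe | infection)
    p_severe = p_infection * p_sev
    print(f"{group}: P(infection) = {p_infection:.0%}, "
          f"P(severe outcome) = {p_severe:.1%}")
# "At increased risk" is true of the severe outcome (2.0% vs. 0.1%)
# even though the chance of infection is identical (10% vs. 10%).
```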
Across the world, health organizations have aimed to protect the most vulnerable (e.g., older individuals or those with long-term medical conditions) by explaining that they are "at higher risk" from COVID-19, meaning that they are more likely to develop severe symptoms. In this work, we posited that the term "risk" is ambiguous in this context because it can be taken as referring either to the probability of severe symptoms or to the probability of infection. Our findings show that most U.K. residents recognized that the term risk referred to the probability of severe symptoms, but half of them also believed that it referred to the probability of infection. This inconsistent interpretation of the higher COVID-19 risk highlights the importance of clearly identifying what a risk refers to and is in line with prior research on risks related to other medical conditions or even more ubiquitous events such as weather forecasts (Fischhoff et al., 2009; Gigerenzer & Galesic, 2012).

In Study 4, we tried to improve the risk message to reduce the misinterpretation of "risk" as the probability of infection. Although fewer individuals who saw this message believed the risk was of infection (46%), this was not significantly fewer than among participants who saw the original message (53%). However, in the context of a new hypothetical illness (Study 5), an improved message did significantly reduce the misinterpretation that "higher risk" means a greater chance of being infected. The difficulty of correcting misunderstandings of risk in Study 4 could thus have stemmed from participants having already been frequently exposed to ambiguous communication about COVID-19 by that point.

Misinterpretations of what is at "risk" are consequential for probability perceptions. While, overall, participants tended to believe that vulnerable individuals, such as older adults, had a higher probability of coronavirus infection than nonvulnerable ones, this was especially the case for participants who misinterpreted "risk" as the chance of infection. At first glance, this pattern does not seem very problematic if it leads to more caution for vulnerable people. However, the flip side of this result is the perception that nonvulnerable individuals, meaning most of the population, have a lower chance of coronavirus infection. With this perception, the nonvulnerable majority may be more reluctant to follow health protection guidance (Bruine de Bruin & Bennett, 2020). Our data also showed an overlap between interpreting the risk as the chance of infection and the chance of carrying the virus and thereby infecting others (i.e., contagion), hinting that nonvulnerable individuals could also be perceived as less likely to spread the virus than vulnerable individuals.

Does Focusing on Vulnerable Groups Lure Nonvulnerable Individuals Into a False Sense of Safety?

In communicating risk, it is rarely the case that speakers intend for recipients to interpret information as no more and no less than what is communicated, because language is used pragmatically (Horn, 2006). Listeners interpret the meaning of words with reference to context (e.g., Moxey & Filik, 2010; Moxey & Sanford, 1993a) and expectations (e.g., Moxey et al., 2009; Moxey & Sanford, 1993b). As a result, speakers' choices of words are believed to convey implicit pieces of information (e.g., Hilton, 2008; Keren, 2007; Sher & McKenzie, 2006; Windschitl et al., 2017).
Building on this pragmatic approach, we expected that emphasizing that some people are "at increased risk" could be taken to mean that others are less at risk than previously believed. We focused on the problematic possibility that people would perceive nonvulnerable individuals (i.e., most of the population) to have a lower than expected chance of contracting the new coronavirus. Importantly, in our work, we found across studies that children (who are considered nonvulnerable) were persistently perceived to have a lower infection probability than older adults (who are considered vulnerable), and this gap in probability perception was quite wide. What caused the gap was, however, less clear. We found mixed evidence that directly exposing participants to an "at increased risk" message caused a lower perceived probability of infection for children. On average, participants believed children were less likely to be infected following exposure to the message in Studies 1 and 2, but these differences were small and not statistically significant (Study 1: d = −0.27, Study 2: d = −0.20), and the effect was not replicated in Study 3 (d = 0.14, opposite direction).

There are several reasons why we only observed small and inconsistent effects of exposure to the message. In our experiments, we focused on children as an exemplar of nonvulnerable individuals, since evidence was clear and remained consistent over time that children were less likely than adults, especially older adults, to suffer severe COVID-19 symptoms. However, the evidence on the likelihood of infection was more mixed.

Table 6. Percentage of participants answering "yes," "no," and "I do not know" to three different interpretations of what "risk" means in an "at increased risk" message in Study 5.

Participants' perceived difference in infection probability for children and older adults in particular could be largely driven by earlier reports that there were lower COVID-19 incidence rates among children (Stokes et al., 2020; Williams et al., 2021). This does not fully explain why further exposure to a message would widen the difference in probability in Study 1, but it may be that people's knowledge and experience of the disease shapes their interpretation of messages about it. This interpretation subsequently affects probability perception, as we found that participants who interpreted the message to mean a higher risk of infection (compared to those who did not) perceived a greater difference in the probabilities that a child or older adult would be infected. If interpretations of risk messages are in part shaped by knowledge of the disease, this could also explain why the interpretations proved difficult to correct in Study 4.

Another nonexclusive possibility for the lack of effect of experimental exposure to the message is that participants had frequent exposure to the message in the media, meaning that participants in the control group would also have seen it outside of the experiment. All our participants might therefore have already internalized the messages, thus limiting the ability of message exposure within our experiment to affect participants' perceived probabilities. Indirectly supporting this hypothesis, we found that over time, participants gave decreasing estimates of the probability that a child would be infected, presumably due to repeated exposure to risk messages (29% in early April, 23% in late April, and 10% in late July).
This trend of lower infection estimates is at odds with the emerging evidence over this period that children's susceptibility to infection was greater than initially believed (Zimmermann & Curtis, 2020). However, when we tested in Study 5 whether people expected that not being vulnerable to a new illness meant being less likely to become infected, we still found a small effect, similar to Studies 1 and 2 (Cohen's d = −0.20), which was again not statistically significant. These findings could mean that the risk message does not have the hypothesized undesired effect of lowering risk perceptions for nonvulnerable individuals compared to a baseline, or that the effect is small. Future research focusing on larger sample sizes would be more appropriate to identify or rule out such a small effect. Evaluating its practical significance could also be important, since risk perception is so pervasive and important for decisions that small effects can be consequential.

Practical Consequences of Risk Perception for Safeguarding Behaviors

An accurate perception of one's likelihood of contracting a virus or suffering severe symptoms from it is important for making good quality health decisions. We found that lower risk perceptions led to weaker health recommendations, especially for younger age groups. Focusing on the risk of severe outcomes plays an important role in promoting health behaviors in general (Brewer et al., 2007; Sheeran et al., 2014), and in particular for COVID-19 (Bruine de Bruin & Bennett, 2020; De Neys et al., 2020). In line with this research, we found that the risk of severe symptoms seemed to weigh particularly on people's recommendations for children, in particular, to self-isolate. However, a key danger of COVID-19 is its infectiousness and the need for people to stop the spread of coronavirus by protecting others, and not just themselves, from the possibility of severe symptoms. Perceptions about one's likelihood of contracting the disease are therefore highly important for dealing with the pandemic over the longer term.

Conflating the likelihood of infection with that of severe symptoms is problematic because it means people are making important decisions based on information that may not be correct. For example, if people believe that younger individuals are inherently less likely to catch the virus than older ones, they may underestimate their own potential to infect others and spread the virus, including to more vulnerable individuals. Those in nonvulnerable groups could also believe it less necessary to adhere to onerous social distancing guidelines if they perceive themselves as less at risk of catching the virus. Further, emphasizing the communication to vulnerable groups places the onus of protective behaviors, and potentially the blame, on them rather than on the majority, who are actually the most likely to infect others because of their greater numbers (e.g., only 13% of the U.K. population are over 70 years of age; UK Office for National Statistics, 2020). The perception of risks to different groups (vulnerable and nonvulnerable) influences public support for actions such as "shielding" and reopening schools, and can therefore impact public health decisions.

Disambiguating the risk of severe symptoms and the risk of infection is also important for health communication about vaccines. Determining the efficacy of a vaccine is complex and may involve its ability to protect against infection, severe symptoms, or both (Hodgson et al., 2021).
Some vaccines could reduce the risk of severe symptoms despite not reducing the risk of infection (e.g., in the case of some new coronavirus variants; Roberts, 2021), so a blanket belief that both risks are similarly reduced would be erroneous. Yet people do believe that they can mix freely with others after being vaccinated (Syal, 2021), highlighting the need to communicate more clearly whether the chances of infection are indeed reduced, along with the reduction in chances of severe symptoms.

How Can the Risks of Infection and Severe Symptoms Be Clarified?

Our results highlight that the term "risk" is ambiguous, so any health message about risk should fully explain what is at risk. In the context of an infectious disease such as COVID-19, this means communicating clearly and separately about the risk of infection and the risk of severe symptoms. It is important to be clear that some groups may be inherently at increased risk of severe symptoms, whereas all groups have an inherently equal chance of contracting the illness; behavioral or situational factors, however, can cause an increased risk of infection (e.g., the nature of people's work and/or housing situation). Our results show that communicating clearly at the start of an epidemic (Study 5) could help people better identify the intended risk event in the message. However, ambiguous communication, especially over a longer period of time, may result in misinterpretations of the risk event, which are not easily corrected (Study 4). Going forward, clearer messaging about infection versus severe symptom probabilities could be applied to communication about new risks, for example, those posed by newly emerging virus variants, to ensure these are better understood.

Limitations and Further Research

Our research provides evidence that can inform the effective communication of the various risks of COVID-19 to the public by making what is at risk explicit and by addressing all groups involved, especially those more likely to spread the virus. However, several limitations should be considered in further research. Our studies focused on risk perception for different age groups because these were distinct categories for which all health services had communicated risks and advice at the time of the studies (such as in Table 1). However, other factors that increase the risk of severe symptoms (e.g., having a respiratory health condition) apply across age groups. We would expect similar results if we asked participants to estimate the risks to people with preexisting health conditions versus those without, but it would be good to extend this research to test this specifically. We also acknowledge that this research took place over an evolving pandemic situation where very little was known with certainty about the virus. Most of our studies were also conducted while the government directive was for people to stay at home. People's risk perceptions were likely shaped by changing information in the news, on public health websites, and indeed by nationwide restrictions on movements.

Conclusion

Effective risk communication requires people receiving information to interpret the message accurately. In the case of COVID-19, people need to be aware of the chances of two different events: the chance of being infected and the chance of suffering severe symptoms from the illness.
Our studies highlight the consequences of ambiguous risk communication, where people interpret a message that some people are "at increased risk from COVID-19" to mean these people have a higher chance of infection in addition to suffering severe symptoms. Problematically, people who harbored this interpretation that an increased risk refers to the chance of infection perceived that nonvulnerable individuals were less likely to become infected by the new coronavirus than vulnerable older adults, resulting in a larger perceived difference in the chance of infection between vulnerable and nonvulnerable individuals. Future research should establish whether this belief could lead nonvulnerable individuals to incorrectly assume that their own chances of becoming infected are lower, and the extent to which viewing risk messages could aggravate or correct this probability perception. Nonetheless, our results show that communicating risks clearly at the start of a disease outbreak could reduce the interpretational ambiguity of risk messages.
2021-12-16T16:54:29.396Z
2021-12-14T00:00:00.000
{ "year": 2022, "sha1": "1e209e22073295949a2593fecee2ff7d37d7db76", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1037/xap0000416", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "e8980b22fbaa38bc82257c1c4dee646a54cc9170", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244539719
pes2o/s2orc
v3-fos-license
Molecular modelling studies to suggest novel scaffolds against SARS-CoV-2 target enzymes

In this study, a molecular modelling study of previously synthesized compounds against SARS-CoV-2 target enzymes was performed. A subset of 156 compounds from an in-house database has been subjected to molecular modelling studies against the SARS-CoV-2 ADP-ribose phosphatase (ADRP, NSP3), papain-like protease (PLpro), and uridine-specific endoribonuclease (NSP15) enzymes. We have identified one compound that is expected to inhibit the SARS-CoV-2 ADRP enzyme and one compound that is expected to inhibit the NSP15 enzyme.

INTRODUCTION

The novel coronavirus (SARS-CoV-2) causes a respiratory tract infection that was first detected in Wuhan, China [1]. This novel virus causes the severe acute respiratory disease named Coronavirus Disease 2019 (COVID-19). COVID-19 is highly transmissible and has spread fast all over the world. The disease presents with symptoms such as high fever, cough, shortness of breath, headache, sore throat, runny nose, muscle and joint pain, weakness, loss of the sense of smell and taste, and diarrhoea.

SARS-CoV-2 is an enveloped, single-stranded RNA virus that contains four structural proteins, namely Spike (S), Envelope (E), Membrane (M), and Nucleocapsid (N) [2-4]. The S protein consists of the S1 and S2 subunits. The S1 subunit is involved in ACE2-mediated virus attachment, while the S2 subunit provides membrane fusion. The N protein also plays an important role in viral entry. The E protein is necessary for viral assembly, and the M protein promotes spike incorporation as well as the facilitation of virion production [2]. In addition, the SARS-CoV-2 virus contains 15 non-structural proteins (NSP1-NSP15). Each NSP has a role in the life cycle and pathogenicity of the virus [5]. Here we focus on ADP-ribose phosphatase (ADRP, NSP3), papain-like protease (PLpro), and a uridine-specific endoribonuclease (NSP15).

ADRP is responsible for both the induction of viral replication and interference with the host immune response [6,7]. Papain-like protease (PLpro) is one of the major cysteine proteases of SARS-CoV-2; it processes the polyproteins translated from the viral RNA genome to yield the active functional proteins necessary for viral replication [8]. Some studies show that a coronavirus endoribonuclease delays activation of the host sensor system. The uridine-specific endoribonuclease is present in all coronaviruses. It processes viral RNA to avoid detection by RNA-activated host defence systems. Therefore, it is considered a promising drug target [9,10]. This manuscript describes molecular modelling studies of 156 previously synthesized [11-20] and in-house available compounds against the SARS-CoV-2 ADP-ribose phosphatase (ADRP, NSP3), papain-like protease (PLpro, NSPX), and uridine-specific endoribonuclease (NSP15) enzymes. Our results suggest that two compounds not previously investigated as inhibitors of SARS-CoV-2 enzymes may inhibit these targets: one against ADRP and one against NSP15.
RESULTS AND DISCUSSION

Molecular modelling studies were performed to investigate whether the compounds under investigation could interact with the SARS-CoV-2 target proteins ADRP, PLpro, and the uridine-specific endoribonuclease. To this end, the compounds were docked into the active sites of the target enzymes. The highest scoring docked poses that showed complementarity with the active site were selected for 50 ns molecular dynamics (MD) simulations to investigate the stability of the pose, the dynamic binding interactions, and the binding energy.

Modelling studies against ADRP

All compounds were docked into the ADRP structure in complex with ADP-ribose (pdb: 6W02, 1.50 Å). Compounds 1 and 2 showed docked poses that suggested binding to the active site of ADRP (Figure 1). The docked pose of 1 [13] forms hydrogen bonds with the backbones of Val49, Leu126 and Ala129 (Figure 1A). In addition, the ligand forms π-π stacking with the sidechain of Phe132. The docked pose of 1 was stable during a 50 ns molecular dynamics simulation. The hydrogen bonds between the ligand and the backbones of Val49, Leu126 and Ala129 were observed during 41%, 67% and 62% of the simulation time, respectively (Figure 2A). In addition, the ligand formed an interaction with Leu126 via a bridging water molecule. The π-π stacking with the sidechain of Phe132 was present during 35% of the simulation. Hydrophobic interactions occurred with Ala38 and Ile131. The calculated binding energy fluctuates between approximately -70 kcal/mol and -50 kcal/mol during the simulation (Figure 2B).

The docked pose of 2 [16] in the active site of ADRP shows that the oxygen of the ligand's thiazolidinone forms a hydrogen bond with the backbone nitrogen of Leu126. The backbone oxygen of Ala129 forms a hydrogen bond with a nitrogen of the ligand. In addition, another oxygen of the ligand forms a hydrogen bond with a backbone nitrogen, and the phenol ring of the ligand forms π-π stacking with Phe132 (Figure 1B). The docked pose of 2 was selected for a 50 ns MD simulation.

During a 50 ns molecular dynamics simulation the docked pose of 2 was stable. Hydrogen bonds with Leu126 and Ala129 were observed during 64% and 97% of the simulation time, respectively. In addition, an interaction via bridging water molecules occurs between the ligand's oxygen and Ile131 during 63% of the simulation time. The hydrogen bond seen in the docked pose occurs during 13% of the simulation time, and interactions via bridging water molecules occasionally occur between the ligand's two oxygen atoms and Val49 (<20% of the simulation time). Additionally, hydrogen bonds occur with Ser128 and Phe132 during 11% and 12% of the simulation time, respectively (Figure 3A). The calculated binding energy fluctuates around approximately -75 kcal/mol during the simulation (Figure 3B).

The cocrystal structure of APR in the active site of ADRP reveals that APR's adenosine group forms hydrogen bonds with the sidechain of Asp22 and the backbone of Ile23 (Figure S1, A). The ligand's carbonyl and phosphate groups form hydrogen bonds with the backbone nitrogens of Val49, Ser128 and Ile131. The ribose moiety forms hydrogen bonds with the backbone nitrogens of Asn40 and Gly48 and an aromatic hydrogen bond with the sidechain of Phe132. The hydrogen bonds with the ribose moiety of the ligand are expected to be stronger compared to the aromatic hydrogen bond.
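The hydrogen-bond occupancies quoted throughout this section (e.g., bonds present during 41%, 67%, and 62% of frames) can be computed from a saved trajectory. Below is a minimal, hypothetical sketch using the open-source MDAnalysis package rather than the authors' Desmond tooling; the file names, ligand residue name, and geometric cutoffs are assumptions.

```python
# Hypothetical sketch: per-contact hydrogen-bond occupancy from an MD trajectory.
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis

# Topology and trajectory file names are placeholders, not the authors' files.
u = mda.Universe("adrp_ligand.pdb", "adrp_ligand.xtc")

hbonds = HydrogenBondAnalysis(
    universe=u,
    between=["resname LIG", "protein"],  # "LIG" is an assumed ligand resname
    d_a_cutoff=3.5,           # donor-acceptor distance cutoff (Å)
    d_h_a_angle_cutoff=150,   # donor-hydrogen-acceptor angle cutoff (degrees)
)
# Note: depending on the topology, explicit donors_sel/hydrogens_sel/acceptors_sel
# may be needed instead of relying on automatic guessing.
hbonds.run()

# Each row of results.hbonds: frame, donor ix, hydrogen ix, acceptor ix, dist, angle.
n_frames = len(u.trajectory)
frames_per_pair = {}
for frame, donor, hydrogen, acceptor, dist, angle in hbonds.results.hbonds:
    d = u.atoms[int(donor)]
    a = u.atoms[int(acceptor)]
    key = tuple(sorted((f"{d.resname}{d.resid}", f"{a.resname}{a.resid}")))
    frames_per_pair.setdefault(key, set()).add(int(frame))

for pair, frames in sorted(frames_per_pair.items()):
    print(pair, f"occupancy: {100 * len(frames) / n_frames:.0f}%")
```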
During a 50 ns molecular dynamics simulation the binding pose as observed in the cocrystal structure was stable. The hydrogen bonds with Asp22 and Ile23 were observed during 99% and 98% of the simulation time, respectively. The hydrogen bond with Val49 was observed during 77% of the simulation period. In addition, interactions via bridging water molecules occur between the ligand's phosphate and Ala38 and Ala50 during 82% and 84% of the MD simulation time. Furthermore, several other interactions via bridging water molecules occur between the ligand and the active site during 30-55% of the MD simulation (Figure S1, B). The calculated binding energy increases from approximately -100/-90 kcal/mol to approximately -80/-70 kcal/mol during the simulation (Figure S1, C).

Modelling studies against PLpro

Only compounds 3 and 4 [16] were candidates for MD simulation. The docked pose of 3 in the active site of PLpro shows that hydrogen bonds are formed with Asp164, Arg166 and Tyr268 (Figure 4A). In addition, π-π stackings are formed with the sidechains of Tyr264 and Tyr268. The docked pose of 4 indicates the presence of hydrogen bonds with Tyr264 and Tyr273 (Figure 4B). Both poses were subjected to 50 ns MD simulations, and neither the binding interactions nor the binding energy suggested binding of the compounds to PLpro.

Figure 4. The docked poses of 3 (A) and 4 (B) in the active site of PLpro (pdb: 6W9C).

Modelling studies against the uridine-specific endoribonuclease (NSP15)

All compounds were docked into the active site of NSP15 and four docked poses were subjected to MD analysis. The docked pose of 5 [14] shows that the ligand's two nitrogen atoms form hydrogen bonds with the sidechain and backbone of Ser294. The other nitrogen atom of the ligand forms a hydrogen bond with the sidechain of Lys290. Additionally, the oxygen atom of the ligand forms a hydrogen bond with the backbone of Gly248 (Figure 5A). Analysis of the MD trajectory indicates that the docked pose is not stable, and the binding energy increases towards approximately -30 kcal/mol (Figure 6). The docked pose of 6 [14] shows the presence of a hydrogen bond with Ser294 and π-π stacking with Tyr343 (Figure 5B). Again, the MD simulations indicate that the poses are not stable, and the binding energies do not suggest strong binding (Figure 7).

CONCLUSION

Previously synthesized compounds were subjected to molecular modelling studies, consisting of docking studies and molecular dynamics simulations, to investigate their potential to inhibit SARS-CoV-2 target enzymes. The results suggest that one compound (i.e., 2) may be able to inhibit ADRP and one compound (compound 7) may be able to inhibit NSP15.

Preparation of protein structures

The crystal structures of SARS-CoV-2 ADP-ribose phosphatase (ADRP, NSP3, pdb: 6W02), papain-like protease (PLpro, pdb: 6W9C) and uridine-specific endoribonuclease (NSP15, pdb: 6WLC) were obtained from the RCSB Protein Data Bank. Subsequently, each structure was prepared using the protein preparation tool of Schrödinger (v2021-1, Schrödinger, Inc., New York, USA). All water and buffer molecules were omitted. Subunit A was retained and all other subunits, if present, were omitted. Subsequently, hydrogen atoms were added, and the system was minimized using the OPLS4 forcefield.
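As a rough open-source illustration of the selection steps just described (retrieve the structure, retain subunit A, omit waters and buffer molecules), the following Biopython sketch mirrors the logic; the Schrödinger preparation itself (protonation, OPLS4 minimization) is proprietary and is not reproduced here. The kept ligand residue name is an assumption.

```python
# Minimal sketch of the structure cleanup, assuming Biopython's Bio.PDB.
from Bio.PDB import PDBList, PDBParser, PDBIO, Select

KEEP_HET = {"APR"}  # cocrystallized ADP-ribose in 6W02 (assumed resname);
                    # keep it because the docking grid is defined around it.

class ChainAClean(Select):
    def accept_chain(self, chain):
        return chain.id == "A"          # retain subunit A only

    def accept_residue(self, residue):
        hetfield = residue.id[0]
        if hetfield == " ":
            return True                 # standard amino-acid residue
        # drop waters/buffer molecules, but keep the site-defining ligand
        return residue.resname in KEEP_HET

pdb_id = "6W02"  # 6W9C (PLpro) and 6WLC (NSP15) are handled the same way
PDBList().retrieve_pdb_file(pdb_id, pdir=".", file_format="pdb")
structure = PDBParser(QUIET=True).get_structure(pdb_id, f"pdb{pdb_id.lower()}.ent")

io = PDBIO()
io.set_structure(structure)
io.save(f"{pdb_id}_chainA_clean.pdb", ChainAClean())
```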
Docking Studies

The ligand set was prepared using the LigPrep tool of Schrödinger and minimized with the OPLS4 forcefield. Subsequently, all ligands were docked into the binding sites of the target enzymes. The binding sites were assigned as all residues within 5 Å of the cocrystallized ligand. Docking was performed using the Glide tool of Schrödinger with the SP settings. The three highest scoring poses were obtained for each ligand, and the poses were subsequently minimized using the Prime tool and the MM-GBSA approach. To this end, the ligand and all residues within 5 Å were unrestrained.

High scoring compounds that formed binding interactions (hydrogen bonds, electrostatic interactions, and hydrophobic interactions) and showed complementarity in shape and (a)polarity were selected for molecular dynamics (MD) simulations.

Molecular dynamics simulations

The ligand-enzyme complexes obtained with the docking procedure were subjected to a 50 ns MD simulation using Desmond. The complex was first placed in an orthorhombic box (at least 10 Å between complex and boundary), which was then filled with TIP5P water molecules and 0.15 M NaCl. The amount of Na+ or Cl- ions was adjusted to create a neutral system. Afterwards, all heavy atoms were restrained, and the system was minimized for 100 ps using the OPLS4 forcefield. Finally, the system was simulated for 50 ns under isothermal (Nosé-Hoover chain, 1 ps relaxation time) and isobaric (Martyna-Tobias-Klein, 2 ps relaxation time, isotropic coupling) conditions without restraints. Snapshots were saved every 100 ps. Finally, the ligand-protein binding interactions as well as the MM-GBSA binding energy were calculated.

Figure 2. A) The binding interactions of 1 with the active site of ADRP during a 50 ns MD simulation. B) The MM-GBSA binding energy. Hydrophobic amino acids are indicated in green. Hydrogen bonds are indicated in purple and π-π stacking in green. Solvent accessible ligand atoms are indicated with a grey shading. The contact surface with hydrophobic residues is indicated in green.

Figure 3. A) The binding interactions of 2 with the active site of ADRP during a 50 ns MD simulation. B) The MM-GBSA binding energy. Hydrophobic amino acids are indicated in green and cationic residues are indicated in red. Hydrogen bonds are indicated in purple. Solvent accessible ligand atoms are indicated with a grey shading. The contact surface with cationic residues is indicated in red.

Figure 6. A) The binding interactions of 5 with the active site of NSP15 during a 50 ns MD simulation. B) The MM-GBSA binding energy. Hydrophobic amino acids are indicated in green and cationic residues are indicated in purple. Hydrogen bonds and π-π stackings are indicated in purple and green, respectively. Solvent accessible ligand atoms are indicated with a grey shading. The contact surface with cationic residues is indicated in red.

Figure 7. A) The binding interactions of 6 with the active site of NSP15 during a 50 ns MD simulation. B) The MM-GBSA binding energy. Hydrophobic amino acids are indicated in green and cationic residues are indicated in purple. Hydrogen bonds and π-π stackings are indicated in purple and green, respectively. Solvent accessible ligand atoms are indicated with a grey shading. The contact surface with cationic residues is indicated in red.

Figure 8.
A) The binding interactions of 7 with the active site of NSP15 during a 50 ns MD simulation. B) The MM-GBSA binding energy. Hydrophobic amino acids are indicated in green, polar residues are indicated in blue and cationic residues are indicated in red. Hydrogen bonds and π-π stackings are indicated in purple and green, respectively. Solvent accessible ligand atoms are indicated with a grey shading. The contact surface with cationic residues is indicated in red.
2021-11-25T16:19:23.019Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "c2de5b6025c040b99dc9238603327334230700a5", "oa_license": null, "oa_url": "https://jrespharm.com/pdf.php?id=965", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e50de37568bade3fae87a9858bdd5b39c6ec01a7", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
259045096
pes2o/s2orc
v3-fos-license
MSFCA-Net: A Multi-Scale Feature Convolutional Attention Network for Segmenting Crops and Weeds in the Field

Weed control has always been one of the most important issues in agriculture. Research based on deep learning methods for weed identification and segmentation in the field provides the necessary conditions for intelligent point-to-point spraying and intelligent weeding. However, due to limited and difficult-to-obtain agricultural weed datasets, complex changes in field lighting intensity, mutual occlusion between crops and weeds, and the uneven size and quantity of crops and weeds, existing weed segmentation methods are unable to perform effectively. In order to address these issues in weed segmentation, this study proposes a multi-scale convolutional attention network for crop and weed segmentation. In this work, we designed a multi-scale feature convolutional attention network for segmenting crops and weeds in the field, called MSFCA-Net, using various sizes of strip convolutions. A hybrid loss designed based on the Dice loss and focal loss is used to enhance the model's sensitivity towards different classes and improve the model's ability to learn from hard samples, thereby enhancing the segmentation performance of crops and weeds. The proposed method is trained and tested on soybean, sugar beet, carrot, and rice weed datasets. Comparisons with popular semantic segmentation methods show that the proposed MSFCA-Net has a higher mean intersection over union (MIoU) on these datasets, with values of 92.64%, 89.58%, 79.34%, and 78.12%, respectively. The results show that under the same experimental conditions and parameter configurations, the proposed method outperforms other methods and has strong robustness and generalization ability.

Introduction

Agriculture is one of the fundamental human activities and ensures global food security. However, weeds in farmland can cause severe damage to the growth and yield of crops, because weeds directly compete with crops for sunlight, water, and nutrients. In addition, they also become a source for spreading diseases and pests in crops [1]. Weed control helps in improving agricultural production efficiency, reducing the waste of agricultural resources, and protecting the ecological environment, thereby promoting sustainable agricultural development [2]. Over the years, various weed control measures, such as agricultural prevention and control, plant quarantine, manual weeding, biological weed control, and chemical weed control, have been explored alongside the development of agricultural technology [3]. Conventional physical weeding operations are costly and inefficient. Currently, chemical weed control methods are the most widely used [4]. However, traditional chemical weed control involves spraying herbicides uniformly across the field. This not only leads to high costs but also causes environmental pollution due to excessive herbicide use [5]. The development of information and automation technologies has opened a new era of weed control. It is of great significance to perform precise mechanical and chemical weed control measures to quickly and effectively eliminate weeds [6]. With the development of digital imaging technologies and the advancement of robotic intelligent agricultural machinery, such as field weed-removal robots, which utilize image processing technology, great results have been achieved [7,8].
In 2015, a German robotics company called Deepfield [9] launched the first generation of weeding robots that identify weeds by using cameras. Precise weed removal methods, such as selective weeding, specific-point herbicide spraying, and intelligent mechanical hoeing, effectively reduce the harm of pesticides and improve the quality of agricultural products [10]. Please note that weed identification is crucial for intelligent weed removal. Vision-based weed identification methods mainly use digital image processing techniques to differentiate various crops based on different features extracted from crop images [11]. Ahmed et al. [12] used support vector machines (SVM) to identify six types of weeds by using a database containing 224 images and achieved satisfactory accuracy under certain experimental conditions using a combination of optimal feature extractors. Sabzi et al. [13] used a machine vision prototype based on video processing and meta-heuristic classifiers to identify and classify potatoes and five types of weeds. Brilhador et al. [14] used edge detection techniques to detect weeds in ornamental lawns and sports turf, aiming to reduce pesticide usage. Various filters were tested, and the sharpening (I) filter with the aggregation technique and a cell size of 10 provided the best results. A threshold value of 78 yielded an optimal performance. However, slight differences in the results were observed between ornamental lawns and sports turf. Parra et al. [15] used UAVs with digital cameras to detect charlock mustard weed in alfalfa crops using RGB-based indices, which proved effective and avoided confusion with soil compared to NDVI. Combining RGB indices with NDVI reduced overestimation in weed identification. This methodology can generate weed cover maps for alfalfa and translate them into herbicide treatment maps. However, these methods are unable to perform effectively in complex field environments. For instance, image acquisition in real environments may suffer from uneven exposure due to strong or weak lighting conditions, resulting in reduced recognition accuracy. Moreover, crops and weeds have different sizes and shapes, and the generalization ability of image processing systems in complex backgrounds is poor, resulting in suboptimal recognition results. Moreover, digital image processing techniques require manual feature selection, and the segmentation performance of the models is susceptible to interference from human experience.

Recently, convolutional neural networks (CNN) have greatly promoted the progress of computer vision [16]. Contrary to traditional machine learning algorithms, deep learning algorithms automatically perform feature selection and have a higher accuracy as well. These methods have been widely applied in agricultural image processing [17]. CNNs have been used to predict and estimate the yield of mature-stage rice based on remote sensing images acquired using unmanned aerial vehicles (UAVs) [18]. A deep learning-based robust detector for the real-time identification of tomato diseases and pests was proposed in [19]. Hall et al. [20] constructed a CNN for carrot and weed classification during the seedling stage. The authors used texture information and shape features, significantly improving the accuracy of plant classification. Olsen et al. [21] constructed a large, public, multi-class weed image dataset (DeepWeeds) and used ResNet50 for weed classification.
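As a rough illustration of the transfer-learning recipe used in classification studies such as Olsen et al.'s, the sketch below fine-tunes a torchvision ResNet50 on an image-folder dataset; the directory layout, class count, and hyperparameters are hypothetical placeholders rather than any original authors' settings.

```python
# Minimal transfer-learning sketch (assumed paths and hyperparameters).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("weeds/train", transform=tfm)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new head

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:   # one epoch shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```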
Since the proposal of FCN [22], image semantic segmentation models such as UNet [23], DeepLabV3 [24], and DeepLabV3Plus [25] have emerged and been widely applied in the agricultural weed segmentation field [26]. Semantic segmentation models quickly extract features from crops and weeds, without requiring complex background segmentation or data model establishment during the extraction process. You et al. [27] proposed a segmentation network for segmenting the sugar beet crop. Yu et al. [28] proposed several networks for accurately detecting weeds in dogtooth grass plants. Sun et al. [29] fused near-infrared and RGB images into a four-channel image by analysing the feature distribution of a sugar beet dataset, and proposed a multi-channel depth-wise separable convolution-based segmentation and recognition method for sugar beet and weed images. The authors achieved real-time segmentation by using MobileNet. Zou et al. [30] proposed a simplified UNet-based semantic segmentation algorithm to separate weeds from soil and crops in images.

Please note that the aforementioned weed segmentation algorithms are typically based on popular semantic segmentation models, which use fine-tuning and conventional channel or spatial attention mechanisms to improve the performance of the networks. These attention mechanisms usually involve concatenating residuals or using 1 × 1 or 3 × 3 convolutions to implement channel- or spatial-wise attention in the network. However, these improvement methods neglect the role of multi-scale feature aggregation in network design. The previous literature shows that multi-scale feature aggregation is crucial for segmentation tasks [31,32]. Consequently, these approaches fail to effectively connect features in the spatial and channel dimensions of the upper and lower layers, and suffer from various problems, such as image illumination interference, differences in size between crops and weeds, and mutual occlusion between crops and weeds. These problems severely affect the performance of field weed segmentation. Moreover, the existing deep learning-based weed segmentation models often require large amounts of data for training, but data for agricultural weed segmentation are scarce and difficult to obtain. It is noteworthy that the performance of these models is not efficient for small training sets.

In response to the shortcomings of previous research and existing problems, we design a weed segmentation algorithm that can be applied to various complex environments and crops, providing algorithmic support for intelligent weed control. This work proposes a field weed segmentation model based on a convolutional attention mechanism. The proposed method uses large asymmetric strip convolution kernels to extract features. The proposed method achieves faster and more accurate field weed segmentation, and addresses multi-scale and complex-background weed segmentation tasks. The rest of the manuscript is organized as follows. In Section 2, we present the data used in this work and the proposed network. In Section 3, we present the experimental results and analysis. Section 4 discusses the research. Finally, this work is concluded in Section 5.

Soybean Dataset

The soybean dataset is acquired from a soybean field in the National High-Tech Agricultural Park of Anhui Agricultural University located in Luyang District, Hefei City, Anhui Province. We select soybean seedlings aged 15-30 days for data collection.
The equipment used for image acquisition includes a DJI handheld gimbal, model Pocket 2. The acquisition device is positioned about 50 cm above the ground. The video resolution is set at 1920 × 1080 with a frame rate of 24 frames per second (fps). Afterwards, we extract frames from the video to obtain 553 images to construct the soybean dataset. In order to ensure faster training and convenient manual annotation, we resized the images to 1024 × 768. This resolution strikes a balance between computational efficiency and preserving sufficient visual detail for accurate image analysis, making it a commonly used resolution in many computer vision applications and datasets. We randomly assign the images to training, validation, and test sets in a ratio of 7:2:1. This allocation ratio allows for a reasonable balance between the training, validation, and testing requirements within the limited dataset. Selecting 70% of the data for the training set provides an adequate number of samples for model training. The validation set, comprising 20% of the data, is used to adjust the model's hyperparameters and for fine-tuning. We reserved 10% of the data as the test set, providing a sufficient number of samples to accurately evaluate the model's performance. We manually annotated the images using the open-source tool Labelme. Each annotated image corresponds to an original image, with different colours representing different categories. The soybean seedlings are annotated in green, weeds are annotated in red, and the soil is annotated in black, as shown in Figure 1.

Sugar Beet Dataset

The sugar beet dataset is sourced from BoniRob [33]. The images in this dataset are captured at a sugar beet farm near Bonn, Germany. In 2016, a pre-existing agricultural robot was used to record the dataset, which focused on sugar beet plants and weeds. The robot was equipped with a JAI AD-130GE camera, with an image resolution of 1296 × 966 pixels. The camera is positioned underneath the robot's chassis, with a mounting height of approximately 85 cm above the ground. The data collection spanned three months, with data being recorded approximately three times a week. The robot captured multiple stages of the sugar beet plants during their growth. The official dataset contains tens of thousands of images. In this work, the labels are divided into three categories: sugar beet crops, all weeds, and background. For convenience, we use 2677 randomly selected images to create the sugar beet dataset. We randomly split the dataset into 70% training, 20% validation, and 10% test sets. As shown in Figure 2, green annotations represent the sugar beet crop, red annotations represent weeds, and black annotations represent soil.

Carrot Dataset

The carrot dataset is sourced from the CWFID dataset [34]. The images in this dataset are collected at a commercial organic carrot farm in northern Germany. The images are captured during the early true-leaf growth stage of carrot seedlings using a JAI AD-130GE multispectral camera, which can capture both visible and near-infrared light. The images have a resolution of 1296 × 966 pixels. During the acquisition process, the camera is positioned vertically above the ground at a height of approximately 450 mm, with a focal length of 15 mm. In order to mitigate the effects of uneven lighting, artificial illumination is used in the shaded area beneath the robot to maintain consistent illumination intensity across the images.
The dataset consists of 60 images, and we randomly split 70% of the samples for training, 20% for validation, and 10% for testing. As shown in Figure 3, green annotations denote the carrot seedlings, red annotations represent the weeds, and black annotations represent the soil and background.

Rice Dataset

The rice dataset is sourced from the rice seedling and weed dataset [35]. The images in this dataset have a resolution of 912 × 1024 pixels and were captured using an IXUS 1000 HS camera with a 36-360 mm f/3.4-5.6 IS STM lens. The camera was 80-120 cm above the water surface of the fields during image capture. The dataset contains 224 images with corresponding annotations in 8-bit greyscale format. We convert the original annotations into 24-bit RGB format, and randomly split the dataset into training, validation, and test sets in a ratio of 7:2:1. As shown in Figure 4, green annotations represent the rice seedlings, red annotations represent the weeds, and black annotations represent the water or other backgrounds.

Model

The encoder-decoder architecture is commonly used in weed segmentation tasks. The encoder is responsible for the rough classification of an image. It transfers the extracted semantic information to the decoder and maps the low-resolution features learned in the encoding stage to the high-resolution pixel space through skip connections by integrating the local and global context. The decoder gradually upsamples the feature maps to restore the resolution to the size of the input image, and outputs the predicted classifications for each pixel.

The attention mechanism is an adaptive selection process widely used in deep learning. It allows the network to focus on important regions of an image, thereby improving the performance and generalization ability of the model. In semantic segmentation, attention mechanisms can be categorized into channel attention and spatial attention [36]. Different types of attention serve different purposes. For instance, spatial attention focuses on important spatial regions [37-39], and channel attention aims to selectively attend to important channels or feature maps [40,41]. However, existing weed semantic segmentation models often overlook the adaptability of the channel dimension. Inspired by the visual attention network [42], SegNext [43] re-examined the features considered by successful segmentation models and identified several key components for improving the performance of the model. SegNext proposes to use a large-kernel attention mechanism to construct channel and spatial attention. The authors show that convolutional attention is a more effective way to encode contextual information compared to the self-attention mechanisms used in the Swin transformer [44]. Therefore, we use a convolutional attention mechanism to construct the proposed weed segmentation network.

The convolutional attention mechanism consists of multiple sets of different convolutional kernels and deep convolutions, as shown in Figure 5. The 3 × 3 deep convolutional layer aggregates local information, while multiple sets of different depth-wise strip convolutions are used to capture multi-scale contextual information. The 1 × 1 convolutions establish the connections between different channels. In the proposed multi-scale convolutional attention (MCA), larger kernel sizes are used to capture global features. The MCA consists of three sets of strip convolutional kernels with different sizes.
Each set is composed of two large convolutional kernels, of sizes 1 × 5 and 5 × 1, 1 × 11 and 11 × 1, and 1 × 17 and 17 × 1, combined in parallel to form multi-scale kernels. The proposed MCA can be mathematically expressed as follows:

F_out = Conv_{1×1}( ∑_{i=0}^{3} MSK_i( Conv_{3×3}(F_in) ) ) ⊗ F_in, (1)

where F_in represents the feature after passing through a 1 × 1 convolution and GELU activation, F_out is the output of the attention map, ⊗ denotes element-wise matrix multiplication, Conv_{1×1} represents a 1 × 1 convolution operation, and Conv_{3×3} represents a 3 × 3 convolution operation. MSK_i, i ∈ {0, 1, 2, 3}, denotes the i-th branch in Figure 5, where MSK_0 represents the direct connection used to preserve the residual information. In each branch, two depth-wise strip convolutions are used to approximate standard depth-wise convolutions with larger kernels, as the strip convolution is lightweight and serves as a complement to grid convolutions, assisting in the extraction of strip-like features [45,46].

We used the convolutional attention mechanism MCA described above to construct an MSFCABlock consisting of an MCA and an FFN network, as shown in Figure 6. The MSFCABlock strengthens the feature association between the encoder and decoder. In the MSFCABlock, first, the concatenated feature is passed through a 3 × 3 convolution and batch normalization (BN). Then, it is connected with the output of the MCA module by using a residual connection. Subsequently, the feature is processed by the feed-forward network (FFN) with a residual connection. The FFN structure in the MSFCABlock maps the input feature vectors to a high-dimensional space and then applies a non-linear transformation by using an activation function, resulting in a new feature vector. This new feature vector contains more information compared to the original feature vector. The global contextual modelling of the multi-layer perceptron (MLP) and the large convolutions capture global contextual features from long-range modelling, thus allowing the proposed MSFCABlock to effectively extract features.

The overall architecture of the proposed MSFCA-Net is shown in Figure 7. The proposed network consists of an encoder and a decoder. The encoder uses a VGG16 network as the backbone, where the blue blocks represent the convolutional layers. Since we use a VGG16-based encoder, the convolutional layers are fully convolutional. The pink blocks represent the max-pooling layers, and the green blocks represent the upsampling layers. We use the transpose convolution method for upsampling, which can learn different parameters for different tasks, thus making it more flexible compared to other methods, such as bilinear interpolation. The yellow blocks represent concatenation, and the brown blocks represent the proposed MSFCABlock. The proposed MSFCABlock combines features from different layers of the encoder during the decoding process, resulting in excellent and dense contextual information integration for weed segmentation, as well as richer scene understanding. It enhances the role of multi-scale feature aggregation in network design. The large convolutions with small parameter sizes also reduce the number of network parameters.

Loss

In order to achieve more accurate segmentation of crops and weeds, we designed multiple losses for training the model, including the plant loss, crop loss, weed loss, and crop-weed loss. In this work, the plant loss in the loss function considers crops and weeds as the same class to calculate the loss. This helps in balancing the crops and weeds in the proposed model.
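Before detailing these loss terms, the following minimal PyTorch sketch illustrates the MCA module described above. It follows the SegNeXt-style formulation in Equation (1); the channel counts, padding choices, and the placement of normalization layers are illustrative assumptions rather than the exact published configuration.

```python
# Minimal sketch of the multi-scale convolutional attention (MCA), assuming the
# SegNeXt-style formulation of Equation (1).
import torch
import torch.nn as nn

class MCA(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.local = nn.Conv2d(ch, ch, 3, padding=1, groups=ch)  # 3x3 depth-wise
        self.branches = nn.ModuleList()
        for k in (5, 11, 17):                                    # strip kernel sizes
            self.branches.append(nn.Sequential(
                nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2), groups=ch),
                nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0), groups=ch),
            ))
        self.mix = nn.Conv2d(ch, ch, 1)                          # 1x1 channel mixing

    def forward(self, x):
        # x plays the role of F_in (feature after a 1x1 conv + GELU)
        u = self.local(x)
        att = u + sum(branch(u) for branch in self.branches)  # MSK_0 + 3 strip branches
        att = self.mix(att)
        return att * x                                        # element-wise product, F_out

# Quick shape check:
# y = MCA(64)(torch.randn(1, 64, 96, 128))  -> torch.Size([1, 64, 96, 128])
```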
The cross-entropy loss, which is commonly used for semantic segmentation based on CNNs, treats the high-frequency distribution of images as an important feature. However, when the number of foreground (crop and weed) pixels is much smaller than the number of background pixels, the background loss dominates, resulting in poor network performance. Therefore, in this experiment, the cross-entropy loss is not used to calculate the plant loss. Instead, the Dice loss [47] is employed, which originates from the Dice coefficient, a measure of set similarity often used for comparing the similarity between two samples. Please note that the Dice loss is a region-based loss, i.e., the loss and gradient for a certain pixel depend not only on its label and predicted value, but also on the labels and predicted values of other pixels. The Dice loss can be used in cases of class imbalance. Considering the characteristics of the Dice loss and the practical situation of this work, the plant loss in this research adopts the Dice loss in order to effectively calculate the overall loss of crops and weeds. The plant loss is computed as follows:

L_plant = 1 − (2 ∑_{B,H,W} y·ŷ) / (∑_{B,H,W} y + ∑_{B,H,W} ŷ), (2)

where y and ŷ represent the ground truth and predicted values of the pixels, respectively, and B, H, and W denote the channel size, height, and width of the image, respectively.

In this work, the crop and weed losses are calculated separately for crops and weeds. Considering that the loss calculation for crops and weeds also lacks high-frequency components, the use of the cross-entropy loss may result in inaccurate region detection. Therefore, the Dice loss is used to calculate these losses as well.

For the crop-weed loss, in order to efficiently optimize the severe class imbalance between the crop and weed categories, we use the focal loss. The focal loss [48] addresses the issues of imbalanced training samples and different learning difficulties of samples. It is a variant of the cross-entropy loss, as shown in (3), with the addition of parameters α and β:

L_CE = −log(p_t), (3)

L_focal = −α (1 − p_t)^β log(p_t), (4)

where p_t denotes the predicted probability of the true class. In (4), the role of α is to weight the loss of different classes of samples, where a higher weight is assigned to the class with fewer samples. On the other hand, the role of β is to handle the imbalance between easy and hard samples during the training process, where the number of easy samples is much larger than the number of hard samples. By adding the weight β, the loss of easy samples is significantly reduced, thus allowing the model to focus more on optimizing the loss of hard samples. Therefore, the total loss used for training combines the plant, crop, weed, and crop-weed losses. The total loss is a dimensionless metric, representing a measure of dissimilarity between the predicted and ground-truth segmentation, with a value of 0 indicating perfect agreement and a value of 1 indicating complete dissimilarity.

Parameter Evaluation

In this work, we focus on semantic segmentation, which is a pixel-level prediction. Therefore, we adopt MIoU, Crop IoU, Weed IoU, Background IoU, F1-score, precision, and recall as the evaluation metrics. Please note that IoU is an important metric for measuring the accuracy of image segmentation. It is defined as the ratio of the intersection of the predicted and ground truth sets to their union and is mathematically expressed as follows:

IoU = TP / (TP + FN + FP), (5)

where TP (true positive) represents the intersection of the ground truth and predicted values, and FN (false negative) + FP (false positive) represents the remainder of their union.
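Before turning to the remaining metrics, the hybrid objective described above can be made concrete. The following PyTorch sketch implements Dice terms for the plant, crop, and weed losses and a focal crop-weed term; the unweighted sum of the four components is an assumption, as the exact weighting is not specified here.

```python
# Hedged sketch of the Dice + focal hybrid loss (unweighted sum assumed).
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """pred, target: (B, H, W) probabilities / binary masks for one class."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def focal_loss(logits, target, alpha=0.25, beta=2.0):
    """Focal loss with class weight alpha and focusing parameter beta, as in (4)."""
    ce = F.cross_entropy(logits, target, reduction="none")
    pt = torch.exp(-ce)                      # probability of the true class
    return (alpha * (1 - pt) ** beta * ce).mean()

def total_loss(logits, target):
    """logits: (B, 3, H, W) for background/crop/weed; target: (B, H, W) labels."""
    prob = logits.softmax(dim=1)
    crop, weed = prob[:, 1], prob[:, 2]
    plant = ((target == 1) | (target == 2)).float()
    l_plant = dice_loss(crop + weed, plant)          # crops + weeds vs. background
    l_crop = dice_loss(crop, (target == 1).float())
    l_weed = dice_loss(weed, (target == 2).float())
    l_cw = focal_loss(logits, target)                # class-imbalance-aware term
    return l_plant + l_crop + l_weed + l_cw          # assumed unweighted sum
```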
Parameter Evaluation
In this work, we focus on semantic segmentation, which is a pixel-level prediction task. Therefore, we adopt MIoU, Crop IoU, Weed IoU, Background IoU, F1-score, precision, and recall as the evaluation metrics. Please note that IoU is an important metric for measuring the accuracy of image segmentation. It is defined as the ratio of the intersection of the predicted and ground-truth sets to their union, IoU = TP / (TP + FP + FN), where TP (true positive) counts the pixels in the intersection of the ground truth and the prediction, and TP + FN (false negative) + FP (false positive) counts the pixels in their union. MIoU is the average of the Crop IoU, Weed IoU, and Background IoU, i.e., the mean of the per-class intersection-over-union values. The pixel precision is the ratio of correctly classified pixels of a class to all pixels predicted as that class; it reflects the accuracy of positive predictions among the predicted positive samples and is calculated as precision = TP / (TP + FP). The average precision is the mean precision over the classes. The recall, also known as sensitivity or recall rate, reflects the probability of correctly identifying positive samples among the actual positive samples and is calculated as recall = TP / (TP + FN). The F1-score is the harmonic mean of precision and recall, F1 = 2 × precision × recall / (precision + recall).
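The metrics above follow directly from a per-class confusion matrix, as the following sketch illustrates (function names and the toy label maps are ours):

```python
import numpy as np

def confusion_matrix(pred: np.ndarray, truth: np.ndarray, n_cls: int = 3) -> np.ndarray:
    """cm[i, j] counts pixels with ground-truth class i predicted as class j."""
    cm = np.zeros((n_cls, n_cls), dtype=np.int64)
    np.add.at(cm, (truth.ravel(), pred.ravel()), 1)
    return cm

def segmentation_metrics(cm: np.ndarray) -> dict:
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp   # predicted as class c but belonging elsewhere
    fn = cm.sum(axis=1) - tp   # belonging to class c but predicted elsewhere
    iou = tp / (tp + fp + fn)          # assumes every class occurs in the data
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"IoU": iou, "MIoU": iou.mean(),
            "precision": precision.mean(), "recall": recall.mean(), "F1": f1.mean()}

# Toy check on a 3-class (background/crop/weed) label map.
rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=(64, 64))
pred = truth.copy()
pred[rng.random(truth.shape) < 0.1] = 0  # corrupt 10% of pixels toward background
print(segmentation_metrics(confusion_matrix(pred, truth)))
```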
Model Training
In this work, we used an Intel Core i9-13600KF CPU.

Testing on the Soybean Dataset
In order to validate the performance of the proposed MSFCA-Net, experiments were conducted on the soybean weed dataset and the results were compared with other state-of-the-art methods, including FCN, FastFcn, OcrNet, UNet, Segformer, DeeplabV3, and DeeplabV3Plus. Table 1 shows the performance metrics, including MIoU, Crop IoU, Weed IoU, Bg IoU (Background IoU), recall, precision, and F1-score, for the proposed MSFCA-Net and the aforementioned models based on the soybean weed test set. The quantitative analysis of the results shows that the proposed MSFCA-Net performs efficiently on the soybean dataset and outperforms the other models. The proposed model achieves MIoU, Crop IoU, Weed IoU, Bg IoU, recall, precision, and F1-score of 92.64, 95.34, 82.97, 99.62, 99.57, 99.54, and 99.55%, respectively, superior to the other models. In particular, the MIoU and Weed IoU of the proposed method are 2.6 and 6% higher, respectively, compared to the second-ranked OcrNet. This is due to the fact that the proposed MSFCA-Net utilizes skip connections to semantically map the low-resolution features learned during the encoding stage to the high-resolution pixel space, exhibiting high performance in the presence of sample imbalance and hard-to-learn classes. Therefore, the proposed model has a strong advantage over current popular models in dealing with sample imbalance and in its learning ability on hard-to-learn samples. Figure 8 shows partial segmentation results of our method, MSFCA-Net, and the other methods on the test dataset, where green represents soybean, red represents weeds, black represents background, and the labels denote manually annotated images. Analysis of the prediction results of the eight network models shows that MSFCA-Net produces more refined segmentation results and exhibits excellent noise-resistance capabilities. This is because MSFCA-Net integrates multi-scale features using the multi-scale convolutional attention mechanism, effectively incorporating local information and global contextual information. OcrNet, UNet, and Segformer tend to misclassify classes in the image and cannot accurately segment soybean seedlings and weeds. FCN, FastFcn, DeeplabV3, and DeeplabV3Plus produced segmentation results that reflect the basic morphology of the predicted classes, but with blurred edges and lower accuracy. Our proposed method had the best segmentation results, with clear contours, complete details, smooth images, and segmentation results closest to the manual annotation, indicating that the MSFCA-Net model can effectively and accurately segment weeds, soybean, and background in the images.

Testing on the Sugar Beet Dataset
Next, we conduct experiments on the sugar beet dataset, comprising a total of 2677 images, with 1874 images in the training set. Compared to the other datasets used in this work, the sugar beet dataset is relatively large and was used for training the eight different models. The results obtained using the test set are shown in Table 2. The results show that the performance of all other models on the sugar beet dataset is significantly lower compared to their performance on the soybean dataset. Although the sugar beet dataset has more training images, the background is more complex and the quality of data collection is relatively poor compared to the soybean dataset. As a result, training the other networks becomes more challenging. This indicates that the other models have higher requirements for the quality of training data and lack robustness in learning complex samples. On the other hand, the proposed MSFCA-Net still shows good performance in this challenging scenario. MSFCA-Net performs well in terms of the various metrics compared to the other models. The proposed model achieves MIoU, Crop IoU, and Weed IoU of 89.58, 95.62, and 73.32%, respectively, ahead of the second-ranked OcrNet by 3.5, 3.4, and 6.8%, respectively. Figure 9 shows the partial segmentation results of the various networks on the test set, where red represents the weeds, green represents the sugar beet plants, and black represents the background. The "Image" refers to the original sugar beet image, and "Label" refers to the original annotated image. As presented in Figure 9, although the other networks are able to recognize the categories, they show an inferior performance in terms of details and edge contours compared to the proposed MSFCA-Net. By comparing the pink boxes in these images, it can be observed that the other networks exhibit segmentation errors to varying extents, which is attributed to their poor performance in handling complex backgrounds. On the other hand, the segmentation results of the proposed MSFCA-Net are better, with more accurate classification of sugar beet, weeds, and the background.

Testing on the Carrot Dataset
In the carrot dataset, there are 60 images. Based on the 70% random split, only 42 images were used to train the network. The prediction results of the eight different networks based on the test set are shown in Table 3. The results show that with few samples and a high prediction density per pixel, training a model with limited samples is prone to overfitting and poor segmentation performance. Furthermore, the results show that the performance of the other models is relatively low, indicating that the existing models are not effective in crop and weed segmentation for small datasets with severe sample scarcity. However, the proposed model performs significantly better than the existing models. The proposed model obtains MIoU, Crop IoU, and Weed IoU of 79.34, 59.84, and 79.57%, respectively, higher than the second-ranked OcrNet by 4.2, 1.1, and 10.4%, respectively. This proves that the proposed model has a strong learning ability on small sample datasets.
Figure 10 shows the partial segmentation results of the various networks obtained using the test set, where green represents the carrot seedlings, red represents the weeds, and black represents the background. 'Image' is the original image and 'Label' is the corresponding manually annotated image. The results show that FCN, FastFcn, and DeeplabV3 not only produce inaccurate classification results on the test set, but also blurry segmentation of the weed and carrot crop contours. Although OcrNet, UNet, Segformer, and DeeplabV3Plus show some improvements in the contours of carrot seedlings and weeds compared to FCN, FastFcn, and DeeplabV3, they still have significant errors in class-wise segmentation prediction. This is because these networks lack the ability to learn from small sample datasets. In contrast, the proposed network's segmentation results on the test set are almost identical to the original annotated images. This further demonstrates the strong segmentation capability of the proposed MSFCA-Net on small datasets of complex and intertwined crops and weeds.

Testing on the Rice Dataset
The rice dataset contains 224 images. Based on the 70% split, 157 images were used to train the model. Figure 4 shows that rice seedlings have numerous and dense leaves, and the weeds in the water often overlap with each other, thus making it difficult for the segmentation network to distinguish between rice and weeds. Moreover, the rice seedlings grow in water, and the water produces reflections, further increasing the difficulty of weed segmentation. Such data demand high feature-extraction capabilities from the segmentation network, given the relatively coarse annotation provided by the official dataset. The results presented in Table 4 show that the performances of many networks are significantly impacted. The proposed model's performance in terms of Weed IoU is only 68.70%, while FastFcn, DeeplabV3, and DeeplabV3Plus achieve 69.54, 69.96, and 70.19%, respectively. These models outperform the proposed model in terms of Weed IoU because the proposed MSFCA-Net enhances the learning of difficult samples in the presence of class imbalance, resulting in a more balanced learning effect. Therefore, although the proposed model's performance in terms of Weed IoU is not as good as that of these models, it performs significantly better in terms of Crop IoU and Bg IoU, with an MIoU of 78.12%, which is 3.2% higher than the second-best model. Figure 11 shows that FCN, FastFcn, and Segformer perform poorly, as they fail to accurately predict the categories of rice and weeds, and many small weeds are not segmented. OcrNet, UNet, DeeplabV3, and DeeplabV3Plus are able to predict the categories of rice and weeds, but their contours are relatively rough. In contrast, the proposed MSFCA-Net uses the multi-scale convolutional attention mechanism to effectively fuse multi-scale features, resulting in more refined segmentation results on the rice test set and more accurate classification. Therefore, the proposed model demonstrates a more balanced performance when facing the complex background and class imbalance of the rice dataset, confirming the advantages of combining the convolutional attention mechanism with the hybrid-loss training mode in the proposed model.

Ablation Experiments
We conducted ablation experiments using the soybean dataset to evaluate the contribution of the different components of the proposed MSFCA-Net to the segmentation performance.
We quantitatively and qualitatively compared the MSFCA-Net with existing image semantic segmentation methods. The results of the ablation experiments are shown in Table 5, where BaseNet refers to the encoder-decoder network structure based on VGG16, and BABlock refers to a block using conventional 1 × 1 and 3 × 3 convolution kernels as an attention mechanism, which serves as a comparison with the MSFCABlock module in the MSFCA-Net. The hybrid loss refers to the hybrid loss proposed in this work. In total, six sets of comparative experiments were conducted: (1) BaseNet using the encoder-decoder structure based on VGG16; (2) adding the BABlock mechanism on top of the BaseNet model; (3) using the MSFCABlock module on top of the BaseNet model from experiment 1; (4) adding the hybrid loss to the BaseNet model; (5) using the BABlock module and adding the hybrid-loss training mode on top of the BaseNet model; (6) using the multi-scale convolutional attention mechanism with different kernel sizes and the hybrid-loss training mode on top of the BaseNet model. Table 5 shows the BaseNet, based on the VGG16 encoder-decoder structure, as the benchmark. In the second experiment, adding the BABlock mechanism to the BaseNet model slightly improves the performance of the model. However, in the third experiment, when we use the MSFCABlock with the complete multi-scale convolutional kernels, the performance of the model improves significantly, with MIoU, Crop IoU, and Weed IoU reaching 91.72, 94.29, and 81.28%, respectively. This represents an improvement of 1.37, 2.97, and 4.86%, respectively, compared to the model used in the second experiment, indicating that the proposed MSFCABlock has a strong capability in terms of multi-scale feature extraction. In the fourth experiment, we observe that the combination of Dice and focal losses in the hybrid-loss training mode improves the performance of the model, showing a higher performance when dealing with class imbalance. In the fifth experiment, even with the addition of the hybrid-loss training mode on top of the model used in the second experiment, the performance improvement is still limited. This is because the BABlock has limited capability in extracting multi-scale features using simple 3 × 3 and 1 × 1 convolutions, resulting in a lower segmentation accuracy. In the sixth experiment, we test the complete MSFCA-Net, and the results show a significant performance improvement, with MIoU, Crop IoU, and Weed IoU reaching 92.64, 95.34, and 82.97%, respectively. Compared to the BaseNet + BABlock + Hybrid Loss model in the fifth experiment, the improvements in MIoU, Crop IoU, and Weed IoU are 1.30, 1.72, and 2.18%, respectively. Compared to the BaseNet, the proposed MSFCA-Net shows even higher improvements of 4.31, 4.45, and 8.32% in terms of MIoU, Crop IoU, and Weed IoU, respectively. This is because the multi-scale convolutional kernels in MSFCA-Net focus more on multi-scale features, thus allowing better fusion of low- and high-level features and enhancing the model's ability in feature extraction. The above ablation experiments demonstrate the effectiveness of the proposed multi-scale convolutional kernels with the convolutional attention mechanism and the hybrid-loss training mode for weed segmentation. The proposed MSFCA-Net performs well in segmenting crops, weeds, and background in agricultural images from four agricultural image datasets, with a strong performance and generalization ability, demonstrating its superiority.
Discussion
Currently, there are many semantic segmentation methods for crop and weed segmentation based on the UNet model with simple modifications. Guo et al. [49] added a depth-wise separable convolution residual to a UNet, assigning different weights to each channel of the feature map obtained from the convolutional operations, and using adaptive backpropagation to adjust the size of the one-dimensional convolutional kernel. This module slightly increases the number of parameters, but improves the network's feature extraction performance and enhances attention on the channels. However, the ability of this network to extract deep features for weed segmentation is insufficient and it lacks spatial attention. This method's segmentation performance is greatly influenced by imbalanced categories of crops and weeds, and the generalization of the model is poor. Brilhador et al. [50] proposed a modified UNet for crop and weed segmentation. Their training approach involved using annotated patches of images to effectively identify specific regions of crops and weeds, enabling detailed shape segmentation. The use of patch-level analysis can lead to data augmentation effects. However, it is crucial to consider that the presence of crops and weeds within the patches can vary based on their sizes. Therefore, if the dataset being used has a lower ratio of crops and weeds, training the model may pose challenges. In summary, the performance of this approach is notably influenced by the characteristics of the dataset. Zou et al. [30] simplified the neural network by removing some deep convolutional layers from the UNet to achieve a lightweight network. After fine-tuning, the performance on their data exceeded that of the original UNet. This method reduces the computational complexity and the extraction of multi-scale deep features by the network. However, when facing issues such as complex backgrounds and overlapping crops and weeds, the network struggles to achieve a good segmentation performance, resulting in a significant decrease in segmentation accuracy. In order to address the issues of the existing weed segmentation methods based on semantic segmentation models, we have developed a field crop weed segmentation model using a multi-scale convolutional kernel attention mechanism based on a multi-scale asymmetric convolutional kernel design. The proposed MSFCABlock enhances the network's attention in both the channel and spatial dimensions by focusing on better contextual information fusion between the encoder and decoder, thus improving the multi-scale feature aggregation capability and achieving high performance in complex scenes. Comparing the results across the different datasets, all models performed significantly better on the soybean and sugar beet tests than on the carrot and rice tests. We attribute the superior performance on our self-collected soybean dataset to more accurate labelling of the data and a rich variety of samples in the training set. In the sugar beet dataset, the larger quantity of data helped the network in feature extraction and learning during training. However, the carrot and rice datasets posed challenges due to their smaller size and higher complexity, which could significantly affect the model's learning capacity. Additionally, variations in data collection equipment and angles further contributed to the differences in results across the datasets. Overall, our model showed less susceptibility to these factors compared to the other models.
When comparing different models on the same dataset, the proposed model outperforms current popular models in almost all metrics on four different datasets, demonstrating its strong performance in handling complex scenes and imbalanced categories, as well as its strong generalization ability. From the results and segmentation graphs, it is evident that our MSFCA-Net achieved excellent performance compared to models such as FCN, UNet, and DeeplabV3 when dealing with small datasets, complex background masking, and class imbalance issues. Especially on the carrot dataset, the proposed model showed high performance even with only 42 training images, indicating its strong learning ability on small sample datasets. Although there are many types of weeds in the field, they are usually grouped into a single category and cannot be segmented into specific types of weeds. For field crops, all weeds should be removed as targets, and this work accordingly focuses on the segmentation of crops, weeds, and background as three categories. However, with the development of smart agriculture, a single weed classification model may not meet the needs of an intelligent weed segmentation system. Accurate identification and analysis of weed types are necessary for specific pesticide formulations based on statistical field information. Additionally, our model is suited to precise image segmentation and requires a certain distance between the camera and the soil to ensure image clarity. Therefore, our weed segmentation method may not be well suited for applications in the field of UAVs. In future research, we will further deepen our study of weed species segmentation and the application of weed segmentation on UAVs.

Conclusions
In this work, we proposed MSFCA-Net, a multi-scale feature convolutional attention network for crop and weed segmentation. We used asymmetric large convolutional kernels to design an attention mechanism that aggregates multi-scale features, and employed skip connections to effectively integrate the local and global contextual information, thus significantly improving the segmentation accuracy of the proposed model and enhancing its ability to handle details and edge segmentation. We also designed a hybrid loss combining Dice loss and focal loss, with separate loss functions for crops and weeds. This hybrid loss effectively improved the performance of the proposed model in handling class imbalance, enhancing its ability to learn from difficult samples. The experimental results show that our model demonstrated significantly better performance than the other models on the soybean, sugar beet, carrot, and rice datasets, with MIoU scores of 92.64, 89.58, 79.34, and 78.12%, respectively. This confirms its strong generalization ability and its ability to handle crop and weed segmentation in complex backgrounds. The ablation experiments confirm the proposed model's ability to extract features using asymmetric large convolutional kernels and spatial attention. We also captured and manually annotated a dataset of soybean seedlings and weeds in a field, enriching the available agricultural weed data and providing rich and effective data for future research. This work has important implications for the development of intelligent weed control and smart agriculture. Our study still faces challenges, such as variations in field lighting conditions, mutual occlusion between crops and weeds, and uneven sizes and quantities of crops and weeds.
While research has addressed these challenges to some extent, real-world field conditions can introduce additional complexities that were not fully considered in this study. Therefore, our future research will focus on deploying weed segmentation networks in real-world physical weed control robots and conducting relevant studies on targeted agricultural spraying.
Exclusive $\eta_c$ production from small-$x$ evolved Odderon at an electron-ion collider

We compute exclusive $\eta_c$ production in high energy electron-nucleon and electron-nucleus collisions that is sensitive to the Odderon. In perturbative QCD the Odderon is a $C$-odd color singlet consisting of at least three $t$-channel gluons exchanged with the target. By using the Color Glass Condensate effective theory our result describes the Odderon exchange at the high collision energies that would be reached at a future electron-ion collider. The Odderon distribution is evolved to small-$x$ using the Balitsky-Kovchegov evolution equation with running coupling corrections. We find that while at low momentum transfers $t$ the cross section off a proton is dominated by the Primakoff process, the Odderon becomes relevant at larger momentum transfers of $|t|\geq1.5$ GeV$^2$. We point out that the Odderon could also be extracted at low-$t$ using neutron targets since the Primakoff component is strongly suppressed. In the case of nuclear targets, the Odderon cross section becomes enhanced thanks to the mass number of the nuclear target. The gluon saturation effect induces a shift in the diffractive pattern with respect to the Primakoff process that could be used as a signal for the Odderon.

I. INTRODUCTION AND MOTIVATION

The Odderon was suggested 50 years ago [1,2] as the C-odd (C = −1) partner of the C-even (C = +1) Pomeron in mediating a t-channel colorless exchange in elastic hadronic cross sections. The original idea [3] to measure the Odderon through a difference in pp vs p̄p elastic cross sections brought much excitement recently [4] thanks to the precise pp measurement by the TOTEM collaboration [5] at collision energies close to the p̄p D0 Tevatron data [6]. On the other hand, considering elastic hadronic cross sections makes it difficult to understand the Odderon in the context of perturbative QCD.

As opposed to pp collisions, ep collisions provide a cleaner environment to extract the Odderon, particularly in the exclusive production of particles with a fixed C-parity. A prominent example here is ηc production [7-16], where the heavy charm quarks ensure that the process is sensitive to the gluons in the target. With the C-parity of ηc being C = +1 and that of the emitted photon being C = −1, the amplitude becomes directly proportional to the Odderon. ηc plays a role analogous to that of J/ψ production in the case of the Pomeron. Unlike J/ψ, which has been extensively measured at HERA, there is no measurement of exclusive ηc production so far. This would hopefully change with the high luminosities feasible at the upcoming Electron-Ion Colliders (EIC) [17-19] (or even with the LHC in the ultra-peripheral mode [20]) and is therefore a motivation for our work.

The high collision energies that will be reached at the EIC can offer unique insights into the small-x component of the target wavefunction (x represents the parton momentum fraction), where the gluon density is large according to the effective theory of the Color Glass Condensate (CGC) [21-24]. Within the framework of the CGC, the Odderon is the imaginary part of the dipole distribution [25,26], with the trace taken in the fundamental representation. The Wilson line V(x⊥) is defined in Sec. II below, in Eq. (7).
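In the CGC the C-even (Pomeron) and C-odd (Odderon) parts of the dipole correspond to the real and imaginary parts of the normalized dipole trace built from fundamental-representation Wilson lines. A minimal NumPy sketch of this decomposition, using randomly generated SU(3) matrices in place of actual target field configurations (the overall sign convention for O is an assumption):

```python
import numpy as np

def random_su3(rng):
    """Toy SU(3) Wilson line: QR-decompose a complex Gaussian matrix,
    strip the residual phases, and normalize the determinant to one."""
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))
    return q / np.linalg.det(q) ** (1.0 / 3.0)

def dipole_split(vx, vy, nc=3):
    """Pomeron N and Odderon O from the dipole trace:
    N = 1 - Re tr(V(x) V†(y))/Nc,  O = Im tr(V(x) V†(y))/Nc (sign conventional)."""
    s = np.trace(vx @ vy.conj().T) / nc
    return 1.0 - s.real, s.imag

rng = np.random.default_rng(7)
n, o = dipole_split(random_su3(rng), random_su3(rng))
print(f"N = {n:+.4f}, O = {o:+.4f}")  # O averages to zero over a C-even ensemble
```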
The small-x evolution of the Odderon is given by the imaginary part of the Balitsky-Kovchegov (BK) equation for the dipole [25-27]. Indeed, one of our main goals is to numerically solve the coupled Pomeron-Odderon BK system for the case of the proton and for nuclear targets. Whereas in the linear regime the Odderon and the Pomeron evolve independently, the non-linearity of the BK equations alters the Odderon significantly when the dipole size is of the order of the inverse of the saturation scale Q_S [25-29].

From a theoretical perspective, the difficulty in computing the ηc cross section comes from the uncertainty in the magnitude of the Odderon. While earlier works on ηc production [7-10] suggest a differential photo-production cross section in the range of 10² pb/GeV², more recent computations [15] indicate that the cross section would be somewhat smaller, of the order of 10² fb/GeV², and therefore overshadowed by the large background due to the Primakoff process in the low-|t| region. This could be circumvented by considering instead neutron targets, for which the low-|t| Coulomb tail is absent, allowing the Odderon to be probed even at low |t|. These studies have so far focused on the Odderon in the dilute regime where x is moderate and the gluon density is not too large. Theoretical computations of the ηc cross sections in the case of a dense proton or a nuclear target are so far unexplored and constitute another of our motivations.

In Sec. II we undertake the computation of the amplitude for ηc production in the CGC formalism. In Sec. III we solve the coupled Pomeron-Odderon BK system numerically using the kernel with running-coupling corrections and in the approximation where the impact parameter is treated as an external parameter [30]. For the Pomeron initial condition we are using a fit to the HERA data (supplemented by the optical Glauber model in case of nuclei) [30]. For the Odderon initial condition in case of nucleon targets we consider a recent computation in the light-cone nonperturbative quark model by Dumitru, Mäntysaari and Paatelainen [31]. In case of nuclear targets we rely on a small-x action with a cubic term in the random color sources [32]. Sec. IV is devoted to the numerical results for the exclusive ηc photo-production for the proton and the nuclear targets. Our main findings, laid out in the concluding Sec. V, are as follows. Probing the Odderon using proton targets requires rather high momentum transfers |t| ≳ 1-3 GeV² to access the region where the Primakoff background is subdominant. In case of neutron targets we find the Primakoff contribution to be negligible, allowing, in principle, the extraction of the Odderon even at low |t|. For nuclear targets the Odderon (Primakoff) cross section becomes enhanced roughly as ∼ A² (∼ Z²), where A (Z) stands for the mass (atomic) number. The diffractive pattern in the Odderon cross section gets shifted by a few percent in comparison to the Primakoff cross section. This could serve as a distinctive signature of the Odderon.
II. THE CROSS SECTION FOR EXCLUSIVE ηc PRODUCTION IN THE CGC FRAMEWORK

The amplitude and the cross section for exclusive ηc production γ*(q)p(P) → ηc(Δ)p(P′) have been recently computed using light-cone wave functions at leading twist for the Odderon in [15]. For earlier works see [7,9]. While some of the results from [15] carry over to our computations, we find it worthwhile to quickly go over the derivation of the amplitude starting from the CGC framework [21-24,33] in momentum space, also taking into account the all-order multiple scatterings on a target, that is, a dense proton or a nucleus. The cross section is computed in the frame where the target is moving along the light-cone minus coordinate, so that its momentum is P^µ = (P⁺, 0, 0⊥), and that of the virtual photon is q^µ = (q⁺, q⁻, 0⊥). As for the kinematic variables of the process, we denote with t the momentum transfer, where x is the momentum fraction carried by the exchanged Odderon and W² = (q + P)² is the squared invariant mass of the γ*-target system. We have q² = −Q² as the photon virtuality, P² = P′² = 0, and Δ² = M²_P is the squared mass of the produced ηc particle.

A. The Odderon contribution

The amplitude for exclusive ηc production can be written in complete analogy to that for J/ψ production; for a very clear recent exposition see for example [34]. We follow closely the notation used in [34] and write the CGC amplitude for ηc production, where q_c = 2/3 is the charge of the charm quark in units of e = √(4πα), α = 1/137, with l and l′ representing the charm quark momenta as in Fig. 1. We work in the A⁻ = 0 gauge, where the virtual photon polarization vector ϵ^µ(λ, q) is given as ϵ^µ(0, q) = (Q/q⁻, 0, 0⊥), ϵ^µ(λ = ±1, q) = (0, 0, ϵ^λ⊥) = (0, 0, 1, λi)/√2, and we use the charm quark propagator with mass m_c. We use (iγ₅) as the Dirac structure for the ηc production vertex [7,15], for the moment treating the ηc wave function in perturbation theory. For the phenomenological computation this will be replaced with a non-perturbative model ηc light-cone wave function [15,35], see Eq. (18) below. Inserting the effective CGC vertex [36,37] (see also [38]), with ρ_a(y⁻, z⊥) being the classical color source in the target, the amplitude takes a form in which the θ-functions are dictated by the singularities of the quark propagators in the complex l⁺ and l′⁺ planes. We can conveniently project out the Odderon by considering a diagram with the fermion flow in the opposite direction. Of course, with an appropriate change of integration variables this simply gives back (8). Utilizing instead the C-parity transformation only on the Dirac part, the resulting trace has an opposite sign to (8). Combining the two contributions we come up with the (color averaged) amplitude, where the amplitude ⟨M_λ⟩ is written with the Odderon distribution explicitly projected out. We have used r for short.
It is convenient to further separate out the Odderon distribution from the rest, where the reduced amplitude A_λ(r⊥, Δ⊥) (after the light-cone l⁺ and l′⁺ integrals) is given with the following abbreviations. Computing the Dirac trace in (14) we find the result (16), which is proportional to m_c because the Dirac trace contains 4 vertices and 3 fermion propagators in addition to γ₅. Intuitively, when the photon splits into a qq̄ pair their spins are aligned, and not flipped by the eikonal interaction with the target. In order for the qq̄ pair to combine into a spinless meson after the collision, we need a spin flip, and this is provided by m_c. As another consequence of the eikonal interaction, we find that the longitudinal photon λ = 0 decouples, as already noticed in [7,15] and in a related process in [39].

After computing the l⊥ and l′⊥ integrals we find (17), where δ⊥ ≡ ½(z − z̄)Δ⊥ is the off-forward phase [40] and we have separated out the λ- and Δ⊥-independent part of the reduced amplitude as A(r⊥). We have also introduced the standard replacement [35] K₀(ε′r⊥)/(2π) → ϕ_P(z, r⊥) to write the amplitude in terms of the ηc meson light-cone wave function ϕ_P(z, r⊥) [15,35]. In the numerical computations we are using a "Boosted Gaussian" ansatz from [15] with N_P = 0.547, R²_P = 2.48 GeV⁻² and m_c = 1.4 GeV [15]. The integrand in (17) can be understood as a γ*-ηc wave function overlap. Our result differs from (48) in [15], obtained using the light-cone wave function approach, by a relative sign between the two terms in the square bracket. Ref. [15] uses the γ* wave function from [35]; however, this is known to be incorrect, see e.g. [41]. Using instead the γ* wave function from [41,42] we have explicitly confirmed the result in (17).

It is useful to parametrize the Odderon distribution by a Fourier series. (Formally we have Q′² = −M²_P, and so the perturbative wave function would become singular for time-like momenta. However, this becomes irrelevant in practice as we are replacing the perturbative wave function with a model, see (17).) We will consider its Fourier transform (11), O(r⊥, Δ⊥), and expand it in a Fourier series as well. With this parametrization the amplitude (12) can be found in a form where we have conveniently factored out the polarization independent amplitude ⟨M⟩. This is the result that will be used in the numerical computations in Sec. IV, where we will be keeping only the lowest k = 0 mode. The photo-production cross section is obtained as in (25). It is instructive to provide an estimate of (25) at leading twist. In Appendix A we have performed a model computation of the Odderon distribution, and more details can be found in Sec. III A. Restricting to the first nontrivial Fourier mode we find an expression in which T_A(Δ⊥) is the Fourier transform of the transverse profile of the target T_A(b⊥), see (44) below; C_3F is defined in (48). Taking the limit m_c → ∞, the cross section (25) takes a form from which one sees that the Odderon cross section gets enhanced by ∼ A² in case of nuclear targets. To get this result we have used [15,43] the relation in which R_P(0) is the radial wave function at the origin.
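The Fourier moments entering this parametrization can be extracted numerically once the Odderon is tabulated on an angular grid. A minimal sketch, assuming the odd-harmonic convention O(φ) = Σ_k O_{2k+1} cos((2k+1)φ) on a uniform grid (the convention, grid and function names are our assumptions):

```python
import numpy as np

def odd_fourier_moment(o_phi: np.ndarray, k: int = 0) -> float:
    """Project O(phi) onto cos((2k+1) phi) over a uniform grid on [0, 2pi):
    O_{2k+1} = (1/pi) * integral_0^{2pi} dphi cos((2k+1) phi) O(phi)."""
    n = len(o_phi)
    phi = 2.0 * np.pi * np.arange(n) / n
    return (2.0 / n) * np.sum(np.cos((2 * k + 1) * phi) * o_phi)

# Check: a pure 0.3*cos(phi) signal returns 0.3 for k = 0 and ~0 for k = 1.
phi = 2.0 * np.pi * np.arange(100) / 100
o = 0.3 * np.cos(phi)
print(odd_fourier_moment(o, 0), odd_fourier_moment(o, 1))
```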
B. The Primakoff process

The Primakoff process corresponds to a situation with an odd number of photons, instead of gluons, exchanged from the target. Intuitively, we would expect the Primakoff effect to be most important in the region Δ⊥ ≃ 0 due to the long-range Coulomb tail of the charged target. As in the previous Sec. II A, we work in the eikonal approximation for the target interaction, with photons instead of gluons in the Wilson lines [44-46]. We thus write the amplitude with a Wilson line accounting for multiple scattering on an electromagnetic field of the target −ZeT_A(x⊥)/∂²⊥ [44-46]. Here the transverse charge density is given as ZT_A(x⊥). Because of the α suppression we ignore multiple scatterings and expand the eikonal phase to the first nontrivial order. Passing to the variable Δ⊥ instead of b⊥, we obtain an expression which is the same as Eq. (22) in [15] up to a factor due to a difference in the definition. We also obtain the Fourier moments (32) that are to be used directly in (24). At this point it is useful to obtain an estimate in the m_c → ∞ limit, similar to what was done for the Odderon in (27). We get the result (33), which displays the characteristic 1/t Coulomb behavior, in contrast to the Odderon case (27) where we have instead a suppression factor |t|/m⁴_c. Note that T_A(Δ⊥) is nothing but the electromagnetic charge form factor from the Rosenbluth formula [47].

In order to evaluate the Primakoff cross section numerically we must specify the profile function T_A(b⊥). For the proton (neutron) targets we are replacing ZT_A(Δ⊥) → F₁^{p,n}(Δ⊥), respectively, with F₁^{p,n}(Δ⊥) being the proton (neutron) charge form factors, for which we are using a recent determination from [48]. For a nucleus we use a Woods-Saxon distribution, see (44) below. In this work we do not attempt to differentiate between the nuclear electromagnetic distribution and the strong interaction distribution of a nucleus, although in principle they could be different, see [49,50].

III. NUMERICAL SOLUTIONS OF THE ODDERON EVOLUTION AT SMALL-x

Denoting the dipole distribution in the fundamental representation in the standard way, the fully impact-parameter-dependent BK equation reads as in (35) [51,52]. Solutions of (35) lead to unphysically large Coulomb tails in b⊥, originating from a lack of confining interactions in the BK kernel [53]. This issue has been addressed [54-57], at different levels of sophistication, by various modifications of the kernel in the infrared. In this work we make no attempt to tackle this difficult problem and instead resort to the local approximation b_1⊥ → b⊥ and b_2⊥ → b⊥ used in [30] (see also a discussion in [58]), where the b⊥-dependence effectively becomes an external parameter.

Splitting the dipole into Pomeron and Odderon pieces leads to the coupled equations (36) and (37). In the above Eqs. (36), (37) we have replaced the conventional BK kernel with the running-coupling kernel (according to Balitsky's prescription) [59] that will be used in our numerical computations. Here, following [30], N_f = 3, C² = 7.2, Λ_QCD = 0.241 GeV, and â is a parameter determined by the condition lim_{r²⊥→∞} α_S(r²⊥) = α_fr, where α_fr = 0.7.
A similar system of equations was solved in [27-29,60,61], but the b⊥ dependence was not addressed. Nevertheless, some generic conclusions from these works also apply to our computations. Thanks to the non-linearity of the BK equation (35), the Pomeron and the Odderon do not evolve separately. Only in the small-r⊥ limit, where N(r⊥, b⊥) → 0, can the nonlinear terms in (37) be neglected and the system decouples. When this happens, the first two terms in the square bracket in (37) cancel each other and the Odderon becomes exponentially suppressed in rapidity [25,28,29]. In contrast, in the large-r⊥ region, where N(r⊥, b⊥) → 1, the nonlinear terms play an important role in cancelling the first and the second term in the square bracket in (37), causing again an exponential suppression [25,27-29]. Such a lack of geometric scaling seems to be a general feature not only of the Odderon but of higher dipole moments in general [56].

A. Initial conditions

For the Pomeron initial conditions we use a fit to HERA data from Ref. [30]. Therein, the Pomeron for the proton is modelled by (40), where we pick up R_p from the relationship πR²_p = σ₀/2 = 4πB_p. In a recent work by Dumitru, Mäntysaari and Paatelainen [31] the Odderon for a proton target was calculated starting from quark light-cone wavefunctions at NLO. We refer to this as the DMP model and employ it in our numerical computations.

In case of a nucleus we use again the results from Ref. [30], with the Pomeron distribution given as in (40) but with (41), where T_A(b⊥) is the transverse profile of a nuclear target. The parameters in (41) are given as Q²_{S,0} = 0.06 GeV², e_c = 18.9 and σ₀/2 = 16.36 mb [30]. T_A(b⊥) is obtained by integrating the Woods-Saxon distribution [30] and is normalized to unity, ∫ d²b⊥ T_A(b⊥) = 1. This fixes n_A through −8πn_A d³ Li₃(−e^{R_A/d}) = 1. Here d = 0.54 fm and R_A = 1.12A^{1/3} − 0.86A^{−1/3} fm [30]. These Woods-Saxon parameters are numerically very close to the fit values from [62].

The initial condition of the Odderon for a nuclear target is based on the Jeon-Venugopalan (JV) model [32], which involves a cubic term added to the standard McLerran-Venugopalan small-x functional. In [32] (see also [25]), it was found that the Odderon distribution from the above functional takes the form (47), where G(x⊥ − z⊥) is a 2D Green function (A2) and we have inserted the target profile T_A(b⊥); see the discussion in Appendix A. Eq. (47) can be interpreted as a single perturbative Odderon with any number of perturbative Pomeron insertions. Starting from (47) we deduce the result (50) for the Odderon initial condition, where in the JV model the coupling is given by (51). The details of the computation leading to (50) are given in Appendix A.
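The normalized nuclear profile described above is straightforward to tabulate. A sketch using the parameters quoted in the text (d = 0.54 fm, R_A = 1.12A^{1/3} − 0.86A^{−1/3} fm); here the normalization is done numerically on the b⊥ grid instead of through the Li₃ identity, and all function names are ours:

```python
import numpy as np
from scipy import integrate

def thickness(b_fm: float, A: int, d: float = 0.54) -> float:
    """Unnormalized Woods-Saxon thickness T_A(b) = integral dz f(r),
    f(r) = 1/(1 + exp((r - R_A)/d)), r = sqrt(b^2 + z^2).
    Written via tanh, which is numerically overflow-safe."""
    ra = 1.12 * A ** (1 / 3) - 0.86 * A ** (-1 / 3)
    f = lambda z: 0.5 * (1.0 - np.tanh((np.hypot(b_fm, z) - ra) / (2.0 * d)))
    val, _ = integrate.quad(f, -50.0, 50.0)
    return val

def normalized_thickness(A: int, b_grid: np.ndarray) -> np.ndarray:
    """Normalize so that 2*pi * integral db b T_A(b) = 1."""
    t = np.array([thickness(b, A) for b in b_grid])
    norm = 2.0 * np.pi * integrate.trapezoid(b_grid * t, b_grid)
    return t / norm

b = np.linspace(0.0, 12.0, 121)  # fm
t_au = normalized_thickness(197, b)  # gold, A = 197
print(2.0 * np.pi * integrate.trapezoid(b * t_au, b))  # ~ 1.0
```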
B. Numerical solutions

The system of BK equations (36)-(37) was solved on a (r⊥, b⊥, ϕ_rb) grid, where ϕ_rb = ϕ_r − ϕ_b. As mentioned earlier, we consider b⊥ as an external parameter and solve the BK equation for each value of b⊥ separately. The integral over r_1⊥ in equations (36) and (37) is evaluated over a lattice in (r⊥, ϕ_rb) using adaptive cubature [63,64]. The lattice is equally spaced in log r⊥ from r⊥ = 10⁻⁶ GeV⁻¹ to 10⁴ GeV⁻¹ with n_{r⊥} = 500 lattice points, and in ϕ_rb from ϕ_rb = 0 to 2π with n_{ϕrb} = 100 lattice points. For each value of b⊥, the equations (36) and (37) together represent a system of 2 × n_{r⊥} × n_{ϕrb} coupled differential equations for the values of the Pomeron and the Odderon over the grid. This system of differential equations is solved using a three-step, third-order Adams-Bashforth method with a step size in rapidity ΔY = 0.1 for up to Y = 5. The first two timesteps required to initiate the Adams-Bashforth method were obtained using Ralston's second-order method. We have validated our numerical treatment of the BK system in two ways. First, since we have adopted our parametrization of the Pomeron from [30], we have checked that our results for the BK-evolved dipole amplitude in the proton and in the nuclei agree with [30]. Second, we checked that we were able to fully reproduce the results for the BK evolution of the spin-dependent Odderon presented in [29]. We additionally checked several different methods for solving the BK system (including the Euler method, a range of Adams-Bashforth methods, and the fourth-order Runge-Kutta method) and found the third-order Adams-Bashforth method to be optimal.
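The rapidity stepping just described reduces to a few lines once the right-hand sides of (36)-(37) have been discretized. A schematic sketch follows; `rhs` is a placeholder for the discretized BK right-hand side and the toy check is ours, while the step ΔY = 0.1, the three-step third-order Adams-Bashforth update and the Ralston bootstrap follow the text:

```python
import numpy as np

def evolve_ab3(rhs, u0, dy=0.1, y_max=5.0):
    """Three-step, third-order Adams-Bashforth in rapidity, bootstrapped with
    Ralston's second-order Runge-Kutta for the first two steps.

    `rhs(u)` returns dU/dY for the stacked Pomeron/Odderon grid `u`."""
    u = np.asarray(u0, dtype=float).copy()
    f_hist = [rhs(u)]
    for step in range(int(round(y_max / dy))):
        if step < 2:  # Ralston RK2 bootstrap
            k1 = rhs(u)
            k2 = rhs(u + 0.75 * dy * k1)
            u = u + dy * (k1 / 3.0 + 2.0 * k2 / 3.0)
        else:         # AB3: u_{n+1} = u_n + dy (23 f_n - 16 f_{n-1} + 5 f_{n-2}) / 12
            u = u + dy * (23.0 * f_hist[-1] - 16.0 * f_hist[-2] + 5.0 * f_hist[-3]) / 12.0
        f_hist.append(rhs(u))
    return u

# Toy check on dU/dY = -U: the exact answer at Y = 5 is exp(-5) ~ 6.74e-3.
print(evolve_ab3(lambda u: -u, np.array([1.0])))
```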
At this point we make a comment about the angular dependence. The Pomeron initial condition (40) is independent of ϕ_rb, while the cos(ϕ_rb) moment in the Odderon initial condition (50) will generate a cos(2ϕ_rb) moment in the Pomeron through the ∼ O² term in (36). In principle, this further backreacts onto the Odderon through the ∼ NO pieces, generating a higher cos(3ϕ_rb) moment in the Odderon. However, in our numerical computation we find that already the cos(2ϕ_rb) term is numerically tiny, in support of the similar findings reported in [27,29]. For this reason, in the following results we will discuss the Odderon solution only in the context of its dominant O_1(r⊥, b⊥) moment.

In Fig. 2 we show the first Odderon moment O_1(r⊥, b⊥) for the proton target, using the DMP model as initial condition, as a function of r⊥ for several finite values of b⊥. Going from the full line at the initial condition x = 10⁻², the Odderon is severely affected in magnitude when evolving to smaller x, as can be seen from the thin dashed curve where x = 10⁻³ and the thin dotted curve where x = 10⁻⁴, verifying numerically the lack of geometric scaling for the Odderon. Moving on to the b⊥ dependence, the left plot in Fig. 3 shows O_1(r⊥, b⊥) as a function of b⊥ with r⊥ fixed and for different values of x. For illustrative purposes we plot on the right the result for the proton target as obtained in the JV model. Interestingly, while the DMP model Odderon is peaked within the proton, the JV model Odderon is peaked at higher b⊥ due to the ∼ dT_p/db⊥ term.

Comparing the results in the DMP and the JV models, we can quantify some of the model uncertainties concerning the magnitude of the Odderon. For this purpose we take the absolute ratio of the ηc production amplitudes in the DMP and the JV models in the case of the nucleon target. In the limit Δ⊥ → 0, and for Q² = 1 GeV², we find ⟨M⟩_{p,DMP}/⟨M⟩_{p,JV} → 0.026. On the other hand, an upper bound on the Odderon is imposed by the group theory constraint (52) [28,65]. In the small-r⊥ limit this simplifies to O²(r⊥, b⊥) ≤ N³(r⊥, b⊥)/9 [28]. We have checked that the DMP model satisfies this bound. Using the JV initial condition for nuclei we can quantify (52) as a bound on the magnitude of λ, and numerically we find that the model coupling is somewhat below the bound (53), where λ_JV is given by (51) and the superscript refers to the atomic number for different species of nuclei. We have checked that (52) is satisfied for all r⊥ and b⊥, where for the latter we considered the domain for which the nuclear saturation scale is above the minimum-bias saturation scale of the proton. We will thus consider λ up to λ_max. For orientation purposes, the lowest coupling we consider for nuclei will be given as λ = 0.026λ_JV, where the proportionality factor 0.026 is fixed by the DMP vs. JV amplitude ratio for the proton target discussed above. Finally, in Fig. 4 we show the results for the b⊥ dependence of O_1(r⊥, b⊥) for the nuclear targets: Au (left), Cu (center) and Al (right) using the JV model. Evolving to smaller values of x, the peak in the Odderon distribution drops in magnitude but also shifts to slightly larger b⊥. This will have an interesting consequence for the diffractive pattern of the cross section, as we will explain in the following Section IV.

IV. NUMERICAL RESULTS FOR THE CROSS SECTION

In this Section we show the results of the numerical computation of the photoproduction cross section for the exclusive processes γ*p → ηc p, γ*n → ηc n and γ*A → ηc A, where we consider the Au, Cu and Al nuclei. The numerical computation of the cross section (25) is based on the amplitude for the Odderon contribution given by (24). To compute the Primakoff cross section we use the same Eq. (24) with the replacement in which Ω_{2k+1}(r⊥, Δ⊥) is given by (32). In all the computations considered, we restrict to the lowest k = 0 Fourier moment of the amplitude. We have explicitly checked that the contributions from the higher moments are strongly suppressed relative to the k = 0 case, both for the Odderon and for the Primakoff contributions. For the Fourier transform in the impact parameter b⊥ we used the Ogata quadrature method [66]. We first discuss the numerical results for exclusive γ*p → ηc p photoproduction.
Fig. 5 shows the cross section as a function of |t| for several values of x and Q². The computation is performed using the DMP model. The result shows a rather small |t|-slope of the cross section. This is a generic feature of the quark-based approach, as the three gluons in the Odderon can couple to three different quarks, leaving the proton intact even at relatively large momentum transfer [7,67]. The Primakoff cross section overwhelms the Odderon cross section at small |t|, but this gets reversed for |t| ≳ 1.5 GeV² thanks to the small |t|-slope of the Odderon cross section. The small-x evolution reduces the Odderon cross section by roughly an order of magnitude when going from x ∼ 10⁻² to x ∼ 10⁻⁴. However, it is still above the Primakoff background for |t| ≳ 2-3 GeV², with the |t|-slope remaining roughly the same. Our conclusion for proton targets is thus similar to that of [15], where the computation was performed at moderate x ∼ 0.1. The Odderon extraction from collisions on the proton target would thus require measurements of the cross section at potentially large momentum transfers even when x is small, x ≲ 0.01. For neutron targets the Primakoff cross section is only a very small contribution and the Odderon can be probed even at low |t| and/or low x; see Fig. 6.

In Fig. 7 we show the numerical results for the γ*A → ηc A cross section for Au (left), Cu (center) and Al (right) targets. The Odderon coupling λ is set to the maximal value allowed by the group theory constraint (53). The Odderon (and the Primakoff) cross sections become enhanced by the mass (atomic) number of the target. For example, using the maximal coupling allowed by the group theory constraint (λ = λ_max), the Odderon cross section can reach up to about 10 nb/GeV² for Au. Taking instead λ = 0.026λ_JV (the factor 0.026 is determined by the DMP vs JV amplitude ratio) as an assumption for the lowest estimate leads to ∼ 5 pb/GeV².

Both the Odderon and the Primakoff contributions show characteristic diffractive patterns that are mostly of a geometric origin. However, it is clearly visible that the diffractive pattern for the Odderon cross section is altered compared to the Primakoff case: the diffractive dips are shifted to smaller |t| even for the initial condition, and the shift becomes more pronounced as x gets smaller or |t| gets larger. To understand this result, notice that according to the leading twist estimates in (27) and (33) the Odderon and the Primakoff cross sections behave as dσ/d|t| ∝ |t| T²_A(√|t|) and dσ/d|t| ∝ T²_A(√|t|)/|t|, respectively. We are led to the conclusion that the shift of the diffractive pattern when comparing the Odderon and the Primakoff cross sections is a consequence of multiple scatterings in the Odderon amplitude. This finds additional support in the evolution to smaller x where, as a consequence of the growth of the saturation scale, multiple scattering effects become increasingly important, acting to further increase the shift.

Considering the total cross section, where the Odderon and the Primakoff contributions must be added coherently, the relative sign between the two amplitudes determines whether they interfere constructively or destructively. In our computation this is controlled by the sign of the Odderon coupling parameter λ. Using the JV model the sign is negative, see (51). Thanks to the dT_A/db⊥ term, this gives a positive O_1(r⊥, b⊥) overall, see Fig. 4.
For comparison, the DMP model computation for proton targets [31] also yields a positive O_1(r⊥, b⊥), see Fig. 3. While a positive O_1(r⊥, b⊥) seems to be preferred by the model computations, in Fig. 8 we compute the total cross section considering both signs of O_1(r⊥, b⊥) (or, equivalently, λ). For O_1(r⊥, b⊥) > 0 (λ < 0) the results are given on the left panel of Fig. 8. In this case the interference of the Odderon and Primakoff amplitudes is mostly constructive. Our result demonstrates that the multiple scattering effect in the Odderon amplitude, which shifts the diffractive pattern relative to the Primakoff component, can leave its trace also in the total cross section, depending on the relative magnitude of the Primakoff and Odderon contributions. On the right panel of Fig. 8, the opposite case of O_1(r⊥, b⊥) < 0 (λ > 0) is displayed. The two amplitudes are now out of phase and interfere destructively, resulting in a severe distortion of the diffractive pattern in the total cross section in comparison to the Primakoff contribution only. We conclude that in both cases the known Primakoff diffractive dips could be filled in the total cross section. This could be used as a signal of the Odderon from exclusive ηc production off nuclear targets. Considering different nuclear species could be a valuable tool in verifying this suggestion.

FIG. 8. The γ*Au → ηc Au cross section for three considered values of the Odderon coupling, up to the maximal value allowed by the group theory constraint (52). On the left (right) panel the sign of the Odderon coupling parameter is chosen as λ < 0 (λ > 0). The purple curves stand for the total cross section, with individual line styles representing different values of λ.

V. CONCLUSION

In this work we have computed the exclusive ηc production in ep and eA collisions as a potential probe of the Odderon. Our computation relies on the CGC formalism, where the effect of multiple scatterings is taken explicitly into account in a description of scattering off a dense target at small x. We have numerically solved the BK evolution equation in the impact parameter b⊥ and the dipole size r⊥ for the coupled Pomeron-Odderon system. The numerical results demonstrate a rapid drop of the Odderon with evolution, in line with the results in the literature [25,28,29].

Due to a large Primakoff background, we find that in order to isolate the Odderon component of the cross section for the proton target, relatively large momentum transfers are required: |t| ≳ 1.5-3 GeV² for x ∼ 10⁻²-10⁻⁴. On a qualitative level this is rather similar to the conclusions drawn in the previous works [7-10,15]. A new result is that the |t|-slope is not altered by the small-x evolution, although the cross section does reduce in magnitude. Exclusive scattering off a neutron leads to a negligible Primakoff component and represents a new opportunity to probe the Odderon at low |t|. In practice this could be done using deuteron or ³He targets with spectator proton tagging in the near-forward direction, see for example [68,69].
For the nuclear targets we have found that the saturation effects in the Odderon distribution distort the diffractive pattern in comparison to the Primakoff process. The effect is a few percent in magnitude and accumulates for smaller x and/or larger momentum transfers. Depending on the coupling of the Odderon, it is possible that the diffractive dips of the Primakoff process get filled by the Odderon component of the cross section. Such a distortion of the diffractive pattern in comparison to the known nuclear charge form factors might be a new way to measure the Odderon component in the nuclear wave function.

As our final remark, we wish to clearly state that the actual experimental measurement of the Odderon component of the exclusive ηc cross section is certainly challenging. Firstly, the Odderon itself is small, and so the cross section with proton (or neutron) targets tends to be low (∼ 10² fb/GeV²). This could be circumvented by considering nuclear targets instead, as the Odderon cross section is enhanced roughly as ∼ A². With the maximal Odderon coupling allowed by the group theory constraint, the cross section can be in the range of nb/GeV². However, an experimental extraction of a shift in the diffractive pattern in γ*A → ηc A, found at moderate/high |t|, calls for a good control of the incoherent background; a related discussion, albeit for the Pomeron, can be found in [70,71]. Secondly, the branching ratio for ηc to charged hadrons is only a few percent [72], with a serious background from feed-down of J/ψ subsequently decaying as J/ψ → ηc γ with the γ undetected [7,14,73]. Nevertheless, ηc has been measured through its hadronic channel in e⁺e⁻ by BABAR [74], and so such difficulties might be overcome also at the EIC. Measuring at least the Primakoff component seems to be a feasible starting point [16]. In any case, we consider the conclusions drawn from our results to be rather generic and expect them to also hold for other quarkonium states or light mesons.

For Θ(x⊥, y⊥) we similarly have Θ(x⊥, y⊥) ≃ (πR²_A) i (…), where we already expanded for r⊥ → 0. Assuming also small p⊥ we have (A7). The zeroth-order term above vanishes by rotation invariance. Using the second term we perform the angular integrals. Integrating further over k′⊥ leads to (A9). For the final integration over k⊥ we are only interested in extracting the leading log. We can drop the second term in (A9) as it vanishes in the limit m → 0. Focusing on the first term, we eventually find the leading-log result. Using (46), the prefactor in (47) follows, and combining everything leads to (50). A rather similar expression, which also involves the derivative of the transverse profile function, was found in [77], see also [78]. This expression is usually found in terms of a single transverse coordinate integral that can be solved [79] to get the O(r⊥, b⊥) ∼ r³⊥ behavior.

FIG. 2. The first Fourier moment O_1(r⊥, b⊥) of the Odderon distribution of the proton in the DMP model as a function of r⊥ for different values of x and at the impact parameters b⊥ = 0.6 fm and 0.4 fm.

FIG. 4. The first Fourier moment O_1(r⊥, b⊥) of the Odderon distribution of the nuclei in the JV model as a function of b⊥ for different values of x. The left plot is for the Au, the center for Cu and the right for Al nuclei.

FIG. 5. |t| dependence of the γ*p → ηc p cross section with the DMP model. The contribution from the Primakoff process is shown separately.
FIG. 7. The γ*A → ηc A cross section for three different targets: Au (left), Cu (center) and Al (right). The Odderon coupling is fixed to the maximal value allowed by the group theory constraint (53).
Inhibition of selenoprotein synthesis is not the mechanism by which auranofin inhibits growth of Clostridioides difficile

Clostridioides difficile infections (CDIs) are responsible for a significant number of antibiotic-associated diarrheal cases. The standard-of-care antibiotics for C. difficile are limited to fidaxomicin and vancomycin, with the recently obsolete metronidazole recommended if both are unavailable. No new antimicrobials have been approved for CDI since fidaxomicin in 2011, despite varying rates of treatment failure among all standard-of-care drugs. Drug repurposing is a rational strategy to generate new antimicrobials out of existing therapeutics approved for other indications. Auranofin is a gold-containing anti-rheumatic drug with antimicrobial activity against C. difficile and other microbes. In a previous report, our group hypothesized that inhibition of selenoprotein biosynthesis was auranofin's primary mechanism of action against C. difficile. However, in this study, we discovered that C. difficile mutants lacking selenoproteins are still just as sensitive to auranofin as their respective wild-type strains. Moreover, we found that selenite supplementation dampens the activity of auranofin against C. difficile regardless of the presence of selenoproteins, suggesting that selenite's neutralization of auranofin is not because of compensation for a chemically induced selenium deficiency. Our results clarify the findings of our original study and may aid drug repurposing efforts in discovering the compound's true mechanism of action against C. difficile.

Clostridioides difficile (formerly Clostridium difficile) is a Gram-positive, endospore-forming strict anaerobe and the leading cause of antibiotic-associated diarrhea (~15-25% of cases) 1,2. C. difficile infections (CDIs) typically occur in patients with gut dysbiosis and can lead to severe clinical complications such as pseudomembranous colitis and toxic megacolon 3. During infection, C. difficile causes disease and induces inflammation by producing two large exotoxins, TcdA and TcdB, which damage the intestinal lining through the glucosylation of Rho-family GTPases in host epithelial cells 4. According to a recent CDC report, CDIs were responsible for approximately 223,900 hospitalized patient cases and 12,800 deaths in 2017 5. Moreover, CDIs have contributed to approximately $1 billion in U.S. healthcare costs 5.

The standard-of-care antibiotics for treating CDI are fidaxomicin and vancomycin 6,7. If neither drug is available, metronidazole is recommended as an alternative 6,7, though this former first-line antibiotic is regarded as obsolete due to its high rates of treatment failure 8. In fact, CDI recurrence occurs in ~15-30% of patients treated with metronidazole and vancomycin despite their effectiveness in inhibiting C. difficile growth 8,9. On the other hand, fidaxomicin is a narrow-spectrum antimicrobial with greater potency and is typically associated with comparatively lower recurrence rates 10,11, though treatment failure has also been reported 12. Moreover, while not the primary issue encountered in CDI management, antimicrobial resistance is still a cause for concern, as drug-resistant clinical isolates have been reported for all three antibiotics 13-16. Overall, the current repertoire for treatment is quite limited, especially since fidaxomicin was the last CDI drug approved by the U.S.
Food and Drug Administration (FDA) in 2011 (17). If no new alternatives are added to the current list of standard-of-care antibiotics, the rising rates of recurrence and antibiotic resistance could outpace efforts to keep CDI under reasonable control.

Auranofin is an FDA-approved anti-rheumatic gold (Au) compound that possesses antimicrobial activity against C. difficile (18,19). Many reports have highlighted its inhibitory activity against C. difficile vegetative cells and sporulation, its ability to reduce toxin levels and protect Caco-2 cells against their lethal effects, and its efficacy in preventing CDI and disease recurrence in mouse and hamster models (20-24). While the compound's mechanism of action against C. difficile has remained unclear, in this study we determined the auranofin sensitivity of an array of C. difficile strains deficient in some or all selenoproteins (Table 1) using a modified version of the Clinical and Laboratory Standards Institute (CLSI) broth microdilution method (33). While CLSI recommends the agar dilution method for antimicrobial susceptibility testing of anaerobes (33), we chose broth microdilution because of its practicality and less cumbersome methodology. Moreover, broth microdilution has been reported to perform similarly to agar dilution in susceptibility tests of C. difficile (34,35), though we are aware that others have observed substantial differences between both methods and argue against routine testing with broth microdilution (36,37). Since our goal was to compare relative differences in minimum inhibitory concentrations (MICs) between strains rather than report standardized values that could be translated to the clinic, broth microdilution was therefore deemed appropriate for this study.

Briefly, we cultured each strain in supplemented brain heart infusion (BHIS) broth containing varying concentrations of auranofin at 37 °C for 48 h. At the end of the growth period, we established each strain's MIC of auranofin by measuring the optical density of each culture at 600 nm (OD600). With this method, we first determined the auranofin sensitivity of wild-type strains R20291 (MIC = 2 µg/mL) and JIR8094 (MIC = 8 µg/mL) (Figs. 1A and 2A). The standard-of-care CDI therapeutics, fidaxomicin and vancomycin, were also included as positive controls for the assay. Accordingly, R20291 was inhibited by 0.125 µg/mL fidaxomicin and 1 µg/mL vancomycin, while JIR8094 was inhibited by 0.016 µg/mL fidaxomicin and 4 µg/mL vancomycin (Supplementary Fig. S1).

Based on our laboratory's previous work, it was proposed that auranofin inhibits the growth of C. difficile by forming a complex with Se, thereby depleting the amount of bioavailable Se for trafficking and eventual incorporation into selenoproteins (19). Thus, if auranofin's activity arises from the inhibition of selenoprotein biosynthesis, a strain lacking selenoproteins (i.e., a selD mutant) would theoretically be resistant to the compound and harbor a significantly elevated MIC compared to wild type. Despite this assumption, we surprisingly found that the MICs for R20291 (Fig. 1A), KNM6 (ΔselD) (Fig. 1B), and KNM9 (ΔselD::selD+) (Fig. 1C) were all equally 2 µg/mL auranofin, suggesting that the compound's activity does not stem from targeting selenoproteins. To determine if this phenomenon was strain dependent, we repeated the assay with JIR8094 and LB-CD7 (selD::ermB) and likewise saw no increase in the MIC. However, while the wild-type strain JIR8094 exhibited an MIC of 8 µg/mL (Fig. 2A), the selD::ermB strain was actually more susceptible to auranofin, as it failed to grow at 4 µg/mL (Fig. 2B).
This slight increase in sensitivity was surprising, as it seemed to suggest a complex relationship between auranofin's antimicrobial activity and the selenoproteins in JIR8094. Out of curiosity, we evaluated two Prd mutants-LB-CD4 (prdB::ermB) and LB-CD8 (prdR::ermB)-and one Grd mutant-LB-CD12 (grdA::ermB)-using the same assay in order to determine which reductase plays a greater role in this phenomenon, if any. Interestingly, we discovered that all three mutants exhibited the same MIC of 4 µg/mL as the selD::ermB strain (Fig. 2C,D,E). While these data seem to suggest that a mutation in either of these selenoproteins renders C. difficile JIR8094 more sensitive to auranofin, a simple two-fold difference in MIC is likely not enough evidence for this. Regardless, these data clearly show that auranofin inhibits the growth of C. difficile in the absence of selenoproteins.

Table 1. Bacterial strains used in this study (columns: bacterial strain; description/relevant genotype; reference/source). R20291 - wild type, ribotype 027.

Selenite supplementation neutralizes auranofin's activity against C. difficile even in the absence of selenoproteins. We previously demonstrated that supplementing the culture medium with Se (either as sodium selenite or L-selenocysteine) exhibits a protective effect against auranofin, which we had interpreted as excess Se overcoming the apparent nutritional deficiency caused by the formation of Au-Se adducts (19). Since auranofin still inhibits the growth of selD mutants as well as wild-type strains, we wanted to determine if selenite supplementation would still influence auranofin's antibacterial activity. When we repeated the previous assay using BHIS broth augmented with 5 µM selenite, we surprisingly observed a two-fold increase in the MICs of all strains (with the exception of JIR8094) (Figs. 3 and 4), suggesting that excess selenite dampened auranofin's activity regardless of whether selenoproteins were present. To verify if this response could be exacerbated at higher doses, we repeated the same assay with 50 µM selenite. Under these conditions, all strains grew regardless of the auranofin concentration (Figs. 3 and 4). These results clearly demonstrate that selenite's protective effect against auranofin cannot be explained as simply overcoming a Se deficiency imposed by the compound. While this phenomenon could potentially be interpreted as chemical inactivation by Se, it must be noted that Thangamani et al. reported no selenite-dependent neutralization of auranofin's activity against methicillin-resistant Staphylococcus aureus (38), suggesting that there are different species-specific mechanisms at play. Finally, given the fact that selenite exhibits varying toxicity to some bacteria (39,40), we wanted to determine if this was potentially acting as a confounding variable in our experiments. When we cultured our strains in BHIS broth containing varying selenite concentrations, we subsequently observed no difference in growth yields even up to 100 µM (Supplementary Figs. S2 and S3). This result correlates with a publication that reports a staggering MIC of 27 mM sodium selenite against two C. difficile isolates (41).

Discussion

In this work, we unexpectedly discovered that auranofin inhibits the growth of C. difficile mutants lacking selenoproteins. This result was perplexing, as we originally thought that auranofin's antimicrobial activity against C. difficile was mainly due to the inhibition of Se metabolism (19).
Our idea had been supported by several lines of evidence: (i) auranofin prevented the uptake of 75Se and its incorporation into selenoproteins in both C. difficile and anaerobically grown Escherichia coli; (ii) the anaerobic growth yield of an E. coli ΔselD mutant was unaffected by auranofin compared to wild type; and (iii) auranofin exhibited little to no activity against Clostridium perfringens and Clostridium tetani (i.e., clostridia that lack selenoproteins) (19). Additionally, we had found that the oral pathogen Treponema denticola-an organism with a strict Se requirement for growth (42)-was also susceptible to auranofin, as the compound likewise prevented the uptake and incorporation of 75Se into its selenoproteins (43). Consistent with our initial observations of auranofin's activity against C. difficile (19), the compound's growth inhibition of T. denticola could be attenuated by supplementation with either sodium selenite or L-selenocysteine (43).

Clearly, our idea of targeting Se metabolism in C. difficile was predicated on the assumption that the pathogen required Se for growth in the same manner as T. denticola, when in reality, genetic techniques have revealed that selenoproteins are actually not essential to C. difficile (28). Thus, when dealing with organisms that carry dispensable selenoproteins (e.g., E. coli and C. difficile), Se metabolism becomes a poor candidate for a drug target. Moreover, it is obvious that auranofin's effects in these bacteria are far more complex than initially assumed; for example, it is unknown why an E. coli selD mutant gains slight resistance to auranofin while a C. difficile selD mutant exhibits no appreciable change in sensitivity. Further research should focus on fully characterizing the compound's multiple modes of action in order to truly understand their effects in different pathogens.

As of now, auranofin's mechanism of action against C. difficile is unknown, but a promising candidate may exist within the thioredoxin (Trx) system, which utilizes disulfide reductase activity to protect cytosolic components against oxidative stress and maintain thiol redox homeostasis (44). The Trx system comprises Trx, Trx reductase (TrxR), and NADPH (44). Trx reduces aberrant disulfides in the cell using a thiol-disulfide exchange mechanism that inevitably causes itself to be oxidized; TrxR utilizes electrons from NADPH to reduce Trx, allowing it to resume its surveillance of the cytosol for more oxidized substrates (44). Interestingly, auranofin is known to be a selective inhibitor of TrxR in mammalian cells and parasites (45-47). Likewise, auranofin has been shown to inhibit bacterial TrxR in some clinical pathogens such as Mycobacterium tuberculosis, S. aureus, and Helicobacter pylori (48-50). Harbut et al. (48) even proposed that auranofin's poor activity against several Gram-negative bacteria is actually due to the presence of the glutathione system, which can provide compensatory disulfide reductase activity in the event of a compromised Trx system. Thus, in bacteria lacking glutathione (i.e., most Gram-positives), auranofin-dependent inhibition of TrxR is expected to be lethal. It is therefore tempting to believe that auranofin could be exhibiting a similar mechanism in C. difficile due to two important observations:
(i) a trxR gene exists within the grd operon (25), and (ii) the cysteine-to-glutathione biosynthesis pathway is reportedly absent from the genome (51). Alternatively, Thangamani et al. claimed that auranofin likely possesses multiple modes of action, as the compound was able to inhibit several biosynthetic pathways in S. aureus (e.g., DNA, protein, and cell wall syntheses) (38). Moreover, the authors suggest that auranofin's weak activity against Gram-negatives may instead be due to the presence of the outer membrane and efflux pumps, rather than the redundant activity of the glutathione system (38). Specifically, they showed that several Gram-negative pathogens were only susceptible to auranofin when the permeabilizing agent polymyxin B nonapeptide was present; moreover, an E. coli double mutant lacking both TrxR (trxB) and glutathione reductase (gor) did not differ in auranofin sensitivity compared to wild type (38). Overall, these data imply that inhibition of TrxR-akin to inhibition of selenoprotein synthesis in C. difficile-may not be the only mechanism that this compound utilizes against bacteria. A classic technique to determine the mechanism of action of an antimicrobial involves the careful isolation of spontaneous drug-resistant mutants in vitro; however, numerous groups have clearly reported an inability to generate spontaneous auranofin-resistant mutants of several bacterial species using this method (38,48,50,52-54). Likewise, our attempts to isolate spontaneous auranofin-resistant C. difficile mutants were met with failure, which further supports the idea of auranofin possessing multiple modes of action.

Materials and methods

Bacterial strains and growth maintenance. All C. difficile strains used in this study are listed in Table 1. Growth experiments were performed in a Coy anaerobic chamber under an atmosphere of ~1.0% H2, 5% CO2, and >90% N2. Strains were routinely maintained on BHIS agar (37 g/L brain heart infusion, 5 g/L yeast extract, 0.1% L-cysteine). When indicated, overnight cultures were prepared by inoculating 5 mL BHIS broth with single colonies of the appropriate strains followed by 16-24 h of incubation at 37 °C.

Broth microdilution assay. MICs were determined using a modified broth microdilution assay as per the CLSI M11 guideline (33). Briefly, auranofin was dissolved in 100% dimethyl sulfoxide (DMSO) and subsequently diluted to achieve working stocks at 20× concentration in 50% DMSO. Similarly, fidaxomicin was dissolved in 100% DMSO while vancomycin hydrochloride was dissolved in deionized water. Diluted test compounds (5 μL) were then added to the assay cultures.

Figure 1. A C. difficile ΔselD mutant has the same sensitivity to auranofin as wild type. C. difficile strains (A) R20291, (B) KNM6, and (C) KNM9 were grown in BHIS broth augmented with 2.5% DMSO and varying concentrations of auranofin at 37 °C for 48 h. The OD600 of each culture was recorded at 48 h. The experiment was performed twice. Data points represent the means of triplicate cultures while error bars represent standard deviations.

Figure 2. Mutations in selenophosphate synthetase, proline reductase, or glycine reductase do not confer resistance to auranofin.
C. difficile strains (A) JIR8094, (B) LB-CD7, (C) LB-CD4, (D) LB-CD8, and (E) LB-CD12 were grown in BHIS broth augmented with 2.5% DMSO and varying concentrations of auranofin at 37 °C for 48 h. The OD600 of each culture was recorded at 48 h. The experiment was performed twice. Data points represent the means of triplicate cultures while error bars represent standard deviations.

Figure 3. Selenite supplementation decreases auranofin sensitivity even in the absence of selenoproteins. C. difficile strains (A) R20291, (B) KNM6, and (C) KNM9 were grown in selenite-supplemented BHIS broth augmented with 2.5% DMSO and varying concentrations of auranofin at 37 °C for 48 h. Sodium selenite was added to give a final concentration of 5 µM (red open circle) or 50 µM (red filled circle). The OD600 of each culture was recorded at 48 h. The experiment was performed twice. Data points represent the means of triplicate cultures while error bars represent standard deviations.

Figure 4. Selenite supplementation decreases auranofin sensitivity in a manner independent of selenophosphate synthetase, proline reductase, or glycine reductase. C. difficile strains (A) JIR8094, (B) LB-CD7, (C) LB-CD4, (D) LB-CD8, and (E) LB-CD12 were grown in selenite-supplemented BHIS broth augmented with 2.5% DMSO and varying concentrations of auranofin at 37 °C for 48 h. Sodium selenite was added to give a final concentration of 5 µM (red open circle) or 50 µM (red filled circle). The OD600 of each culture was recorded at 48 h. The experiment was performed twice. Data points represent the means of triplicate cultures while error bars represent standard deviations.
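The MIC readout used throughout these assays reduces to a simple rule: the MIC is the lowest drug concentration at which the 48-h OD600 remains at the blank level, with no growth at any higher concentration. A minimal sketch of this determination follows; the concentrations, OD600 readings, blank value, and 0.1 growth threshold are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch of the MIC readout: the MIC is the lowest concentration
# showing no growth, provided all higher concentrations also show none.
# Concentrations, OD600 readings, blank, and threshold are invented.

def mic_from_od600(concentrations, od600, blank=0.05, threshold=0.10):
    mic = None
    for conc, od in sorted(zip(concentrations, od600)):
        if od - blank < threshold:   # no visible growth in this well
            if mic is None:
                mic = conc           # candidate MIC
        else:                        # growth at this concentration
            mic = None               # resets any lower candidate
    return mic                       # None -> growth even at the top dose

# Two-fold auranofin series (ug/mL) with hypothetical 48-h OD600 readings
concs = [0.25, 0.5, 1, 2, 4, 8]
ods = [0.92, 0.88, 0.45, 0.06, 0.05, 0.05]
print(mic_from_od600(concs, ods))    # prints 2, i.e., an MIC of 2 ug/mL
```

Under this rule, the two-fold step size of the dilution series is also why a single two-fold shift in MIC, such as that seen for the JIR8094 mutants, is interpreted cautiously: it is the smallest difference the assay can register.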
ENDOSCOPIC PLASMA ARGON COAGULATION IN TREATMENT OF WEIGHT REGAIN AFTER BARIATRIC SURGERY: WHAT DOES THE PATIENT THINK ABOUT THIS?

Background: Bariatric surgery, especially Roux-en-Y gastric bypass, is an effective treatment for refractory morbid obesity, causing the loss of 75% of initial excess weight. After the surgery, however, weight regain can occur in 10-20% of cases. To help, endoscopic argon plasma coagulation (APC) is used to reduce the anastomotic diameter. Many patients who undergo this treatment are not always familiar with the procedure and its respective precautions. Aim: The aim of this study was to determine how well the candidate for APC understands the procedure and absorbs the information provided by the multidisciplinary team. Method: We prepared a questionnaire with 12 true/false questions to evaluate the knowledge of the patients about the procedure they were to undergo. The questionnaire was administered by the surgeon during consultation in the preoperative period, and the patients were invited to fill it out. Results: We found that the majority had learned about the procedure through the internet. They knew it was an outpatient treatment, that the anesthesia was similar to that for endoscopy, and that they would have to follow a liquid diet; but none of them knew that the purpose of this diet was to improve local wound healing. Conclusion: Bariatric patients who have a second chance to resume weight loss need continuous guidance. The internet should be used by the multidisciplinary team to promote awareness that APC will not be sufficient for weight loss and weight-loss maintenance in the long term. Furthermore, there is a need to clarify once more the harm of drinking alcohol during the process of weight loss, making its harmful effects widely known.

INTRODUCTION

Weight regain after bariatric surgery, especially Roux-en-Y gastric bypass, shows a high prevalence and can occur 18-24 months after the surgical procedure. There are many possible causes, such as returning to the bad eating habits held before the operation, excessive alcohol consumption, physical inactivity, and over-reliance on bariatric surgery alone to accomplish the whole process of weight loss and weight-loss maintenance, without the patient's ongoing commitment to the chronic treatment of obesity. Although regular follow-up with a multidisciplinary team is the best way of controlling and treating weight regain, an innovative and non-invasive treatment known as endoscopic argon plasma coagulation (APC) (2) has been utilized in Brazil since 2009 by the authors to help control it. APC is a serial outpatient procedure performed by upper digestive endoscopy; it gradually reduces the diameter of the gastrojejunal anastomosis, increases gastric emptying time and causes early satiety, reducing food intake and thereby fostering weight loss. However, patients do not always know and understand the directions of the multidisciplinary team for the procedure. The need to break the cycle of weight regain, which affects many, makes any information that gives hope for weight loss seem like the best option of the moment. The tendency is that doubts appear after the procedure and need to be resolved, in case they had not been understood at the first session of APC. The objective of the present study was to analyze how well the candidate for the APC procedure understands the process and absorbs the information provided by the multidisciplinary team.
METHODS

A questionnaire was prepared with 12 true/false questions (Figure 1) for the purpose of evaluating the knowledge of the patients about the procedure they were to undergo. The questionnaire was administered by the surgeon during consultation in the preoperative period. The patients were invited to fill out the questionnaire and asked to sign an informed consent form. The study was approved by the Research Ethics Committee of Hospital Vita Batel, Curitiba, PR, Brazil.

Sixty-nine questionnaires were administered to patients of both sexes who were candidates for the procedure, either to treat weight regain caused by dilation of the gastrojejunal anastomosis or to complete the weight loss not achieved with bariatric surgery alone. Of the patients who filled out the questionnaire of knowledge about the APC procedure, two turned in the questionnaire blank (2.89%) and another two participants (2.89%) filled out only the first page, leaving it incomplete. Incomplete and blank questionnaires were excluded from the sample, which then totaled 65 patients. Of these, only nine had undergone a previous psychological interview about APC (13.8%); this variable did not interfere with the results of the questionnaire. One of the participants who turned in a blank questionnaire and another who responded only to the first page were excluded. The data obtained were tabulated on a spreadsheet (Excel, version 2007) and the mean of the responses determined.

RESULTS

Of the total number of patients who filled out the questionnaire, 40 (61.5%) knew about the procedure through the internet, which they considered the most effective means of access and disclosure; four (6.15%) learned of it through friends and eight (12.3%) from medical recommendation, while six were informed directly by their surgeon (9.12%) and one through a magazine (1.5%). Other reasons for seeking APC were given by five persons (7.65%) (Figure 2A). Of the group in general, 89.2% of the interviewees were aware that APC did not guarantee weight-loss maintenance, versus six (9.2%) who thought that it would (Figure 2B).

When asked what the effective function of the procedure was, 59 persons (90.7%) showed a good comprehension of the procedure: that it served to narrow the food passageway and to allow more time for gastric satiety, which can minimize food intake. However, three (4.16%) considered this information false and another three (4.16%) left the question blank (Figure 3A). With regard to the number of necessary sessions, the rate of doubt was high: 69.2% of the interviewees believed that up to three sessions were needed, 1.5% (one patient) understood that only one session was needed, 7.6% (five) thought it was important to have five to seven sessions, and one person (1.5%) left the question blank (Figure 3B).

On the necessity of hospitalization, anesthesia and rest after the procedure, 16 patients (24.6%) considered it necessary to be hospitalized and to rest besides receiving anesthesia, 67.6% (44 persons) understood that this special care was not needed, and only five (7.6%) did not answer the question (Figure 4A). When questioned whether APC is an outpatient, minimally invasive procedure, 98.4% (64 patients) confirmed the information and only one (1.5%) did not give an answer. A contentious issue was the consumption of alcohol after the procedure, which showed divided opinions: 40% of the interviewees believed that alcohol could have a negative effect on the stomach, 47.7% thought that alcohol did not cause any problem, and 12.3% left the question blank (Figure 4B).
The restriction of food consistency after APC is very important, and 73.8% of the interviewees had the idea that a liquid diet fostered wound healing and protection of the stomach; still, a large percentage (18.4%) believed that the limitation of food consistency was to promote weight loss (Figure 5A). Of the interviewees, 80% knew that the anesthesia utilized for APC was not the same as for bariatric surgery and that only endoscopy was involved, facilitating recovery and the return to work and daily activities; but six (9.2%) thought that the anesthesia was the same as in the previous operation and seven (10.7%) left the question blank (Figure 5B).

For 87.6% of the patients, eating anything they wanted would not be allowed and there would be a need for dietary restrictions similar to those after bariatric surgery; but 3.07% (two persons) believed that they could eat all the food they wanted, and six (9.2%) did not know how to respond (Figure 6A). To undergo APC, 89.2% understood that they needed to have had bariatric surgery, but three thought that prior bariatric surgery was not necessary and four left the question blank (Figure 6B). A worrisome result was that 36.7% of the candidate patients for APC believed that it was not wrong to drink alcohol after the procedure, and only 50.7% were aware that drinking alcoholic beverages could hamper weight loss and cause discomfort. Furthermore, eight persons (12.3%) did not know how to answer this question.

DISCUSSION

Obesity is a chronic disease that requires continuous treatment to obtain satisfactory results. What was observed in this study was that many patients who sought various treatments were not always able to follow the directions they received from the multidisciplinary team, besides having some misconceptions about the treatments proposed and their limitations, be they dietetic or behavioral (4). The majority of those operated on wanted to maintain their weight loss and slimmer body image, but they were not always prepared to follow the guidelines that chronic treatment requires (10). The use of the internet (8), with the formation of obesity support groups, is becoming an effective means of spreading information on the subject, as confirmed in the present study, where 44.6% of the interviewees knew about APC through this form of communication.

With bariatric surgery, patients need to change their lifestyle to adapt to new conditions. APC consists of a non-contact electrocoagulation technique in which radiofrequency energy is applied to the tissue by means of ionized argon gas; 87.6% of the patients understood its limitations and conceded that it would not be the absolute solution for avoiding weight regain and that they would have to change their habits to obtain satisfactory results. According to Carolyn et al. (3), during the use of calorie-restricted diets, patients need to be careful in their choice of macronutrients - with priority for protein - to diminish the risk of weight regain. Among the many treatment options for weight loss, the patients understood that the APC procedure was not invasive and did not require hospitalization, that all would be discharged soon after the procedure, and that its objective was the narrowing of the food passageway. The majority understood that the number of sessions would be two or three, depending on the individual.
In relation to alcohol, it is worrisome that 47.6% considered that such beverages were not harmful to the health of someone subjected to APC, and only 40% of the patients believed that alcohol could have a corrosive effect after the procedure. According to Yusef Kudsi et al. (11), there is a high prevalence of alcoholism before the operation, and this could be the main reason for weight regain in those operated on. Because it is a calorie additive, alcohol requires continuous multidisciplinary follow-up (9). Along this same line, 50.7% of the interviewees recognized that alcohol could have harmful effects on health, but still 36.9% did not consider this notion to be true. Ferguson et al. (6), in studying lifestyle, found that those who consumed alcoholic beverages and drugs before the bariatric procedure had a greater predisposition to maintain this habit and regain weight. In the present study, preoperative data were not analyzed.

Regarding the liquid diet, a high percentage of patients (18.4%) believed that the aim of this restriction in consistency was to promote weight loss, which is not true. The use of liquids immediately after APC is to facilitate wound healing in the stomach and, consequently, to elicit a weight-loss rebound, basically through loss of water and muscle mass, which is not physiologically healthy but is needed at that moment. About 88% of the interviewees understood the necessity of restricting calories to promote the desired weight loss and of changing eating habits as suggested for the postoperative period of bariatric surgery. It is known that patients may afterwards return to old eating habits, favoring soft foods to the detriment of protein-rich foods. This occurs because of the difficulty of accepting meats, due to the lack of chewing and the decrease in digestive enzymes. This dietary choice makes it difficult to lose weight and maintain weight loss in the long term (1). Himpens et al. (7) demonstrated that weight regain occurs among gastric bypass patients and that these patients need alternative treatments to deal with returning morbidities. The majority of patients (89.2%) accepted that they needed to have had bariatric surgery before APC, and that only simple anesthesia, as for endoscopy, would be utilized. Erick et al. (5) examined the mechanisms of weight regain after loss and concluded that precautions need to be continuous, since this is a chronic disease.

Since there is a great abundance of information on obesity on the internet, much of it without scientific merit, teams specialized in obesity need to have effective means of offering support to patients, with more in-depth information on the methods of treatment and their consequences and limitations. It should especially be clarified that psychopathological disturbances present in the preoperative period, or arising after weight regain and treatment with APC, will not be resolved by the procedure per se, and that they need to be handled and controlled with specific treatments. Further correlation studies should be developed to better understand which operated patients regain weight and what their expectations of APC are.

CONCLUSIONS

Bariatric patients who have a second chance to resume weight loss need continuous guidance. The internet should be used by the multidisciplinary team to promote awareness that APC alone will not be sufficient for weight loss and weight-loss maintenance in the long term. Furthermore, there is a need to reemphasize the harm of drinking alcohol during the process of weight loss, making its harmful effects widely known.
High Elmo1 expression aggravates and low Elmo1 expression prevents diabetic nephropathy

Significance

About one-third of patients with type 1 diabetes mellitus develop nephropathy, which often progresses to end-stage renal disease. The present study demonstrates that below-normal Elmo1 expression in mice ameliorates the albuminuria and glomerular histological changes resulting from long-standing type 1 diabetes, whereas above-normal Elmo1 expression makes both worse. Increasing Elmo1 expression leads to aggravation of oxidative stress markers and enhances the expression of fibrogenic genes. Suppressing Elmo1 action in human patients could be a promising option for treating or preventing the progressive deterioration of renal function in diabetes.

Human genome-wide association studies have demonstrated that polymorphisms in the engulfment and cell motility protein 1 gene (ELMO1) are strongly associated with susceptibility to diabetic nephropathy. However, proof of causation is lacking. To test whether modest changes in its expression alter the severity of the renal phenotype in diabetic mice, we have generated mice that are type 1 diabetic because they have the Ins2Akita gene, and also have genetically graded expression of Elmo1 in all tissues ranging in five steps from ∼30% to ∼200% normal. We here show that the Elmo1 hypermorphs have albuminuria, glomerulosclerosis, and changes in the ultrastructure of the glomerular basement membrane that increase in severity in parallel with the expression of Elmo1. Progressive changes in renal mRNA expression of transforming growth factor β1 (TGFβ1), endothelin-1, and NAD(P)H oxidase 4 also occur in parallel with Elmo1, as do the plasma levels of cystatin C, lipid peroxides, and TGFβ1, and erythrocyte levels of reduced glutathione. In contrast, Akita type 1 diabetic mice with below-normal Elmo1 expression have reduced expression of these various factors and less severe diabetic complications. Remarkably, the reduced Elmo1 expression in the 30% hypomorphs almost abolishes the pathological features of diabetic nephropathy, although it does not affect the hyperglycemia caused by the Akita mutation. Thus, ELMO1 plays an important role in the development of type 1 diabetic nephropathy, and its inhibition could be a promising option for slowing or preventing progression of the condition to end-stage renal disease.
reactive oxygen species | 3′-untranslated region | fibrosis

Diabetic nephropathy is the leading cause of end-stage renal disease in developed countries (1). Although the control of blood glucose levels remains the mainstay of preventing diabetic nephropathy, cumulative epidemiological studies have also provided evidence that genetic factors partly account for the severity of the disease (2, 3), and human genome-wide association studies have reported several candidate genes conferring susceptibility or resistance to diabetic nephropathy. The gene coding for engulfment and cell motility protein 1 (ELMO1), first discovered as a gene required for phagocytosis of apoptotic cells and cell motility (4), is a strong candidate, but proof of causation is lacking. In a genome-wide case-control association study in Japan, over 80,000 single-nucleotide polymorphism (SNP) loci were tested and one SNP locus in the 18th intron of the ELMO1 gene was found to be strongly associated with nephropathy due to type 2 diabetes (χ2 = 19.9; P = 0.000008; odds ratio, 2.7) (5). Later studies have demonstrated an association of SNPs in the ELMO1 gene with susceptibility to type 1 diabetic nephropathy in Caucasians (6, 7), Pima Indians (8), African Americans (9), and a Chinese population (10), although some other studies have reported that the association is not significant (11, 12).

To determine whether modest genetic changes in the levels of Elmo1 expression cause differences in the severity of the diabetic nephropathy that occurs in type 1 diabetes, we have used our previously described method (13) to generate mice having five genetically graded levels of Elmo1 mRNA expression ranging from ∼30% to ∼200% normal, and have studied their phenotypes after making them type 1 diabetic by crossbreeding them with mice carrying the dominant Akita mutation in the insulin 2 gene (Ins2Akita). Here, we show that the severity of renal fibrosis and the amount of urinary albumin excretion in the Akita diabetic mice parallel the genetic levels of Elmo1. The indices of reactive oxygen species (ROS) also increase as the expression of Elmo1 increases, although plasma glucose levels and blood pressure do not differ significantly among the mice with the five Elmo1 genetic levels. These results indicate that ELMO1 plays an important role in the development of diabetic nephropathy, probably by increasing oxidative stress.

Generation of Akita Diabetic Mice Having Five Genetically Different Levels of Elmo1

We first used our published method (13, 14) to generate mice having an Elmo1 allele in which the 3′-untranslated region (3′-UTR) of the gene is replaced with that of the cFos gene (to form a low-expressing L allele) or with that of the bovine growth hormone gene, bGH (to form a high-expressing H allele) (Fig. 1A). Comparison of the effects of these 3′-UTRs on gene expression shows that the stability of an mRNA having the natural Elmo1 3′-UTR is intermediate between that of an mRNA having the Fos 3′-UTR and that of an mRNA having the bGH 3′-UTR (Fig. 1 B and C).
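The rationale for the 3′-UTR swap can be made concrete with a back-of-the-envelope steady-state calculation: at a constant transcription rate, steady-state mRNA abundance is proportional to mRNA half-life (level = synthesis/decay, with decay = ln2/half-life). The sketch below uses invented half-lives chosen only to reproduce the reported ~30-200% range, and assumes the two alleles of a diploid genotype contribute additively; neither assumption reflects measured values from this study.

```python
# Invented illustrative half-lives (hours) for the three 3'-UTR variants.
half_life = {"Fos (L)": 0.5, "Elmo1 (WT)": 1.5, "bGH (H)": 3.0}

# At constant transcription, steady-state level scales with half-life;
# normalize so one wild-type allele contributes 50% of normal expression.
allele_level = {k: 0.5 * v / half_life["Elmo1 (WT)"]
                for k, v in half_life.items()}

genotypes = {
    "L/L": ("Fos (L)", "Fos (L)"),
    "L/+": ("Fos (L)", "Elmo1 (WT)"),
    "WT":  ("Elmo1 (WT)", "Elmo1 (WT)"),
    "H/+": ("Elmo1 (WT)", "bGH (H)"),
    "H/H": ("bGH (H)", "bGH (H)"),
}
for name, (a, b) in genotypes.items():
    print(f"{name}: {allele_level[a] + allele_level[b]:.0%} of normal")
# -> L/L 33%, L/+ 67%, WT 100%, H/+ 150%, H/H 200%
```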
Comparison of the effects of these 3′-UTRs on gene expression shows that the stability of an mRNA having the natural Elmo1 3′-UTR is intermediate between that of an mRNA having the Fos 3′-UTR and that of an mRNA having the bGH 3′-UTR ( Fig. 1 B and C). Significance About one-third of patients with type 1 diabetes mellitus develop nephropathy, which often progresses to end-stage renal diseases. The present study demonstrates that below-normal Elmo1 expression in mice ameliorates the albuminuria and glomerular histological changes resulting from long-standing type 1 diabetes, whereas above-normal Elmo1 expression makes both worse. Increasing Elmo1 expression leads to aggravation of oxidative stress markers and enhances the expression of fibrogenic genes. Suppressing Elmo1 action in human patients could be a promising option for treating/preventing the progressive deterioration of renal function in diabetes. Systemic Parameters Affecting Diabetic Nephropathy in Akita Diabetic Mice with Five Graded Expressions of Elmo1 Plasma glucose and plasma insulin levels are not significantly different among the Akita diabetic mice with the five Elmo1 genotypes (Fig. 2 A and B), nor are their arterial pressures different (Fig. 2C). However, plasma lipid peroxides (a marker of oxidative stress) are progressively increased (Fig. 2D), and the levels of reduced form of glutathione in erythrocytes is reduced (Fig. 2E) as the expression of Elmo1 increases in the diabetic mice. Plasma levels of transforming growth factor β1 (TGFβ1) also increase as the expression of Elmo1 increases in the diabetic mice (Fig. 2F). These results demonstrate that increased expression of Elmo1 does not affect plasma glucose levels and blood pressure in the diabetic mice, but that it progressively enhances oxidative stress and increases the plasma levels of the fibrogenic cytokine TGFβ1. Overexpression of Elmo1 Increases and Underexpression of Elmo1 Decreases Renal Histological Changes, Renal Excretory Function, Urinary Excretion of Albumin, and the Renal Expression of Fibrogenic Genes in Akita Diabetic Mice Microscopic studies of the glomeruli of the diabetic mice ( Fig. 3 A-E) showed that the diabetic mice with wild-type Elmo1 alleles (WT:A/+) at age 40 wk had pathological changes in their glomeruli typical of diabetic nephropathy, including mesangial cell proliferation and accumulation of periodic acid-Schiff (PAS)positive materials (Fig. 3C). These pathological changes were progressively exacerbated when expression of Elmo1 increased above normal in the H/+:A/+ and H/H:A/+ Akita diabetic mice (Fig. 3 D and E). In marked contrast, the pathological changes were completely absent in the Akita mice with the lowest expression of Elmo1 (L/L:A/+; Fig. 3A), and were still reduced in the L/+:A/+ mice (Fig. 3B) relative to those in the diabetic mice with wild-type Elmo1 (Fig. 3C). We quantitated the renal pathology in the five Elmo1 genotypes by measuring the fraction of PAS-positive area per glomerular tuft area, an indicator of mesangial expansion, and found that PAS-positive material increased more than 10-fold in the Akita diabetic animals as Elmo1 expression increased (Fig. 3K). The open capillary area in the L/L:A/+ hypomorphs decreased from almost twice that in the WT:A/+ diabetic mice to less than one-half in the H/H:A/+ diabetic mice (Fig. 3L). The pathology revealed by light microscopy was further evaluated by electron microscopy (Fig. 3 F-J). 
The results confirmed that, at age 40 wk, the Akita mice with wild-type Elmo1 alleles (WT:A/+) had ultrastructural changes typical of advanced diabetic nephropathy, including a severalfold increase in the thickness of the glomerular basement membrane (GBM) together with marked podocyte effacement (Fig. 3H). The GBM thickening (quantitated in Fig. 3M) was progressively exacerbated in the H/+:A/+ and H/H:A/+ mice with above-normal expression of Elmo1 (Fig. 3 I and J). The ultrastructure of the glomeruli in the L/L:A/+ mice was indistinguishable from that in wild-type nondiabetic C57BL/6 mice (Fig. 3 F and M). Podocyte effacement was present in the glomeruli of all of the diabetic mice except the L/L:A/+ mice with one-third normal Elmo1 expression. We did not observe any obvious lesions in the renal vasculature other than in the glomeruli of the Akita mice. We conclude that the nephropathy observed in the mature Akita mice with type 1 diabetes is strongly affected by the expression of Elmo1, ranging from almost the same as in nondiabetic Elmo1 WT mice to severe diabetic nephropathy.

We studied kidney function by measuring plasma cystatin C and found that cystatin C levels increase as the expression of Elmo1 is increased in Akita mice (Fig. 3N). Likewise, urinary excretion of albumin was progressively increased when expression of Elmo1 increased (Fig. 3O). Thus, visual inspection of the light and electron microscope images of the glomeruli, together with their quantitative evaluation, shows that below-normal Elmo1 expression ameliorates the nephropathy caused by diabetes whereas above-normal expression exacerbates it.

Discussion

In the current study, we have shown that genetically increased levels of Elmo1 in Akita diabetic mice lead to progressive increases in the concentrations of oxidative stress markers, in the severity of pathological features characteristic of diabetic nephropathy (including glomerular fibrotic changes and urinary excretion of albumin), and in the renal expression of fibrogenic genes. In contrast, genetically decreased levels of Elmo1 lead to less urinary output of albumin, less tissue fibrosis, and less expression of fibrogenic genes. Strikingly, the kidneys of the diabetic mice with ∼30% Elmo1 expression exhibited no pathological changes; their kidneys were the same as those of nondiabetic Elmo1 WT mice. Remarkably, the plasma level of lipid peroxides (a measure of oxidative stress) and the erythrocyte level of reduced glutathione in the 30% Elmo1 diabetic mice are also indistinguishable from those in nondiabetic Elmo1 WT mice. Plasma glucose levels and blood pressures of the diabetic mice were unaffected by differences in Elmo1 expression. Thus, our results show that genetic increases in Elmo1 expression causally aggravate the severity of diabetic nephropathy, whereas reducing expression of Elmo1 to about one-third is sufficient to prevent any pathological effects of the diabetes.

Elmo1 was first discovered as a factor required for the phagocytosis of dying cells and cell migration in Caenorhabditis elegans (4); it also acts as an upstream regulator of Rac1 in the Ras-related C3 botulinum toxin substrate (Rac) signaling pathway. Elmo1 is part of the Rac-guanine nucleotide exchange factor (Rac-GEF), which converts inactive Rac-GDP into active Rac-GTP (15). Earlier investigations have shown that expression of Elmo1 is increased in db/db type 2 diabetic mice relative to its expression in nondiabetic control mice (5).
In our present study, we have found that Elmo1 expression is also elevated in Akita type 1 diabetic mice.

A possible chain of events leading from Elmo1 to an increase in ROS and to nephropathy is apparent. Thus, in KKA(y) mice, which develop obesity-related diabetic kidney disease, the activity of Rac1 is increased in mesangial cells, and treatment with a pan-Rac inhibitor (EHT1864) mitigates the renal pathology (16). Rac1, Rac2, and Rac3 increase the formation of superoxide anion via activation of NAD(P)H oxidases (17-20), which are encoded by Nox genes. Increased production of superoxide by the mitochondrial electron transport chain has been shown to be the causal link between elevated glucose and three major pathways of hyperglycemic damage (activation of protein kinase C, formation of advanced glycation end products, and activation of the polyol pathway) in cultured bovine endothelial cells (21), and there is growing evidence that NAD(P)H oxidase is a critical enzyme for producing ROS (22). Additionally, pharmacological inhibition of NAD(P)H oxidase with apocynin has been shown to reverse renal mesangial matrix expansion in streptozotocin-induced type 1 diabetic mice (23). Consequently, it is likely that increased expression of Elmo1 enhances the development of diabetic nephropathy by activating Rac and increasing NAD(P)H oxidase, which in turn increases ROS.

A possible chain of events between increases in Elmo1 expression and glomerulosclerosis is also apparent. Thus, pharmacological inhibition of different ROS sources, including NAD(P)H oxidase and the mitochondrial respiratory chain, decreases the transcription of Tgfb1 via reduced activity of activator protein 1 (24), indicating that ROS-induced enhancement of the transcription of Tgfb1 may at least in part account for the increase in Tgfb1 expression in diabetes. In vitro overexpression of Elmo1 in cultured COS cells increases the expression of Tgfb1 and its downstream extracellular matrix genes, including fibronectin, collagen 1A1 (5), and integrin-linked kinase (5). Consequently, it is likely that the overexpression of Elmo1 in diabetes increases Tgfb1 expression and enhances the development of nephropathy via its effects on ROS formation.

In summary, we have studied the renal phenotypes of Akita type 1 diabetic male mice having five different genetically determined levels of Elmo1 expression and find that Elmo1 expression levels positively correlate with glomerular fibrosis and urinary excretion of albumin, although no significant differences in plasma glucose or blood pressure are seen among the five genotypes. Most notable is our finding that the nephropathic effects of diabetes are completely prevented if Elmo1 expression is genetically decreased to one-third, even though the diabetic hyperglycemia persists. Thus, Elmo1 plays an important pathophysiological role in the development of albuminuria and the tissue fibrotic changes characteristic of diabetic nephropathy. The remarkable effects of modestly decreasing its levels genetically in mice suggest that pharmacologically suppressing Elmo1 action in the kidney could be a promising option for preventing the progression of early stages of nephropathy to end-stage renal disease in patients with type 1 diabetes.

Materials and Methods

Animals. Mice having five graded expression levels of Elmo1 were generated by targeted replacement of the 3′-UTR of Elmo1 with the unstable Fos 3′-UTR or with the stable bGH 3′-UTR, as previously described (13).
To study the effects of Elmo1 on the phenotype in diabetes, heterozygous and homozygous mice having hypomorphic (L) or hypermorphic (H) alleles for Elmo1, generated on a C57BL/6 genetic background (Ingenious Targeting Laboratory), were crossbred with mice heterozygous for the Akita mutation in the insulin 2 gene on a C57BL/6 genetic background (The Jackson Laboratory). All mice were kept under husbandry conditions conforming to the guidelines of the University of North Carolina (UNC) Institutional Animal Care and Use Committee.

Measurement of Biological Parameters. Plasma glucose levels were determined with the glucose oxidase method (Wako Chemical). Plasma insulin levels were determined with ELISA (Crystal Chem). Plasma TGFβ1 levels were determined with ELISA (Quantikine Mouse/Rat/Porcine/Canine TGFβ1 Immunoassay; R&D Systems). Plasma cystatin C levels were determined with ELISA (Quantikine Mouse/Rat Cystatin C Immunoassay; R&D Systems). Plasma lipid peroxide was quantified as thiobarbiturate-reactive substances (Cayman Chemical). The reduced form of glutathione (GSH) in erythrocytes was determined with a Glutathione assay kit (EMD Millipore). Metabolic balance studies were performed using metabolic cages (Solo Mouse Metabolic Cage; Tecniplast).

Histology. After cutting the inferior vena cava, the left ventricle was punctured with a 23-gauge needle and perfused with PBS for 3 min and with 4% (wt/vol) paraformaldehyde for 5 min. Thereafter, the tissues were dissected out and kept in 4% (wt/vol) paraformaldehyde for at least 3 d. They were then paraffin-embedded and sectioned. Stained sections were prepared by the Center for Gastrointestinal Biology and Diseases Histology Core at UNC and imaged on an Olympus BX61 microscope. For electron microscopy, grids were prepared by the Microscopy Services Laboratory at UNC and imaged on a Zeiss TEM 910 transmission electron microscope.

Blood Pressure and Pulse Rate Measurement. Blood pressure and pulse rate were measured with a tail-cuff method (25).

Quantitative Reverse Transcription-PCR. Total RNA was extracted from different tissues, and the mRNAs were assayed by quantitative reverse transcription-PCR as previously described (26). The primers and probes used to measure the mRNAs are shown in Table S1.

Statistical Analysis. Data are expressed as means ± SEs. To compare groups, we used one-factor ANOVA. Post hoc pairwise comparisons were performed by the Tukey-Kramer honestly significant difference test (JMP 8.0; SAS Institute).
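As a point of reference, the stated analysis (one-factor ANOVA followed by Tukey-Kramer pairwise comparisons) was performed in JMP; an equivalent open-source sketch, using fabricated placeholder values rather than the study's data, could look like this:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements (e.g., an albuminuria-like index) for the five
# genotypes; means and spreads are invented purely for illustration.
rng = np.random.default_rng(0)
groups = {name: rng.normal(mu, 1.0, size=8)
          for name, mu in [("L/L", 2), ("L/+", 4), ("WT", 6),
                           ("H/+", 9), ("H/H", 12)]}

f_stat, p_value = f_oneway(*groups.values())
print(f"one-factor ANOVA: F = {f_stat:.2f}, p = {p_value:.2g}")

# Post hoc pairwise comparisons (Tukey HSD)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 8)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```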
Improving the Reproductive Health Status of Rural Women of Umunze in Orumba South of Anambra State, Nigeria

The study focused on improving the reproductive health care status of women in rural areas, with a case study of Umunze in Orumba South Local Government Area of Anambra State. Specifically, the study identified and assessed some reproductive health care practices as adopted by the women in rural areas, impediments to attaining adequate reproductive health care, and strategies for improving the reproductive health status of the rural women in the area of the study. A descriptive survey that utilized a questionnaire was used for data collection. The target population consisted of all women of reproductive and child-bearing age. Percentages were used to analyze the data. Ten major findings were identified and enumerated within the study, followed by recommendations.

Introduction

Health, according to the World Health Organization (WHO, 1991), is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity. Reproductive health therefore involves the reproductive system, its processes and its functions at all stages of life. In line with WHO's definition of health, the International Conference on Population and Development (ICPD, 1994) defined reproductive health as a state of complete physical, mental and social well-being, and not merely the absence of disease or infirmity, in all matters related to the reproductive system, its functions and processes. Reproductive health thus demands that people are able to have a responsible, satisfying and safe sex life. Individuals also have the capability to reproduce and the freedom to decide if, when and how often to do so. Reproductive health also includes the right of men and women to be informed and to have access to safe, effective, affordable and acceptable methods of family planning of their choice (ICPD, 1994).

The components of reproductive health, as stated by Population Reports (1996), ICPD (1994) and WHO (1994), include, among others, responsible reproductive sexual behavior; family planning programmes which focus on providing information and services on contraception widely; effective maternal care; and safe motherhood, which ensures that pregnant women receive adequate pre-natal, safe delivery and post-natal care.
In order to ensure adequate and appropriate reproductive health for women, particularly in the rural areas, women should be exposed to appropriate health care services that will enable them to go safely through pregnancy and childbirth while providing them with the best chance of healthy infants. Women in developing countries like Nigeria are often subjected to health risks due to inadequate responsiveness and the lack of health services to meet their health needs, particularly with regard to sex and reproduction (United Nations, 1996). In a typical Nigerian culture, and with a focus on the area of study (Umunze), observations and oral interviews conducted among a sampled group indicate that male dominance of decision making on reproductive health matters and related issues, and sometimes male opposition to fertility regulation, represent an important barrier to women's use of reproductive health care facilities. Again, the relatively low status of women within the family, as obtains in most Nigerian cultures and particularly in Anambra State, often impairs the rights of women to adequate reproductive health care, since it makes their wishes subordinate to those of their husbands.

The rural women are the focus of this study since the majority of them, living in rural areas, are mostly illiterate and thus have limited access to reproductive health care information systems. The custom, tradition and lower status of women in the family also place them at a disadvantaged position to benefit much from reproductive health care initiatives. Pregnancy and childbirth have been identified as leading causes of mortality and morbidity among women of reproductive and child-bearing age (Arkutu, 1995).

Statistics from the Population Reference Bureau (1997), UN (1994) and Arkutu (1995) indicate that these reproductive health care problems and their complications are pronounced in developing countries like Nigeria. Again, the existing health problems persist because there is a general lack of basic infrastructure for a sound health care system. In a place like Umunze, the area of study, investigations reveal that only one health centre is available, which cannot adequately serve the rural women who dwell there; these women are predominantly farmers and seldom have time to spare for other activities outside their farming, more so when the health facility fails to provide adequate medical and reproductive health care services and a functional Health Management Information System (HMIS). This system is necessary to help bring relevant health data and health assessment within the reach of the people easily and at affordable cost.

The question here is: how can the reproductive health care status of rural women be improved, particularly in the area of study, which the researcher chose because the women's predominant occupation seldom gives them the opportunity to attend to other aspects of life? What impediments affect the reproductive health of the rural women in the area of study? What strategies can be adopted to help improve the reproductive health of these women? These and other questions are what this study seeks answers to. It is believed that the more alternatives and choices the rural women in Anambra State have, as will be highlighted in the study, the more likely the rural women will be willing and able to utilize the services and facilities available.
Statement of the Problem

Pregnancy and childbirth have been identified as leading causes of mortality and morbidity among women of reproductive and child-bearing age (Arkutu, 1995). The Population Reference Bureau, PRB (1997), noted in a summary of findings that 585,000 women die every year from complications of pregnancy, childbirth and related causes. Ninety-nine percent of these women come from developing countries like Nigeria, of which Umunze, the area of study, is part.

Women in these areas face greater risks during pregnancy, childbirth and the post-partum period as a result of limited access to reproductive health facilities, which would otherwise help to improve their reproductive health generally. Of serious concern is the status of the rural dwellers, who are mostly illiterate and predominantly have impaired access to adequate medical and reproductive health care services, in addition to a moribund health system owing to the conspicuous absence of a functional health management information system (HMIS). For the area of study, Umunze, one wonders what help just one health facility/centre located in the town could offer to most of its women, who are predominantly housewives and fully involved in reproductive health care practices. The alternatives these women have are the privately owned hospitals and a few mission hospitals, which oftentimes scare these women away with huge medical bills. Previous research and observations show that most reproductive health care facilities, even when they are available, are seldom used by these women, particularly the rural women, due to ignorance and oftentimes cultural obligations. The relatively low status of women in most Nigerian families places the women at disadvantaged positions in terms of acquiring adequate reproductive health care. It is believed that when these problems are well addressed as they affect the rural women in Orumba South Local Government Area, it will go a long way toward improving and ensuring the overall health care of these women.

The focus of this study is primarily on improvement of the reproductive health care status of rural women in Umunze in Orumba South Local Government Area. Specifically, the study is aimed at:
1. Identifying some reproductive health care practices of the rural women in Umunze, Orumba South Local Government Area.
2. Identifying some major impediments to the acquisition of adequate reproductive health care by these women in the area of study.
3. Identifying and recommending some strategies that will help improve the reproductive health care status of the rural women in the area of study.

Significance of the Study

In Anambra State in general, and in Orumba South Local Government Area with particular focus on Umunze, the tradition, custom and lower status of women in the family place the man higher in taking reproductive health decisions. This is complicated by the fact that most of these rural women are illiterate and thus have limited access to reproductive health information and services. It is therefore believed that the findings of this study will expose the women's ignorance about much of their reproductive health needs, which will form a good basis for strengthening the health clinics and centres in various localities, particularly in the area of study, Umunze.
The findings of the study will also strengthen the ability of women to make reproductive health decisions in the family by increasing their self-esteem and confidence in choosing reproductive health care devices that suit them, without the influence of culture or tradition. The suggestions and inputs of this work will throw more light on areas of women's reproductive health care that need improvement. Government, communities and different localities can benefit immensely from the findings of this research by adopting the strategies enumerated for an improved health management information system to which women can always resort for enlightenment and health information. Finally, the findings of this study will form a good resource for individuals who want to carry out further research in related areas.

Methodology

The design adopted for the study was survey research. This was preferred because it allows direct contact with the respondents and helps to elicit more responses. The area of the study is Anambra State, but for the purpose of the study, Umunze in Orumba South Local Government Area was used. The population comprises all women of child-bearing or reproductive age in the area of study (Orumba South Local Government Area in Anambra State). For the purpose of the study, the area was divided into 4 zones covering all the villages in the area. In each zone, 40 respondents were randomly selected, making a total of 160 respondents. The household was used as the unit of observation.

A questionnaire was used to collect the data. It was developed based on the review of related literature and the purpose of the study. One hundred and sixty copies of the questionnaire were distributed to the respondents (women) in the selected areas. The questionnaire items were translated into the vernacular for illiterate women who could not read or interpret them. The completed questionnaires were collected on the spot to ensure a 100% return, and all 160 were subjected to data analysis.

The data collected were analyzed using simple percentages. Any response above 50% was taken as accepted, and any response below it as rejected.
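The simple-percentage rule just described is easy to make concrete in a few lines of code. The sketch below is a hypothetical illustration only; the item labels and response counts are invented, not the study's data. It assumes each item records the number of endorsing responses out of the 160 returned questionnaires and applies the 50% cut-off described above.

```python
# Minimal sketch of the simple-percentage analysis described above.
# Item labels and counts are hypothetical, not the study's actual data.

TOTAL_RESPONDENTS = 160  # 4 zones x 40 randomly selected women
CUTOFF = 50.0            # responses above 50% are taken as accepted

# number of respondents endorsing each questionnaire item (invented)
responses = {
    "uses family planning devices": 40,
    "attends regular medical check-ups": 52,
    "considers nutrition in reproductive health": 96,
}

for item, count in responses.items():
    pct = 100.0 * count / TOTAL_RESPONDENTS
    verdict = "accepted" if pct > CUTOFF else "rejected"
    print(f"{item}: {pct:.1f}% -> {verdict}")
```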
Table I: Responses on the reproductive health care practices adopted by women in Umunze, Orumba South Local Government Area.

The items and responses in Table I show that the respondents (rural women) generally have a poor attitude towards adopting healthy reproductive health care practices. This is evident from the negative responses on items 1, 2, 4, 5, 6 and 7, with percentage responses of 75, 62.5, 52.5, 61.2, 68.8 and 56.2 respectively. Since these items score above the 50% cut-off point, it is accepted that the negative responses to healthy reproductive health care habits are indicative of poor reproductive health care practices among the women.

The analysis in Table II clearly shows that most of the items enumerated as impediments to the reproductive health care of the rural women were accepted, with percentage scores above the 50% cut-off point. Item 7, however, was below the cut-off point, with a percentage response of 11.2%, meaning that it was not seen as an impediment or obstacle to the women's attainment of reproductive health care.

The data in Table III reveal that all the factors (items) enumerated were strongly endorsed by the respondents as strategies for improving the reproductive health care status of the women. One item recorded the highest percentage response of 86.3, while item 5 had the lowest percentage response of 55.6.

Major Findings

The following findings were made:
1. The rural women in the area of study were not familiar with, and do not regularly use, the family planning devices available.
2. Nutrition was not considered a vital factor in enhancing reproductive health, hence the women's poor feeding practices.
3. The rural women were highly marginalized in taking decisions on reproductive health matters.
4. Health practices such as regular medical check-ups and attending enlightenment seminars and workshops on reproductive health issues were not regularly observed by the women.
5. Reproductive health care facilities are not easily available and accessible to the rural women in Umunze.
6. Most of the facilities available to the women in the area are non-functional.
7. Most rural areas are inaccessible for disseminating information on reproductive health care matters.
8. The influence of husbands, religion, culture, tradition and some taboos were major impediments to the women achieving proper reproductive health.
9. Providing incentives such as free medical and reproductive health care can go a long way towards sensitizing the women to achieve proper reproductive health status.
10. Making the facilities available and accessible to the women is also a major strategy for improving the women's reproductive health.
Discussions

The findings of the study show that cultural and traditional factors, food taboos (such as what a woman should and should not eat during pregnancy), socio-economic factors and male dominance in decision making all discriminate against the rural women and place them at a disadvantage in achieving proper reproductive health status. McCauley et al (1994) supported this observation when they noted the custom, tradition and beliefs of a people as obstacles to obtaining adequate health information by most women, particularly in rural areas. Poor nutritional and medical habits, noted by Arkutu (1995) as often more prevalent among women in developing countries like Nigeria and particularly in rural areas, were also among the practices observed. Since women have a special and additional need for nourishment because they bear and nurture children, Arkutu (1995) further noted that they are more likely to suffer under-nourishment, which will in turn affect their unborn babies. Again, male dominance in decision making was identified as a strong factor in reproductive health matters. According to WHO (1995), this is attributable to the relatively low status of women in society, which makes their decisions subordinate to those of their husbands. Other major impediments to the women's reproductive health include the low literacy level of most of the women, which places them at a disadvantage. To this, Arkutu (1995) observed that the more educated a woman is, the more likely she is to make the right decisions concerning her health and that of her children.

The findings also indicate that women need to be empowered to give them health freedom. Empowerment, according to WHO (1992), is critical to securing safe motherhood because it enables women to articulate their needs and concerns. Empowering women means enabling them to overcome barriers and to make fully informed choices, particularly in the areas affecting the most intimate aspects of their lives. WHO (1998) supported this by stating that empowering women in the area of health requires more than health-related interventions; it requires social, economic and cultural conditions in which freedom and responsibility are given concrete meaning. Rural women in particular must have the means, both physical and psychological, to overcome barriers to safe motherhood.

Conclusion

Based on the findings of this study, the following conclusions can be drawn:
1. Most reproductive health care facilities are not readily available to the women in Umunze, Orumba South Local Government Area.
2. The majority of the rural women do not follow healthy reproductive care practices, and this affects their health status.
3. Most rural women in the area lack proper information on the utilization of the reproductive health care facilities that exist.
4. Husbands have greater influence on reproductive health matters than the women.

Recommendations

The following recommendations are made based on the findings:
1. Women should have access to accurate information about their reproductive health as well as to properly equipped, women-centred care.
2. Reproductive health care information must be taken seriously to the grass roots, and efforts made to protect the rural women from traditional and cultural norms and values that harm their reproductive life.
3. Women should be allowed greater freedom to determine their own health and life choices within families and communities.
4. Government and communities should provide well-fortified health and delivery centres for quality maternal care and child delivery. These should be located in various localities and made accessible to the women.

Table II: Responses on the impediments to obtaining adequate reproductive health care among rural women in Umunze, Orumba South Local Government Area.

Table III: Responses on strategies for improving the reproductive health care status of the rural women in Umunze, Orumba South Local Government Area.
Imatinib Attenuates Myocardial Fibrosis in Association with Inhibition of the PDGFRα Activity

Methods: Sixty male uninephrectomized Sprague-Dawley rats were assigned to three groups: control rats (CON group); deoxycorticosterone group (DOCA group); deoxycorticosterone and imatinib group (DOCA+IMA group). Systolic blood pressure (SBP) was measured biweekly. The apical portion of the left ventricle was studied. Sirius red staining, hematoxylin-eosin staining, immunohistochemistry and western blot assay were employed.

Introduction

Myocardial fibrosis is a common clinical problem; it is found in several heart diseases and is often associated with an inflammatory process. In the heart, excessive accumulation of extracellular matrix (ECM) components, especially the type I, III and V collagens, is responsible for myocardial fibrosis. Changes in tissue structure and cardiac dysfunction following myocardial fibrosis contribute to heart failure, arrhythmia, sudden cardiac death and other serious cardiovascular events. The molecular mechanisms of myocardial fibrosis are complex and still controversial. Previous studies suggested that myocardial fibrosis is closely related to an increase in endogenous mineralocorticoid. However, the roles of some molecular signaling pathways in mineralocorticoid-induced myocardial fibrosis remain unclear. Various cytokines and growth factors have been demonstrated to play crucial roles in the development of myocardial fibrosis, including transforming growth factor β (TGFβ)1, interleukin-1 (IL-1)2, tumor necrosis factor α (TNFα)3, and platelet-derived growth factor (PDGF)4. PDGF is a member of the growth factor family and has been found to promote the division and proliferation of fibroblasts and smooth muscle cells. There is evidence that high protein expression of PDGFs may lead to myocardial fibrosis, which suggests that PDGF-PDGFR signaling pathways may serve as important molecular mechanisms of myocardial fibrosis5.

Imatinib is a tyrosine kinase receptor inhibitor that is currently used mainly in the treatment of tumors in the clinic and has been confirmed to exert an inhibitory effect on PDGF receptor (PDGFRα and PDGFRβ) activity. In this study, the protective effect of imatinib and the protein expressions of PDGF-A, PDGF-C, PDGFRα, p-PDGFRα, procollagen I (PI), procollagen III (PIII), tenascin-C and fibronectin were investigated in deoxycorticosterone acetate (DOCA)/salt-induced hypertensive rats, aiming to examine whether imatinib can effectively attenuate myocardial fibrosis and to explore the relationship between the PDGFRα signaling pathway and myocardial fibrosis.
Animal Model

Sixty male Sprague-Dawley (SD) rats weighing 200-250 g were purchased from the Animal Experimental Center affiliated to Anhui Medical University and housed in a specific-pathogen-free environment. Before the experiment, animals were anesthetized by intraperitoneal injection of 10% chloral hydrate (400 mg/kg) and right nephrectomy was performed. One week after surgery, rats were fed with water containing 1% NaCl and 0.2% KCl and randomly divided into three groups (n=20 per group): 1) control group (CON group): distilled water was intragastrically administered daily, and subcutaneous injection of sesame oil was given once every 4 days; 2) deoxycorticosterone group (DOCA group): deoxycorticosterone (60 mg/kg every 4 days) was given subcutaneously, and distilled water was intragastrically administered daily; 3) deoxycorticosterone plus imatinib group (DOCA+IMA group): deoxycorticosterone (60 mg/kg every 4 days) was given subcutaneously, and imatinib (60 mg/kg daily) was intragastrically administered simultaneously. The volume of distilled water used in the control group was identical to the volume of distilled water containing drugs in the other two groups. Likewise, the volume of sesame oil used in the control group was equivalent to the volume of sesame oil containing drugs in the other two groups. Drug treatment lasted for 4 weeks.

Systolic Blood Pressure Measurement and Sample Collection

Systolic blood pressure (SBP) was measured on days 1, 14 and 28 using the tail-cuff method. On the 14th and 28th days after drug intervention, animals (n=10) were intraperitoneally anesthetized with 10% chloral hydrate (400 mg/kg) and then killed. The hearts were obtained; the bilateral atria, right ventricle, great vessels and connective tissues were removed; and the apical portion of the left ventricle was collected for further examination. The samples collected on the 14th day were fixed in 4% paraformaldehyde and embedded in paraffin, followed by sectioning and immunostaining. The samples obtained on the 28th day were divided in two: one part was fixed in 4% paraformaldehyde for HE staining and Sirius red collagen staining, and the other was stored at -80 ºC for western blot assay.

Inflammatory Pathological Changes and Immunohistochemistry

Two pathologists who were blinded to the study evaluated the sections. On the 14th and 28th days, the apical portion of the left ventricle was subjected to H&E staining for the examination of cardiac inflammation. The infiltration of myocardial macrophages was observed by immunohistochemistry at 14 days after drug intervention. ED-1 is a specific antigen on the surface of monocytes/macrophages. Immunohistochemistry was performed to detect ED-1 expression in the heart according to the manufacturer's instructions. Macrophages with brown membranes were regarded as positive for ED-1. Ten fields in the positive region were randomly selected at a magnification of ×400. Image-Pro Plus 6.0 software was employed to count the number of positive cells, followed by calculation of the number of macrophages per mm² of visual field. The average was then obtained for each section.
Detection of Myocardial Fibrosis and Vascular Remodeling

Myocardial collagen volume fraction (CVF) and myocardial perivascular collagen area (PVCA) were measured by picric acid-Sirius red staining on the 28th day. A Nikon photographic system was used and Image-Pro Plus 6.0 software was employed for analysis. CVF was expressed as the ratio of myocardial collagen area to the whole area of the visual field. Eight fields (×400) were selected randomly and the corresponding CVFs were measured in each section; the average was taken as the final CVF for the section. PVCA was expressed as the ratio of collagen area around a small artery to the artery lumen area. Four small arteries (×400) were chosen randomly in each section; the average was taken as the final PVCA for the section.

Western Blot Assay

The protein expressions of PDGF-A, PDGF-C, PDGFRα, p-PDGFRα, PI, PIII, fibronectin and tenascin-C in the heart were detected by western blot assay on the 28th day after drug intervention. Total protein was extracted using cell lysis buffer (main components: 20 mM Tris (pH 7.5), 150 mM NaCl, 1% Triton X-100, 1% deoxycholate, 0.1% SDS, 1 mM PMSF, 2 mg/mL leupeptin, 1 mM EDTA, sodium pyrophosphate, β-glycerophosphate and Na3VO4); the supernatant containing proteins was collected by centrifugation, and the protein concentration was determined with the BCA method. All samples were diluted to the same concentration. Three samples were randomly selected from each group and compared with those from the other two groups. Then, 50 μg of protein was subjected to polyacrylamide gel electrophoresis, transferred onto a PVDF membrane, and blocked in skim milk overnight. The membrane was incubated with primary antibody for 8-12 h and with secondary antibody for 2 h. Visualization was done with an ECL kit. The bands were scanned with a Bio-Rad imaging system, and Quantity One software was employed for the detection of optical density. β-actin served as an internal reference. The optical density of each target protein was normalized by that of β-actin, representing the relative amount of target protein.

Statistical Analysis

The SPSS version 13.0 software program was employed for statistical analysis. Data are presented as mean ± standard error (SEM). Means among multiple groups were compared with one-way analysis of variance (ANOVA). Ratios were compared with the chi-square test. A value of P<0.05 was considered statistically significant.
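The two morphometric ratios defined above are simple area ratios averaged over fields or vessels. The sketch below illustrates them under stated assumptions: areas are pixel counts as a package like Image-Pro Plus would report, and all numbers are invented placeholders, not the study's measurements.

```python
# Hypothetical sketch of the CVF and PVCA calculations described above.
# Areas are in pixels; the values are invented, not the study's data.

def cvf(collagen_areas, field_area):
    """Collagen volume fraction: mean ratio of collagen area to field area."""
    return sum(a / field_area for a in collagen_areas) / len(collagen_areas)

def pvca(perivascular_areas, lumen_areas):
    """Perivascular collagen area: mean collagen-to-lumen area ratio."""
    ratios = [c / l for c, l in zip(perivascular_areas, lumen_areas)]
    return sum(ratios) / len(ratios)

# eight randomly selected x400 fields per section
section_cvf = cvf([1200, 980, 1100, 1350, 900, 1050, 1230, 1010],
                  field_area=50_000)

# four randomly chosen small arteries per section
section_pvca = pvca(perivascular_areas=[300, 280, 350, 310],
                    lumen_areas=[900, 850, 1000, 950])

print(f"CVF  = {section_cvf:.3f}")
print(f"PVCA = {section_pvca:.3f}")
```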
Changes in SBP

There was no significant difference in SBP among the groups prior to the experiment. However, SBP in the DOCA and DOCA+IMA groups was significantly higher than in the CON group on the 14th and 28th days. No statistically significant difference in SBP was noted between the DOCA group and the DOCA+IMA group (Table 1).

Myocardial Inflammation

H&E staining revealed evident infiltration of inflammatory cells and mild fibrosis in the heart on the 14th day in the DOCA group. A large number of inflammatory cells infiltrated the heart in the DOCA+IMA group, while fibrosis was absent. Twenty-eight days after drug intervention, myocardial cells were regularly arranged, but infiltration of inflammatory cells and severe fibrosis were noted in the DOCA group. In the DOCA+IMA group, myocardial cells were regularly arranged; although there were inflammatory cells infiltrating the heart, fibrotic tissue was not found. The myocardial cells in the control group were normal throughout the experiment, and neither infiltrating inflammatory cells nor fibrotic tissue was found (Figure 1).

Infiltration of Monocytes/Macrophages

The number of ED-1-positive cells reflects the extent of monocyte/macrophage infiltration. Immunohistochemistry for ED-1 showed that the number of ED-1-positive cells in the DOCA group and DOCA+IMA group was significantly higher than in the CON group (DOCA group: 6.3±0.4/mm²; DOCA+IMA group: 6.0±0.3/mm²; CON group: 1.1±0.2/mm²) (Figure 2).

Myocardial Fibrosis

At 28 days after intervention, Sirius red staining showed that the myocardial collagen stained light red, while the remaining cardiac tissue appeared yellow. Sirius red staining revealed that the myocardial fibrosis in the DOCA group was the most severe among all groups, characterized by the highest level of interstitial collagen. Additionally, the CVF and PVCA in the DOCA group were significantly increased compared with those in the CON group. The CVF and PVCA in the DOCA+IMA group were also higher than those in the CON group, but markedly lower than those in the DOCA group (Table 2, Figures 3 and 4).

Protein Expressions of PDGF-A, PDGF-C, PDGFRα and p-PDGFRα

Compared with the CON group, the protein expressions of PDGF-A, PDGF-C and PDGFRα in the DOCA group and DOCA+IMA group were remarkably increased. The p-PDGFRα protein expression in the DOCA group was significantly elevated, whereas in the DOCA+IMA group it was markedly decreased relative to the DOCA group. Western blot assay (Figure 5) showed that deoxycorticosterone acetate treatment (DOCA group and DOCA+IMA group) significantly up-regulated the protein expressions of PDGF-A (0.61±0.12-fold and 0.59±0.09-fold, respectively, p<0.01) and PDGF-C (0.58±0.11-fold and 0.55±0.06-fold, respectively, p<0.01) when compared with the CON group (0.17±0.04-fold and 0.15±0.02-fold, respectively). The PDGFRα protein expression (Figure 5) was significantly increased in the DOCA group (0.41±0.08-fold, p<0.01) and DOCA+IMA group (0.43±0.10-fold, p<0.01) compared with the CON group (0.12±0.03-fold). PDGFRα is a tyrosine kinase receptor that becomes active when its intracellular tyrosine residues are phosphorylated; p-PDGFRα is the phosphorylated form of PDGFRα, and an increased p-PDGFRα level indicates increased activation of the PDGF signaling pathway. Results showed that the DOCA group had the highest p-PDGFRα protein expression (0.38±0.06-fold, p<0.01) compared with the CON group (0.08±0.02-fold) and the DOCA+IMA group (0.11±0.02-fold); the p-PDGFRα protein expression in the DOCA+IMA group was higher than in the CON group, but without significant difference (Figure 5).
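The fold-change comparisons just reported amount to normalizing each band's optical density by β-actin and then comparing the three groups with one-way ANOVA, as described in the Methods. The study used SPSS; the sketch below is a hypothetical reimplementation of the same comparison in Python with SciPy, and the densitometry values are invented (n=3 per group, matching the random selection described above).

```python
# Sketch of the beta-actin normalization and one-way ANOVA described above.
# Optical densities are hypothetical placeholders, not the study's data.
from scipy.stats import f_oneway

def normalize(target_od, actin_od):
    """Express each band as target optical density relative to beta-actin."""
    return [t / a for t, a in zip(target_od, actin_od)]

# p-PDGFRalpha densitometry, three randomly selected samples per group
con      = normalize([0.9, 1.0, 0.8], [11.0, 12.0, 10.5])
doca     = normalize([4.2, 4.5, 3.9], [11.5, 11.8, 10.9])
doca_ima = normalize([1.2, 1.3, 1.1], [11.2, 11.6, 10.8])

f_stat, p_value = f_oneway(con, doca, doca_ima)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 taken as significant
```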
Discussion

In this study, our findings showed the following. 1) The DOCA-salt hypertensive rats presented apparent inflammation of the myocardium: more macrophages infiltrated the myocardium, accompanied by over-expression of PDGF-A, PDGF-C and PDGFRα. These findings suggest that the occurrence of myocardial fibrosis is closely associated with increased activation of the PDGF signaling pathway. 2) Although there was no difference in SBP between the DOCA and DOCA+IMA groups, the CVF and PVCA in the DOCA+IMA group were dramatically decreased compared with those in the DOCA group, indicating that imatinib is able to relieve myocardial fibrosis independently of lowering SBP. 3) The p-PDGFRα protein expression was significantly reduced in the DOCA+IMA group compared with the DOCA group. As an inhibitor of tyrosine kinase receptors, imatinib can suppress the activity of PDGFRα; its inhibitory effect on myocardial fibrosis is possibly attributable to the compromised proliferation of fibroblasts following inhibition of the PDGF signaling pathway. 4) Fibronectin and tenascin-C are regarded as indexes of the degree of fibrosis; our results showed that the protein expressions of fibronectin and tenascin-C were proportionally related to the degree of myocardial fibrosis.

Inflammation and Myocardial Fibrosis

Previous studies suggested that mineralocorticoid-induced myocardial fibrosis is accompanied by early-stage acute inflammation in the myocardium, suggesting that inflammation is an initial event in myocardial fibrosis, and that chronic inflammation at the mid- and late stages aggravates the fibrotic process [6-8]; this was confirmed in the present study. On the 14th and 28th days after intervention, HE staining revealed infiltration of inflammatory cells in the hearts of the DOCA group. Increased infiltration of mononuclear macrophages during inflammation is a major characteristic and key step in tissue fibrosis [9,10]. Moreover, multiple cytokines and growth factors secreted by mononuclear cells and macrophages are critical for myocardial fibrosis. Removal of mineralocorticoid receptors (MR) on mononuclear cells and macrophages by knock-out techniques, or drug-dependent suppression of inflammatory effusion and mononuclear macrophage infiltration, can significantly alleviate DOCA-induced myocardial fibrosis [8,11,12]. Besides, mineralocorticoid is able to regulate the differentiation and maturation of mononuclear cells and macrophages by activating MR on these cells. Thus, MR inhibitors, such as antisterone, are able to exert anti-inflammatory and anti-myocardial-fibrosis effects [13,14]. In this study, the DOCA-treated rats presented obvious myocardial inflammation and an increased number of infiltrated macrophages compared with the control group. Therefore, an explicit relationship can reasonably be assumed between mineralocorticoid-induced myocardial fibrosis and the accompanying inflammation in the myocardium. The aggravated infiltration of interstitial macrophages is of significance for the occurrence and development of myocardial fibrosis.
PDGF Signaling Pathways and Myocardial Fibrosis

PDGFs are vital growth factors: they promote the division and proliferation of fibroblasts and vascular smooth muscle cells and play a chemotactic role. Recent findings indicate that PDGF family members participate in the fibrosis of multiple organs and tissues. Obvious interstitial inflammation and increased PDGF secretion by inflammatory cells (mononuclear cells and macrophages) have been observed in fibrosis of the lung, heart, liver and kidney [15-17], accompanied by elevated PDGFR expression on fibroblasts [18-21]. Some investigations suggested that myocardial fibrosis is associated with excessive activation of PDGF signaling pathways and elevated proliferation of fibroblasts [22-24]. The PDGF family includes PDGF-A, PDGF-B, PDGF-C and PDGF-D; PDGF-A and PDGF-C are considered to be closely related to myocardial fibrosis [25-28]. PDGF-AA, the dimeric isoform of PDGF-A, has been demonstrated to be a mitogen for myocardial fibroblasts and is implicated in the pathogenesis of myocardial fibrosis [29,30]. PDGF-C, a newly discovered ligand for PDGFRα [31,32], also plays an important role in the development of myocardial fibrosis. PDGFs function by binding to transmembrane protein-tyrosine kinase receptors (PDGFRα and PDGFRβ), and PDGF signaling has accordingly been divided into PDGFRα and PDGFRβ pathways based on the type of PDGFR [33]. It has been shown that myocardial fibrosis is mainly related to the PDGFRα signaling pathway [5,34]. Besides, the PDGF signaling pathway is mainly regulated by PDGFR, and inhibition of PDGFR can completely block the cellular and biological responses mediated by this pathway [5,35]. Results in the present study confirmed these findings: western blot assay revealed that the protein expressions of PDGF-A, PDGF-C and PDGFRα in the DOCA group and DOCA+IMA group were dramatically increased compared with those in the CON group, and p-PDGFRα protein expression was positively associated with myocardial fibrosis.

Mechanism of the Cardioprotective Effect of Imatinib

Because PDGFs and their receptors are major mediators of the proliferation, survival and migration of fibroblasts, PDGFR inhibitors may become promising drugs for the treatment of myocardial fibrosis. Imatinib is an inhibitor of tyrosine kinase receptors. Previous studies suggested that preventing the transformation of PDGFRα into its activated form blocks the PDGF signaling pathway, eventually preventing the occurrence of myocardial fibrosis [36-38]. In this study, fibrosis of the myocardium and of the tissues surrounding blood vessels in the DOCA+IMA group was markedly alleviated, and the protein expressions of PI and PIII were significantly decreased compared with the DOCA group. Fibronectin and tenascin-C are also major components of fibrotic tissue [39,40]. Tenascin-C has been found to be involved in the adhesion between myocardial cells and fibrotic tissue, and to regulate fibroblast recruitment after myocardial injury. Thus, tenascin-C has been regarded as a novel index of myocardial fibrosis and ventricular remodeling; additionally, it can be used to evaluate left ventricular function. The protein expressions of fibronectin and tenascin-C in the DOCA+IMA group were significantly decreased compared with the DOCA group. These findings confirm that imatinib is able to inhibit myocardial fibrosis in mineralocorticoid-induced hypertensive rats.
Role of PDGF Signaling Pathways in Mineralocorticoid-Induced Myocardial Fibrosis

Our findings demonstrated that the PDGF signaling pathway participates in mineralocorticoid-induced myocardial fibrosis, with PDGFs acting as downstream factors of macrophages. Thus, we speculate that mineralocorticoid induces myocardial inflammation, activates MR on macrophages, and recruits macrophages into the myocardium, leading to the production of a large amount of PDGFs. PDGFs then bind to and activate PDGFRα on myocardial fibroblasts, which promotes the infiltration and proliferation of fibroblasts at inflammatory sites and facilitates the synthesis and secretion of excessive collagens, eventually resulting in myocardial fibrosis [22,36]. Imatinib exerts its anti-myocardial-fibrosis effect by blocking the PDGF signaling pathway. Thus, animals in the DOCA+IMA group presented obvious myocardial inflammation and elevated expressions of PDGF-A, PDGF-C and PDGFRα, yet severe myocardial fibrosis was absent.

Conclusion

This study confirms that the PDGF signaling pathway participates in mineralocorticoid-induced myocardial fibrosis. Imatinib is able to effectively suppress myocardial fibrosis by inhibiting PDGFRα activity and thereby blocking the PDGF signaling pathway.

Figure 1 - Myocardial inflammation and fibrotic scar in the two treatment groups on day 14 and day 28 compared with the CON group (HE staining, ×400)
Thinking beyond low-density lipoprotein cholesterol: strategies to further reduce cardiovascular risk

Several large statin trials and meta-analyses have demonstrated a reduction in low-density lipoprotein cholesterol (LDL-C) and in cardiovascular morbidity and mortality. Some trials have also highlighted the significance of residual cardiovascular risk after treatment of LDL-C to target levels, reflecting the complex nature of residual cardiovascular risk. This residual risk is partially due to low HDL-C and high triglycerides (TG) despite achievement of LDL goals with statin therapy. The NCEP ATP III guidelines reported that low HDL-C is a significant and independent risk factor for coronary heart disease (CHD) and is inversely related to CHD. Epidemiologic studies have shown a similar inverse relationship of HDL-C with CHD. High-density lipoprotein cholesterol (HDL-C) may directly participate in the anti-atherogenic process by promoting efflux of cholesterol from the foam cells of atherogenic lesions. Many studies have demonstrated multiple anti-atherogenic actions of HDL-C and its role in promoting cholesterol efflux from foam cells. The residual risk from increased TG, with or without low HDL-C, can be assessed by calculating non-HDL-C, and a reduction in TG results in decreased CHD.

Background

Statin therapy has been shown to lower cardiovascular morbidity and mortality in virtually every population studied. Several large trials and meta-analyses have consistently demonstrated that statins reduce low-density lipoprotein cholesterol (LDL-C), leading to a decrease in the incidence of cardiovascular events.[1-6] The same statin trials also highlighted the significance of residual cardiovascular risk after treatment of LDL-C to target levels. Obviously, not all of the residual cardiovascular risk is modifiable, because of age and gender issues. This residual cardiovascular risk is complex and is partially due to low high-density lipoprotein cholesterol (HDL-C) and high triglycerides (TG) despite achievement of LDL-C goals with statin therapy. The National Cholesterol Education Program (NCEP) ATP III guidelines reported that low HDL-C is a significant and independent risk factor for coronary heart disease (CHD)6 and is inversely related to CHD. The nature of the relationship between HDL-C and CHD is not clear. One theory is that HDL directly participates in the anti-atherogenic process. Various studies have demonstrated multiple anti-atherogenic actions of HDL-C (Table 1).7 In vitro studies have shown that HDL-C may promote efflux of cholesterol from the foam cells of atherogenic lesions, a process called reverse cholesterol transport. The residual risk from increased TG, with or without low HDL-C, can be assessed by calculating non-HDL-C,6 and modification of TG also results in decreased CHD.8

Residual CHD risk in patients treated with statins

A significant cardiovascular risk remains in statin-treated patients, as shown in many trials. Although statins are very efficacious, they do not eliminate the CHD risk associated with diabetes mellitus (DM).[1,3,9-14] This residual cardiovascular risk in such populations was well illustrated by the Cholesterol Treatment Trialists (CTT).15 In a meta-analysis of 90,056 patients from 14 statin trials, the CTT collaborators found that residual CHD risk was particularly high in patients on statin monotherapy.
This meta-analysis demonstrated the safety and efficacy of statin therapy in reducing the 5-year incidence of major cardiovascular events (MACE). A reduction of 39 mg/dL in LDL-C was associated with a 20% decrease in the composite end point of non-fatal myocardial infarction (MI), coronary revascularization, and coronary death.15 The meta-analysis further confirmed a significant reduction in major vascular events with statin therapy in patients with and without DM. However, as it also revealed, CHD events remain higher in diabetic patients treated with statins than in patients without DM on placebo.15

Atherogenic dyslipidemia

Both DM and the metabolic syndrome are associated with atherogenic dyslipidemia, which is characterized by high TG, elevated small dense LDL-C and low HDL-C. Such dyslipidemia confers a high risk of CHD.16 According to NCEP ATP III, elevated TG is a marker for atherogenic remnant lipoproteins,6 and the most readily available measure of these atherogenic remnant lipoproteins is very low-density lipoprotein (VLDL). A combination of both of these atherogenic lipoproteins (LDL and VLDL) is represented by non-HDL-C. Furthermore, the NCEP ATP III guidelines state that non-HDL-C is calculated by subtracting HDL-C from total cholesterol and should be the secondary target when TG is 200 mg/dL or higher. All three components of atherogenic dyslipidemia (LDL-C, HDL-C, and TG) are interrelated, and each component predicts CHD risk.

Low-density lipoprotein cholesterol

LDL and LDL-C are often used synonymously, but LDL-C is a combination of a lipoprotein (LDL) and a lipid (cholesterol). Cholesterol is packaged into lipoproteins in the form of cholesterol esters to make LDL-C. Lipoprotein particles differ in size and cholesterol ester content; therefore, small dense LDL particles can be more numerous for the same level of blood cholesterol. The number of LDL particles is an important predictor of risk from these lipoproteins when small LDL is present. The LDL-C value measured in a standard lipid profile does not provide information about the size of LDL particles. For example, a patient with a normal LDL-C may carry the majority of their cholesterol in small dense particles, thus having more particles and a higher risk of CHD.17

High-density lipoprotein cholesterol

HDL-C and HDL are not synonymous, and a clear distinction should be made. HDL is the high-density lipoprotein that enables lipids like cholesterol to be transported back to the liver; HDL-C represents the HDL particle with its cholesterol ester inside. Low HDL-C is another independent risk factor for CHD: it is a strong risk factor and is inversely associated with CHD risk.6 In an observational study, a 2% to 3% decrease in the risk of CHD was found for every 1 mg/dL increase in HDL.18 Another trial, Treating to New Targets (TNT), demonstrated a lower risk of CHD in groups with higher HDL.19 Although the mechanism is not clear, it is believed that the anti-atherogenic effect of HDL-C may be a result of reverse cholesterol transport (RCT) and of anti-oxidant and anti-inflammatory properties.20 Furthermore, the size of HDL-C particles may also be important. The action of CETP (cholesterol ester transfer protein), a plasma lipid transfer protein secreted by the liver, plays an important role in determining the size of HDL-C particles: it facilitates the exchange of TG from VLDL particles for cholesterol esters from HDL-C, resulting in smaller HDL-C particles. These smaller HDL-C particles are readily cleared by the kidneys, resulting in lower HDL-C levels.

Triglycerides

The independent prognostic value of TG was demonstrated in the Pravastatin or Atorvastatin Evaluation and Infection Therapy-Thrombolysis in Myocardial Infarction (PROVE IT-TIMI 22) trial.8 This trial evaluated the role of intensive statin therapy in patients admitted to hospital with acute coronary syndrome (ACS). After 2 years of follow-up, significantly fewer events occurred in patients with LDL-C below 70 mg/dL. The relationship of LDL and TG to composite CHD endpoints was also assessed: after multivariate adjustment, there were significantly fewer events in the treatment group with TG below 150 mg/dL than in the group with TG of 150 mg/dL or higher.8 Therefore, TG below 150 mg/dL was associated with lower CHD risk independent of the LDL-C level, and achieving both optimal LDL and TG may be an important strategy in ACS patients.

Treating beyond low-density lipoprotein cholesterol

According to NCEP and the American Diabetes Association (ADA), LDL-C is the primary therapeutic target in lipid management. As described above, several other atherogenic particles contribute to CHD risk after LDL goals are met; therefore, non-HDL-C is a secondary therapeutic target.6,21 The American College of Cardiology (ACC) and ADA statement defines the highest-risk groups as patients with known cardiovascular disease, or patients without cardiovascular disease but with DM plus one or more risk factors such as smoking, hypertension, or a family history of premature coronary artery disease (CAD). The 2008 ACC/ADA consensus statement sets specific lipid/lipoprotein goals based on cardiovascular risk, lipoprotein abnormalities and cardiometabolic risk. The goals for these highest-risk patients are LDL-C below 70 mg/dL, non-HDL-C below 100 mg/dL, and apolipoprotein B (Apo B) below 80 mg/dL.22 Although statins are the initial drugs of choice, combination therapy may be needed as a strategy to meet lipid goals beyond the LDL-C target alone.
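Since non-HDL-C is defined arithmetically (total cholesterol minus HDL-C) and the 2008 ACC/ADA goals quoted above are simple thresholds, both are easy to encode. The sketch below assumes a highest-risk patient and the goal set cited in the text; the patient values are hypothetical, invented purely for illustration.

```python
# Sketch of the non-HDL-C calculation and the 2008 ACC/ADA highest-risk
# goals quoted above. Patient values are hypothetical.

def non_hdl_c(total_cholesterol, hdl_c):
    """Non-HDL-C = total cholesterol - HDL-C (all in mg/dL)."""
    return total_cholesterol - hdl_c

# hypothetical highest-risk patient on statin therapy
patient = {"total_c": 160, "hdl_c": 38, "ldl_c": 85, "apo_b": 92, "tg": 210}
patient["non_hdl_c"] = non_hdl_c(patient["total_c"], patient["hdl_c"])

goals = {"ldl_c": 70, "non_hdl_c": 100, "apo_b": 80}  # mg/dL, highest risk
for marker, target in goals.items():
    status = "at goal" if patient[marker] < target else "above goal"
    print(f"{marker}: {patient[marker]} mg/dL ({status}, target < {target})")

# NCEP ATP III: non-HDL-C becomes a secondary target when TG >= 200 mg/dL
if patient["tg"] >= 200:
    print("TG >= 200 mg/dL: non-HDL-C is the secondary therapeutic target")
```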
Modifying residual cardiovascular risk beyond LDL with statins

As discussed before, it is crucial to modify all the atherogenic risk factors for better outcomes in patients with atherosclerotic vascular disease. To modify risk beyond statin therapy, the following agents are available:
1. Omega-3 fatty acids
2. Niacin
3. Fibrates
4. Combination of statins with niacin
5. Combination of statins with fibrates.

Omega-3 fatty acids

The 2007 National Lipid Association (NLA) safety task force concluded that omega-3 therapy is a safe therapeutic option for lowering TG.23 Observational studies have shown several cardiovascular benefits, such as a decrease in cardiac dysrhythmias, sudden cardiac death, and blood pressure.24 The mechanism of action of omega-3 fatty acids in the reduction of TG is unclear; there is evidence that they increase TG clearance from circulating VLDL particles by increasing lipoprotein lipase (LPL) activity. Some studies have shown an increase in HDL-C with high doses of omega-3 fatty acids. In the JELIS study (Japan EPA Lipid Intervention Study), a combination of omega-3 fatty acids and statin was compared with statin monotherapy: there was a 19% reduction in major coronary events with the combination therapy compared with statin alone.25
Another trial, COMBOS (COMBination of prescription Omega-3 plus Simvastatin), showed that a combination of omega-3 fatty acids and simvastatin reduced non-HDL-C and TG and raised HDL compared with statin monotherapy.26 The AFFORD trial (Atorvastatin Factorial with Omega-3 fatty acids Risk Reduction in Diabetes)27 did not show any residual cardiovascular risk reduction in the diabetic population. It is important to note that dietary supplements of omega-3 fatty acids are not subject to FDA regulation, and thus higher doses of fish oil supplements may be required to be equivalent to the prescription form of omega-3 fatty acids (Lovaza, previously called Omacor).23

Niacin

Niacin has long been recognized for its lipid-modifying effect and has a well-established safety profile based on clinical evidence over 20 years. It is the most effective agent for raising HDL-C and has been used for the past five decades.28 No major trials showed any harmful interaction of niacin with statins. The Coronary Drug Project (CDP) was a randomized outcome trial conducted between 1966 and 1975, with a mean follow-up of 6.2 years, in men with a history of previous MI.29 The primary end point of mortality did not decrease in the niacin group. However, a significant reduction in the composite outcome of CHD death, non-fatal MI, and cerebrovascular events occurred in this group, along with a significant (47%) reduction in cardiovascular surgery.29,30 It should be added that 9 years after the termination of the trial, there was an 11% (P = 0.0004) mortality reduction in the niacin group compared with the placebo group.31 Niacin decreases hepatic synthesis of TG, leading to reduced synthesis of VLDL particles, increased degradation of Apo B and decreased catabolism of Apo A.32,33 Recent studies also indicate that it increases Apo A-I, thereby increasing HDL-C.34 It may also enhance ABCA1 (ATP-binding cassette) transporter transcription, leading to HDL-mediated cholesterol efflux from peripheral cells.35

Compared with statins alone, combination therapy with niacin and statins has shown greater efficacy with uncompromised safety in patients with dyslipidemia. Safety and efficacy have been evaluated in the SEACOAST (Safety and Efficacy of a Combination of Extended-Release Niacin and Simvastatin) and OCEAN (Open-label Evaluation of the safety and efficacy of a Combination of Niacin ER and Simvastatin) trials.36,37 SEACOAST compared the safety and efficacy of simvastatin monotherapy with a fixed-dose combination of niacin ER and simvastatin in patients with mixed dyslipidemia. In the SEACOAST-I trial, the fixed-dose combination of niacin ER and simvastatin (1000/20 and 2000/20) showed significant dose-related improvements in non-HDL-C, HDL, TG, and lipoprotein(a) compared with simvastatin monotherapy. The most notable results were the 24% increase in HDL-C, 38% reduction in TG, and 25% reduction in lipoprotein(a) in the group treated with the niacin/simvastatin combination.36 This was further demonstrated in the SEACOAST-II trial, which showed a 17.1% reduction in the primary end point of non-HDL-C with niacin/simvastatin 2000/40, compared with a 10.1% reduction with simvastatin 80 mg monotherapy. The OCEAN trial was a randomized, open-label, multicenter study which evaluated the safety and efficacy of a fixed-dose combination of niacin and simvastatin in patients with elevated non-HDL.37
The primary end point of OCEAN was long-term safety, and the secondary endpoints were the serum levels of non-HDL-C, LDL-C, and TG. In the subgroup of patients who failed to reach their goals with simvastatin as baseline therapy, 82% achieved non-HDL goals, 85% reached LDL goals, 67% reached HDL goals, and 64% reached the TG target (65% reached all goals combined).37

Another clinical outcome trial demonstrated the efficacy of combination treatment with niacin and simvastatin: the HDL-Atherosclerosis Treatment Study (HATS), a double-blind, placebo-controlled trial in which 160 patients with low HDL and LDL of 145 mg/dL or less were enrolled. Coronary angiography was done at baseline and at 2-year follow-up, and the endpoints were the angiographic change in CAD and the occurrence of a first cardiovascular event such as MI, death, coronary revascularization, or stroke. There was slight progression of angiographic CAD in the placebo group (3.9% change) and regression of angiographic CAD (0.4% change) in the group treated with niacin and simvastatin. The authors concluded that treatment with niacin/simvastatin in CAD patients with low HDL resulted in slight regression of atherosclerosis, which translated into a 90% reduction in clinical events over a 3-year period.38

Furthermore, the Arterial Biology for the Investigation of the Treatment Effects of Reducing Cholesterol (ARBITER)-2 trial evaluated the effects of niacin ER added to background statin therapy.39 This was a double-blind, placebo-controlled study of once-daily niacin ER 1000 mg added to background statin therapy in 167 patients with known CAD and low HDL-C levels. The primary end point was the change in carotid intima-media thickness (CIMT) after 12 months. The change in CIMT was 0.044 mm in the placebo group compared with 0.023 mm in the niacin group. The subgroup analysis of this study also showed that statin-treated patients had similar CIMT progression regardless of insulin resistance status. One hundred thirty patients from ARBITER-2 who completed the blinded 12-month study end point were followed for an additional 12 months on open label as a prespecified extension study, called ARBITER-3.40 ARBITER-3 included patients from ARBITER-2 who were on the niacin/statin combination and continued the same regimen for a total 2-year period, as well as patients who were initially on statin alone and were switched to niacin/statin and followed for an additional 12 months. There was significant regression of atherosclerosis as measured by CIMT at both 12 and 24 months compared with statin therapy alone.40

A discussion of niacin would be incomplete without mentioning its common side effect, flushing. Flushing is initially seen in 80% of patients and diminishes over time. Recent data suggest that flushing may be a marker of a strong lipid response to niacin therapy.41 This was assessed in a subgroup analysis of 77 patients in ARBITER-2: interestingly, patients who reported flushing had a significantly greater increase in HDL-C than patients without flushing. If these results are confirmed in larger trials, patients may be persuaded that flushing is less of a nuisance and may therefore adhere to niacin treatment.

Fibrates and fenofibrates

Several studies have shown the cardiovascular benefits of fibrate therapy.
The Helsinki Heart Study (HHS)42 showed a 71% reduction in CHD in patients taking gemfibrozil compared with patients receiving placebo, and similar results were seen in VA-HIT (Veterans Affairs High-Density Lipoprotein Intervention Trial),37 which showed a 41% reduction in CHD and stroke with gemfibrozil compared with placebo in a subgroup of patients with DM. The BIP (Bezafibrate Infarction Prevention) trial43 evaluated the long-term cardiovascular benefit of bezafibrate therapy and demonstrated significant long-term cardiovascular protection, which was attenuated by unbalanced use of non-study lipid-lowering drugs.

Fibrates work by activating peroxisome proliferator-activated receptor alpha (PPARα), which modulates several aspects of lipid metabolism by increasing the expression of Apo A-I, Apo A-II, and ABCA1, and lowering the expression of Apo C-III. They also increase the number of HDL-C particles. Fibrates increase LPL synthesis, which enhances VLDL clearance and lowers TG, and they increase β-oxidation of fatty acids, leading to a decrease in TG and VLDL production.44,45

Fenofibrate therapy was tested in the FIELD (Fenofibrate Intervention and Event Lowering in Diabetes) trial.46 This study was planned to extend the findings of the HHS and VA-HIT studies by investigating the long-term effect of fenofibrate in the largest trial of patients (9795 in total) with type 2 DM, with a 5-year follow-up. Fenofibrate showed a non-significant 11% reduction in the primary end point of CHD, but the study did show a significant 24% reduction in non-fatal MI and a 21% reduction in coronary revascularization. Interesting aspects of this study were certain prespecified tertiary endpoints, such as effects on microvascular complications of DM including microalbuminuria, diabetic retinopathy, and amputation due to non-traumatic causes. In the overall analysis of prespecified endpoints, there was regression of microalbuminuria in the fenofibrate group.46 There was also a beneficial effect in the subgroup with diabetic retinopathy: the fenofibrate group showed a 31% reduction in the need for first laser treatment compared with placebo, and the benefit progressively increased thereafter.47 Fenofibrate also reduced the number of these diabetic patients needing non-traumatic amputation. This demonstrates the beneficial effects of fenofibrate in preventing macro- and microvascular complications of DM.

Combination of fibrates and statins

As fibrates modify all aspects of dyslipidemia, their use in combination with statins is very attractive.48 Both agents have the potential to cause myopathy, and the risk of adverse events depends on the pharmacokinetic interaction between their effects on statin metabolism and clearance.[49-53] Several studies have shown that gemfibrozil interferes with the metabolism of statins by inhibiting glucuronidation. This can raise statin levels, predisposing patients to myopathy.48,49 In contrast, fenofibrate does not interfere with statin metabolism and may therefore be safer to use in combination with statin therapy.49 Because of this pharmacokinetic interaction, the National Lipid Association (NLA) safety task force has recommended avoiding the use of gemfibrozil in combination with statins; fenofibrate may be the preferred fibrate to combine with statins.54 The NCEP ATP III update in 2004 also stated that, unlike gemfibrozil, fenofibrate does not increase the rate of myositis when used in conjunction with moderate doses of statins.55
There are several ongoing trials addressing this issue of combination therapy of omega-3 fatty acids, niacin, fibrates, and statins.56,57

Other drugs on the horizon

CETP inhibitors

The action of CETP plays an important role in determining the size and blood levels of HDL particles, and low HDL-C constitutes a major risk factor for CHD. In view of the lack of effective therapeutic interventions for low HDL-C, CETP inhibition offered a very attractive strategy to raise HDL-C. One CETP inhibitor, torcetrapib, markedly increased HDL-C levels as monotherapy as well as in combination with a statin.58 Unfortunately, its development was halted in a phase III trial in 2006 due to an increase in all-cause mortality in the treatment group, both with monotherapy and in combination with atorvastatin.

Apo A-I Milano

Apo A-I Milano is a naturally occurring mutated variant of Apo A-I found in human HDL. The Apo A-I Milano mutation was discovered by accident, present in 3.5% of the population of the small Italian village of Limone sul Garda. Carriers were found to have significantly reduced cardiovascular disease despite low HDL and high TG. A clinical trial with recombinant Apo A-I Milano published in JAMA59 showed significant regression of coronary atherosclerosis as measured by intravascular ultrasound. Although promising, these results require confirmation in larger trials.

Conclusion

Large trials have consistently shown a significant benefit of LDL-C intervention, but a significant residual risk of cardiovascular events remains, especially in high-risk patients with DM. This residual risk is predominantly due to low HDL-C and increased TG. NCEP has suggested the use of niacin or a fibrate as an additional agent for mixed dyslipidemia. Ongoing trials will be needed to demonstrate the incremental cardiovascular benefits and safety of combination regimens.

Disclosures

The authors report no conflicts of interest.
TRADE SECTOR CHALLENGES, OPPORTUNITIES AND STRATEGIES IN EAST JAVA PROVINCE DURING THE COVID-19 PANDEMIC

Introduction

The year 2020 was a tough challenge for the world. The outbreak of COVID-19 affected global economic conditions, including the trade sector in East Java Province, as a result of epidemic-control policies restricting the movement of people and goods implemented by the Indonesian state. In line with the global economy, the dynamics of the economy of East Java Province throughout 2020 and into mid-2021 were also affected by the COVID-19 pandemic. Industrial activity throughout 2020 was quite restrained compared with the previous year. Strategic steps are therefore needed to restore economic growth during the COVID-19 pandemic.

Overall, East Java's GRDP in 2020 contracted, compared with the previous year's growth of 5.52% (yoy). The slowdown occurred in almost all components of demand due to the COVID-19 pandemic and resulted in a decline in private consumption, government consumption, investment, and net exports between regions. The COVID-19 pandemic also put pressure on the building and construction investment sector owing to the reallocation and refocusing of government budgets as well as the rescheduling of government and private projects under the policy of restricting economic activity. However, various forms of government assistance targeted at the industrial and household sectors (pre-prosperous and affected by COVID-19) are thought to have prevented a deeper slowdown in consumption. From the supply side, the COVID-19 pandemic slowed the performance of almost all East Java business sectors, including the five main ones.

Strategy concerns implementation, so that the main goals and objectives of the organization will be achieved (Syafi'i Antonio, 2001). A strategy is an overall approach that deals with the idea, planning, and execution of an activity over a certain period of time. A good strategy involves coordination of the work team, has a theme, identifies supporting factors in accordance with the principles of rational implementation of ideas, is efficient in funding, and has tactics to achieve goals effectively (Fandji Tjiptono, 2000). Strategy shows the general direction to be taken by the organization to achieve its goals; it is a grand plan and an important one. Every well-managed organization has a strategy, even if it is not explicitly stated. Regarding the definition of strategy, several definitions can be mentioned: 1) According to Alfred Chandler, strategy is the setting of goals and the direction of action, as well as the allocation of resources needed to achieve goals; 2) According to Kenneth Andrews, strategy is a pattern of goals, aims or objectives and of the policies and plans for achieving them, stated in such a way as to define the business to be followed and the type of organization it will be; 3) According to Buzzell and Gale, strategy comprises the key policies and decisions used by management that have a large impact on financial performance.
These policies and decisions usually involve important resources and cannot easily be reversed (Agustinus Sri Wahyudi, 1996); 4) According to Kenichi Ohmae, business strategy is about competitive advantage: the sole purpose of strategic planning is to enable a company to gain, as efficiently as possible, a sustainable position over its competitors. Corporate strategy is thus an effort to change the company's strength relative to that of its competitors in the most efficient way; 5) According to Griffin, strategy is a comprehensive plan for achieving organizational goals (Pandji Anoraga, 2009).

Strategy is the most important factor in achieving company goals; the success of a business depends on the ability of its leaders to formulate the strategies used. A company's strategy depends greatly on its goals, circumstances and environment.

B. Types of Strategy

a) Market penetration strategy
Market penetration is the company's effort to increase the number of customers, in both quantity and quality, in the current market through active promotion and distribution. This strategy is suitable for markets that are growing slowly.

b) Product development strategy
A product development strategy is an effort to increase the number of consumers by developing or introducing new company products. Innovation and creativity in product creation are among the main keys to this strategy. The company continually tries to update or introduce new products to consumers, exploring market needs and striving to meet them.

c) Market development strategy
A market development strategy is a way to bring products to new markets by opening new branches or subsidiaries in locations considered strategic, or by cooperating with other parties in order to attract new consumers. Management uses this strategy when the current market is saturated, when the potential increase in market share is very large, or when competitors are strong.

d) Integration strategy
An integration strategy is a strategy of last resort, usually taken by companies experiencing severe liquidity problems. What is usually done is a horizontal integration strategy, namely the merger of companies.

e) Diversification strategy
Diversification here covers both concentric and conglomerate diversification. Concentric diversification means the company focuses on a particular market segment by offering various variants of its products, whereas conglomerate diversification focuses on providing various product variants to conglomerate (corporate) groups.

Research Methodology

This study uses a qualitative descriptive approach, and the type of research used is library research, namely collecting data or scientific works related to the object of research, or collecting data of a bibliographic nature, and studying them to solve a problem critically and in depth using library materials relevant to the research. According to M. Nazir (2002:27), a literature study is a data collection technique using a study of books, literature, notes and reports related to the problem to be solved. According to M. Nazir, the literature study is an important step in research: after a researcher determines the research topic, the next step is to conduct a study of the theory related to that topic.
Researchers must collect as much relevant information as possible from the literature, drawing on sources such as books, journals, magazines, research reports, and other materials that fit the research theme. Once the relevant literature has been obtained, it is compiled systematically for use in the research. The literature study therefore includes general processes such as systematically identifying theories, finding literature, and analyzing documents that contain important information related to the research topic.

Results and Discussion
Several problems were experienced by the various industrial sectors spread throughout East Java Province during the COVID-19 pandemic, and efforts were made to resolve them so that the province's economic growth remains stable. They include the following.

A. Large industries. The main problems faced include:
1. Raw and auxiliary materials for production, especially those obtained through imports
2. Market access for export products
3. Interruption of the production process
Efforts to solve these problems include:
1. Acceleration of the import-substitution industry
2. Restoration of industrial supply chains
3. Development of domestic and foreign promotion
In carrying out these planned efforts, various activities will be undertaken: 1) data collection and analysis on the substitution of imported raw materials; 2) analysis of industrial supply-chain recovery; 3) business matching with the ITPC of export destination countries; 4) dissemination of export and import policies; 5) implementation of SKA Online.

B. Medium industries. The main problems faced include:
1. Difficulty obtaining raw materials
2. Constrained access to capital and financing
3. Competence of human resources (HR)
4. Product marketing access
5. Product standardization
Efforts to solve these problems include:
1. Easier access to capital
2. Machine restructuring
3. Improved product standardization
4. Improved promotion
The planned activities are: 1) development of industrial centers; 2) promotion and trade missions; 3) increasing human resources and production capacity; 4) assisting industry in implementing the "new normal"; 5) standardization of industrial products; 6) IOMKI supervision and assistance.

C. Small industries. The main problems faced include:
1. Difficulty obtaining raw materials
2. Constrained access to capital and financing
3. Competence of human resources (HR)
4. Product marketing access
5. Product standardization
Efforts to solve these problems include:
1. Facilitating capital, equipment, and raw materials
2. Improving product standardization, promotion, human resources, and managerial business management
The planned activities are: 1) growth of new entrepreneurs (WUB); 2) promotion of domestic products; 3) increasing human resources and production capacity; 4) assisting industry in implementing the "new normal"; 5) standardization of industrial products; 6) IOMKI supervision and assistance.

In addition to the industrial sector, the world of trade faced almost the same problems during the COVID-19 pandemic. The following are the current problems with trading facilities and the solutions provided so that people can enjoy easy access to trade:
1. Malls. The problem faced is a decrease in the number of visitors, which reduces the income of shopping-place tenants. The solution is to facilitate and educate shopping-center managers on hygiene and health procedures, and to monitor and supervise the implementation of the COVID-19 protocol strictly in all malls so that infection clusters do not arise.
2. Modern retail. The problem faced is reduced stocks of food and basic necessities, which risks raising the inflation rate. The efforts that can be made are to guarantee the availability of food stocks and basic necessities and to monitor and supervise the implementation of the COVID-19 protocol.
3. People's markets. The main concern in people's markets is the lack of guaranteed cleanliness in the effort to contain the outbreak. Efforts to overcome this problem include facilitating and educating market managers and visitors about cleanliness and health, and implementing odd-even market days and online markets.
4. E-commerce. E-commerce is a new means of trading for people in today's digital era. However, it still poses problems for consumers who are unfamiliar with how the buying-and-selling transaction system works, and consumer protection on trading platforms remains weak: the convenience of the channel is still abused for fraud, making people hesitant to transact through such facilities. The solution the government can provide is consumer-protection education for trading platforms.

The East Java Provincial Government has prepared various programs so that the regional trade system continues to run smoothly. Among these programs, one interesting program that can keep the wheels of the economy turning is to increase the community's sense of pride in products made in their own country. Arguably this is the best way to raise regional income through strong purchasing power for local products. For this program to be implemented properly, the government of East Java Province must take part in improving the quality of the industrial sector and strengthening the local market.

The East Java Provincial Government already has an economic recovery strategy for the industrial and trade sectors, as well as a carefully prepared short-term plan. The strategies include:

A. Industrial Development Strategy
The strategy on the production side comprises:
1. Improving the quality and quantity of industrial resources
2. Increasing the quality and quantity of industrial facilities and infrastructure
3. Strengthening the pattern and structure of industrial zoning to encourage the even distribution of industry (development of industrial growth-center areas, industrial allotment areas, industrial estates, and IKM centers)
4. Development of integrated industrial information and communication technology (ICT) linking IKM (small and medium industry) with IB (large industry)
5. Increasing the synergy between the government and the private sector in realizing green industry, in both new and existing industries

The strategies on the marketing side include: a) network integration, both to obtain raw materials and to expand marketing, through the establishment of East Java Trade Representative Offices (KPD) at the national, ASEAN, and international levels; b) increasing international cooperation in the field of industrial development; and c) improved marketing using information technology. In addition, there are strategies directed at financing: 1. increasing the role of, and synergy among, relevant stakeholders in providing competitive capital; 2. provision of affirmative measures in the form of policy formulation, strengthened institutional capacity, and facilities for small and medium industries; 3. strengthening commitment to providing legal capacity and investment guarantees; 4. accelerating the realization of financing integrated with digital technology.

B. SMI (Small and Medium Industry) Development Strategy
To increase SMI production, the following strategies are needed: 1. development and arrangement of leading IKM centers; 2. improving the quality of IKM human resources in step with the digital era; 3. encouraging the growth of new SMI entrepreneurs; 4. improved efficiency and standardized product quality; 5. strengthening IKM institutions in the face of global competition; 6. scaling up IKM. On the marketing side, the strategy consists of developing partnerships with medium and large industries and expanding information-technology marketing; on the financing side, it consists of guarantees of competitive business financing.

East Java's economy as a whole in 2021 is estimated to grow faster, in line with the improvement in the global and domestic economy. On the demand side, the improvement in East Java's economic performance is expected to come from accelerating household consumption, investment, and inter-regional net exports, while foreign exports are also expected to keep growing positively. Vaccination campaigns taking place globally are believed to encourage the wider reopening of productive economic sectors and increased community activity, with implications for rising external and domestic demand. On the supply side, East Java's economic recovery is expected to be supported by improved performance in the main business sectors, namely the manufacturing and trade industries, in response to rising domestic and external demand. Meanwhile, East Java's Consumer Price Index (CPI) inflation in 2021 is predicted to be higher than in 2020, but still within the national inflation target of 3.0% ± 1% (yoy). The increase in inflationary pressure is expected to stem from the recovery in economic performance, which will lift inflation in the clothing, education, and transportation categories. The 2021 short-term plan for the industrial and trade sectors prepared by the East Java Provincial Government was presented in tabular form (table not reproduced here).
Guillain-Barré syndrome associated with SARS-CoV-2 infection

...Virtual Campus of the Universidad Complutense de Madrid. These measures are similar to those implemented in other centres, even under different healthcare systems.1,2 In early April, we began to assess how neurological care can return to a situation of normalcy. This transition will probably have to be progressive and include teleconsultations (we are implementing a secure videoconferencing system to improve interaction with patients) as well as conventional in-person consultations. It will be necessary to reintroduce the suspended treatments, avoiding overcrowding of patients and ensuring proper protection, and to return to normal hospitalisation procedures. We also face the challenge of identifying, understanding, and treating the increasingly frequent neurological manifestations of COVID-19, and of minimising the impact of the pandemic on patients with neurological diseases.

Funding
This study has received no specific funding from any public, private, or non-profit organisation.

Dear Editor:
COVID-19 is an infectious disease caused by a novel coronavirus, SARS-CoV-2. The virus was first detected in Wuhan, China, in December 2019, and subsequently spread across the world.1 There is extensive evidence that SARS-CoV-2 infection causes respiratory alterations; however, the associated neurological manifestations are less well known.2 We present a case of Guillain-Barré syndrome (GBS) associated with COVID-19.

Our patient is a 43-year-old man who consulted due to symmetrical weakness involving all 4 limbs; the weakness progressively increased in severity, leading to inability to walk. He also presented sensory alterations in the distal regions of all 4 limbs. Ten days previously he had experienced a self-limited episode of diarrhoea, followed by symptoms of upper respiratory tract infection. The neurological examination revealed weakness in all 4 limbs, with 3/5 muscle strength proximally and 4/5 distally, and global areflexia. Chest radiography revealed alterations suggestive of incipient pneumonia secondary to COVID-19 (Fig. 1). The PCR test for SARS-CoV-2 returned positive results. An EMG/nerve conduction study revealed increased distal motor latency and decreased sensory nerve conduction velocity in the nerves evaluated, and increased minimal F-wave latency in the right L5 and S1 spinal nerve roots; these findings are suggestive of demyelinating polyradiculoneuropathy and compatible with a diagnosis of GBS.

Figure 1. Chest radiography (posteroanterior view) revealing ground-glass opacity in the right middle lobe; in the current epidemiological situation, these findings suggest incipient pneumonia secondary to SARS-CoV-2 infection.

During hospitalisation, the patient was assessed by the pulmonology and neurology departments. He received intravenous immunoglobulins for 5 days plus protocolised treatment for COVID-19: hydroxychloroquine sulfate, antiretroviral drugs (lopinavir and ritonavir), antibiotics (amoxicillin), corticosteroids, and low-flow oxygen therapy. Motor function worsened within 2 days of admission, with the patient developing bilateral facial palsy and dysphagia. Subsequently, neurological and respiratory symptoms progressed favourably. Although SARS-CoV-2 infection is likely to have caused GBS in our patient, we should not rule out the possibility that the co-presence of GBS and SARS-CoV-2 infection is coincidental.
The association between COVID-19 and GBS has not been established, although recent evidence suggests that the virus may be involved in the aetiopathogenesis of GBS.3 Future studies should address the neurological manifestations of SARS-CoV-2 infection.

Dear Editor:
Since late 2019, and especially during 2020, multiple cases of COVID-19 have been detected in the Chinese city of Wuhan1,2; the disease has become a pandemic particularly affecting China, Southern Europe, and the USA, with very few places in the world escaping its impact. Spain is one of the countries hardest hit by the COVID-19 pandemic, although geographical differences can be observed. As of 16 April 2020, there were 182 816 confirmed cases, 19 130 deaths, and 74 797 recovered cases, and a slight downward trend has been observed in mortality, use of emergency departments, and intensive care unit admissions. The true scale of the pandemic is yet to be determined due to a lack of data on the virus' global impact on the general population. This situation has led to the declaration of the state of alarm in Spain,3 with the Ministry of Health being granted a predominant role and healthcare responsibilities remaining within the scope of regional governments,4 which have had to adapt healthcare services to the pandemic and probably reduce the level of care provided for the more specific pathologies of each specialty.

Current data suggest that SARS-CoV-2 is highly contagious. Among the clinical manifestations of COVID-19 (there appear to be a large number of asymptomatic or oligosymptomatic patients),5 the main symptoms include fever, non-productive cough, dyspnoea, pulmonary infiltrates, and lymphocytopaenia. The disease particularly affects elderly and immunosuppressed individuals. The most frequent neurological manifestations include anosmia and dysgeusia, as well as myalgia, fatigue, and headache; only limited data are available on central and peripheral nervous system involvement. Anecdotal reports of these types of symptoms are beginning to appear, and databases are being compiled, as we still lack data from the researchers with the most experience, such as Chinese professionals. According to Dr Robert Stevens, "we know almost nothing about the potential interactions between COVID-19 and the nervous system." Despite the increasing number of anecdotal cases and observational data on neurological symptoms, most COVID-19 patients do not present these symptoms, and while neurological alterations are infrequent, they remain a possibility.
Extracellular Vesicles: A Potential Biomarker for Quick Identification of Infectious Osteomyelitis

Effective management of infectious osteomyelitis relies on timely identification of the microorganism and appropriate antibiotic therapy. Extracellular vesicles (EVs) carry protein and genetic information and accumulate rapidly in the circulation upon infection. Rat osteomyelitis models infected with Staphylococcus aureus, Staphylococcus epidermidis, Pseudomonas aeruginosa, and Escherichia coli were established for the present study. Serum EVs were isolated 3 days after infection. The size and number of serum EVs from infected rats were significantly higher than those from controls. In addition, bacterial aggregation assays showed that S. aureus and E. coli formed large aggregates in response to stimulation with serum EVs from S. aureus-infected and E. coli-infected rats, respectively. Treatment with EVs-S. epidermidis led to large aggregates of S. epidermidis and E. coli, whereas stimulation with EVs-P. aeruginosa led to large aggregates of S. aureus and P. aeruginosa. To evaluate the changes in EVs in osteomyelitis patients, 28 patients (including 5 infected with S. aureus) and 21 controls were enrolled. Results showed that the size and number of serum EVs from S. aureus osteomyelitis patients were higher than those from controls. Further analysis using receiver operating characteristic curves revealed that only the particle size might be a potential diagnostic marker for osteomyelitis. Strikingly, serum EVs from S. aureus osteomyelitis patients induced significantly stronger aggregation of S. aureus and a cross-reaction with P. aeruginosa. Together, these findings indicate that the size and number of serum EVs may help in the diagnosis of potential infection and that the EVs-bacteria aggregation assay may serve as a quick test to identify infectious microorganisms in osteomyelitis patients.

INTRODUCTION
Posttraumatic or postoperative osteomyelitis, a common and serious complication in orthopedic trauma, presents a variety of clinical challenges. Clinical data have demonstrated that Staphylococcus aureus is the leading pathogen in osteomyelitis, followed by Staphylococcus epidermidis, Pseudomonas aeruginosa, and Escherichia coli (Jiang et al., 2015; Ma et al., 2018; Fily et al., 2019). Prompt diagnosis, timely identification of the causative pathogenic bacteria, and appropriate antibiotics are critical for effective treatment of osteomyelitis. In clinical tests for osteomyelitis diagnosis, the conventional laboratory parameters are C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), and white blood cells (WBCs) (Stucken et al., 2013; Lin et al., 2016). However, as these biomarkers are non-specific inflammatory markers, it is difficult to distinguish infectious diseases from non-infectious inflammatory conditions. Imaging examinations such as magnetic resonance imaging and single-photon emission computed tomography/computed tomography can help distinguish local bone inflammation from systemic inflammation and can delineate the infected region clearly (Arican et al., 2019; Mejzlik et al., 2019), but their high costs and intricate operation limit their application. Most importantly, none of the tests above can determine the specific pathogenic bacteria so as to guide the choice of sensitive antibiotics. Additionally, bacterial diagnosis is limited by the high false-negative rate of conventional cultures (Jiang et al., 2015). Therefore, it remains a great challenge to identify microorganisms quickly and accurately.
Extracellular vesicles (EVs) are a heterogeneous group of nanoscale membrane vesicles (MVs) released by cells and can be further classified as exosomes (<150 nm in diameter), microvesicles (100-1,000 nm), and apoptotic bodies (larger than 1,000 nm) based on their biogenesis, size, and biophysical properties (Gurunathan et al., 2019). As EVs carrying proteins, lipids, and various nucleic acids can be released in response to various stimuli, their role as a key regulator in physiopathological cellular processes in cancer, inflammation, and infection has been intensively explored (Hessvik and LIorente, 2018;Panfoli et al., 2018). Studies indicate that EVs may serve as a valuable tool in diagnostics, prognostics, and therapeutics (Hessvik and LIorente, 2018;Panfoli et al., 2018). It is found that the number of polymorphonuclear cellsderived EVs is significantly increased in the serum from sepsis patients, probably associated with antibacterial effect of host immune system (Timár et al., 2013;Herrmann et al., 2015) and that EVs in the serum from S. aureus-infected patients induce distinct aggregation with in vitro addition of exogenous S. aureus (Timár et al., 2013;Herrmann et al., 2015). We thus made a hypothesis that serum EVs may have a potential role in differential diagnosis of osteomyelitis. Here, we established rat models of tibial osteomyelitis, which resulted from infection with S. aureus, S. epidermidis, P. aeruginosa, and E. coli, respectively. We found that the particle size and number of serum EVs from infected rats were all significantly increased compared to controls. We also found that the serum EVs from rats infected by specific bacteria could induce strong aggregation of the corresponding bacteria. Furthermore, we demonstrated the differential diagnostic role of serum EVs from S. aureus osteomyelitis patients. Ethics Statement All protocols were conducted in accordance with guidelines for the care and use of human subjects and approved by the Ethics Committee of Nanfang Hospital, Southern Medical University. Written informed consent was obtained from participants prior to inclusion in the study. All the patients were recruited from the Department of Orthopedics at Nanfang Hospital, Southern Medical University, from June to October, 2019. Animal studies were conducted in accordance with the Institutional Animal Care and Use Committee and approved by the Ethics Committee of Nanfang Hospital. Bacterial Strains and Preparation of Bacteria Staphylococcus aureus, S. epidermidis, P. aeruginosa, and E. coli isolated from the osteomyelitis patients from the Department of Orthopedics, Nanfang Hospital, were identified using PHOENIX 100 (Becton Dickinson Microbiology System, Frankin Lakes, NJ, USA). To prepare bacteria for osteomyelitis animal models or bacterial aggregation assay, an isolated colony from a fresh tryptic soy agar plate was inoculated in 10 mL fresh tryptic soy broth overnight at 37 • C with shaking at 180 revolutions/min (rpm). Bacteria were harvested by centrifugation, washed twice with phosphate-buffered saline (PBS), and resuspended in PBS. The concentration of each strain was adjusted to an optical density (OD) of 0.5 at 600 nm, approximately equal to 10 8 colonyforming units/mL (CFUs/mL), for infection in rat osteomyelitis models and an OD of 1.5 for testing the aggregation action. 
Rat Models of Osteomyelitis
Pathogen-free male Wistar rats aged 8 to 10 weeks (300-350 g) were randomly divided into five groups: one control group with sham operation and four infected groups (S. aureus, S. epidermidis, P. aeruginosa, and E. coli, respectively). Rat osteomyelitis models were established as described by Kalteis et al. (2006), with modifications. Briefly, after the rats were anesthetized with pentobarbital, the hind limb was shaved and swabbed with povidone-iodine solution. Next, a parapatellar incision was made to expose the tibial plateau, and the tibial medullary cavity was opened and widened with a sterile 18-gauge hollow needle. One hundred microliters of bacterial suspension containing 10^8 CFU/mL S. aureus, S. epidermidis, P. aeruginosa, or E. coli was then injected into the medullary cavity through another sterile 18-gauge hollow needle, the tip of which (1.5 mm in length) was cut off and inserted into the medullary cavity. In the control group, 100 µL PBS was injected, with a 1.5-mm needle tip inserted into the medullary cavity. Sterile bone wax was then used to close the medullary cavity. Blood samples were collected 3 days after infection for EV isolation.

Isolation of Serum EVs From Rat Osteomyelitis Models
Six to ten milliliters of blood was collected into a vacuum blood tube without anticoagulant and centrifuged at 2,000 g for 10 min to separate the serum. Serum samples were processed by centrifugation at 3,000 g for 30 min and filtration through a 5-µm filter (Millex Filter Unit; Millipore, Billerica, MA, USA) to remove cell debris. EVs were isolated from the serum samples as described previously by Herrmann et al. (2015). Briefly, the serum samples were transferred to a 10.2-mL Beckman centrifuge tube, which was then filled with filtered Hanks balanced salt solution (HBSS). EVs were then isolated by ultracentrifugation at 100,000 g for 1 h at 4 °C (Optima L-100 XP; Beckman Coulter, Indianapolis, IN, USA). The isolated EV pellets were resuspended in HBSS to one-fifth of the original serum volume, aliquoted, and stored at −80 °C to avoid repeated freeze-thaw cycles.

EVs were lysed to evaluate their protein content, and the protein concentration was measured using the bicinchoninic acid (BCA) protein assay. Briefly, after 25 µL of 5× cell lysis buffer was added to 100 µL EVs, the lysate was centrifuged at 12,000 rpm for 5 min at 4 °C. The supernatant was collected for protein quantification according to the manufacturer's instructions (cat. 23225; Pierce BCA Protein Assay Kit; Thermo Fisher Scientific, Rockford, IL, USA). Protein concentrations were adjusted based on the original volume of serum from which the EVs were derived.

Transmission Electron Microscopy
The morphology of EVs was examined using transmission electron microscopy. EVs were prefixed with 2% paraformaldehyde solution and incubated on carbon-coated copper grids for 20 min at room temperature. After rinsing three times with PBS, samples were fixed with 1% glutaraldehyde solution for 5 min at room temperature and then rinsed 10 times with distilled water. Samples were stained with 4% uranyl acetate for 5 min and imaged using a transmission electron microscope (Tecnai G2 Spirit; FEI, Hillsboro, OR, USA).
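The serum-volume adjustment mentioned above is simple arithmetic; a minimal sketch follows, assuming (as the protocol states) that pellets are resuspended in one-fifth of the starting serum volume. The BCA reading used in the example is hypothetical, chosen so the output reproduces the control value of 209.36 µg/mL reported in the Results.

```python
# Minimal sketch (assumption-labelled): normalize an EV protein reading back
# to "per mL of original serum". All numbers here are illustrative.

def protein_per_ml_serum(bca_conc_ug_per_ml: float,
                         resuspension_ml: float,
                         serum_ml: float) -> float:
    """Total protein in the EV prep divided by the serum volume it came from."""
    total_protein_ug = bca_conc_ug_per_ml * resuspension_ml
    return total_protein_ug / serum_ml

# Example: 8 mL serum, EVs resuspended in 8/5 = 1.6 mL HBSS.
print(protein_per_ml_serum(bca_conc_ug_per_ml=1046.8,
                           resuspension_ml=1.6,
                           serum_ml=8.0))  # -> 209.36 ug per mL of serum
```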
Nanoparticle Tracking Analysis
The size distribution and number of EVs were assessed by nanoparticle tracking analysis (NTA) using a NanoSight NS300 system (Malvern Instruments Ltd., Malvern, Worcestershire, UK) equipped with a 638-nm laser light source and an sCMOS camera. EVs were diluted 1/20 in HBSS and administered manually into the sample chamber with a syringe. Each sample was measured in three 10-s videos recorded at camera level 11. The data were analyzed using NTA software version 3.0. The particle number was adjusted based on the original volume of the serum from which the EVs were derived.

Isolation of MVs From the Supernatant of Bacterial Cultures
MVs were isolated from bacteria following a previously described protocol (Kim et al., 2012). Briefly, overnight cultures of S. aureus, S. epidermidis, E. coli, and P. aeruginosa were pooled separately and centrifuged at 4,000 g for 30 min at 4 °C. The supernatant was filtered through a 0.22-µm syringe filter (Millipore) and further centrifuged at 130,000 g (Optima L-100 XP; Beckman) at 4 °C for 2 h. The pellet was resuspended in sterile HBSS at a 10-fold concentration, and the particle number was measured with the NanoSight NS300 system.

Bacteria Aggregation Assay
The concentration of S. aureus, S. epidermidis, P. aeruginosa, or E. coli was adjusted to an OD of 1.5 at 600 nm. Each strain was then stained with SYTO 9 green fluorescent nucleic acid dye (Invitrogen, Carlsbad, CA, USA) following the manufacturer's protocol. The number of serum EVs from each strain-infected rat was adjusted to 1 × 10^10 particles/mL based on the particle number determined by NTA. To detect bacterial aggregation, 50 µL of EVs at 1 × 10^10 particles/mL was mixed with SYTO 9-stained bacteria at a 10:1 volume ratio and incubated for 15 to 20 min at 37 °C. Ten microliters of the EVs-bacteria mixture was applied to a hemocytometer, allowed to settle for 5 min, and imaged with a fluorescence microscope (BX53; Olympus, Tokyo, Japan) to visualize bacterial aggregation. For each EVs-bacteria reaction, three random fields were imaged at 400× magnification. The diameters of the bacterial aggregates were measured with ImageJ (National Institutes of Health, Bethesda, MD, USA). Aggregates larger than 3 µm in diameter were counted as positive, and the result was expressed as the percentage of positive aggregates among the total particles in the field. Five independent experiments were performed for each bacteria-EVs aggregation reaction.
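A minimal sketch of the aggregate scoring just described (an assumed workflow, not the authors' script): given per-field aggregate diameters, e.g. exported from ImageJ, it counts particles larger than 3 µm as positive and averages the positive percentage over fields. The input numbers are invented for illustration.

```python
# Minimal sketch of the >3 um positive-aggregate scoring described above.

from statistics import mean

def positive_aggregate_percentage(diameters_um: list[float],
                                  threshold_um: float = 3.0) -> float:
    """Percentage of particles in one field whose diameter exceeds the cutoff."""
    if not diameters_um:
        raise ValueError("no particles measured in this field")
    positives = sum(1 for d in diameters_um if d > threshold_um)
    return 100.0 * positives / len(diameters_um)

fields = [  # three random fields per reaction, per the protocol (made-up data)
    [1.2, 4.8, 0.9, 3.6, 2.2, 5.1],
    [2.8, 3.4, 1.1, 0.7],
    [6.0, 2.5, 3.2, 1.9, 4.4],
]
print(f"{mean(positive_aggregate_percentage(f) for f in fields):.1f}% positive")
```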
Sodium Dodecyl Sulfate-Polyacrylamide Gel Electrophoresis and Coomassie Brilliant Blue (CBB) Staining
To investigate whether bacterial MVs were a component of the serum EVs isolated from infected rats, proteins in serum EVs from infected rats and in bacterial MVs were analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE; 10% resolving gel), and the gel was subsequently stained with CBB G-250. Protein components of EVs and MVs were distinguished according to their different band patterns; 7 µg of protein per lane was loaded and separated on 10% SDS-PAGE. After electrophoresis, the gel was fixed in fixing solution (50% methanol and 10% glacial acetic acid) for 6 h and then soaked in staining solution (0.1% Coomassie Brilliant Blue R-250, 50% methanol, and 10% glacial acetic acid) for 20 min with gentle agitation. Finally, excess stain was eluted with destaining solution (40% methanol and 10% glacial acetic acid), and the gel was photographed for further analysis.

Aggregation of Bacteria With EVs From Neutrophils
To evaluate the possible cellular origin of the EVs, we prepared neutrophilic granulocytes from whole blood using a Percoll kit (P8370; Solarbio, Beijing, China) following the manufacturer's instructions. Briefly, 2 mL of rat blood was carefully overlaid onto a three-layer Percoll gradient (75, 65, and 55%) and centrifuged at 1,500 g for 30 min at 4 °C. The layer of neutrophilic granulocytes was carefully pipetted off, and the cells were washed twice with HBSS. The cell concentration was adjusted to 1 × 10^7 cells/mL. Bacterial aggregation was performed according to the method described previously (Timár et al., 2013). Briefly, 50 µL of opsonized bacteria at 1 × 10^8 CFU/mL was added to 450 µL of cell suspension and cultured at 37 °C for 20 min. The culture supernatant was then collected for preparation of EVs. Finally, the prepared EVs were cocultured with the corresponding bacteria for the aggregation test.

Analysis of Serum EVs From Osteomyelitis Patients
Twenty-eight osteomyelitis patients and 21 controls were included in this study. The diagnosis of osteomyelitis was based on the following previously described confirmatory criteria (Morgenstern et al., 2018): supportive histopathological findings, a fistula communicating with bone or an implant, or pathogens identified by culture from at least two separate sites in deep tissue or on an implant. Patients who had undergone internal fixation of a fracture that had finally healed were enrolled as controls. Patients and controls with a history or presence of another infectious disease, diabetes, autoimmune disease, or severe systemic disease were excluded. Five-milliliter blood samples were collected for isolation of EVs by the procedure described above. The particle size and number of EVs were assessed by NTA as well.

To evaluate the bacteria-aggregation effect of EVs from S. aureus osteomyelitis patients, the concentration of S. aureus, S. epidermidis, P. aeruginosa, or E. coli was adjusted to an OD of 1.5 at 600 nm before staining with SYTO 9. Next, 50 µL of EVs at 1 × 10^10 particles/mL was cocultured with SYTO 9-stained S. aureus, P. aeruginosa, E. coli, or S. epidermidis at a 10:1 volume ratio for 15 to 20 min at 37 °C. Quantification of bacterial aggregates was performed as described in the section Bacteria Aggregation Assay.

Statistics
Quantitative values are presented as the mean ± SE. Multiple comparisons were assessed by one-way analysis of variance with least-significant-difference tests. Means of controls and osteomyelitis patients were compared by independent Student t test. A paired t test was used to analyze the aggregation effect on bacteria of EVs secreted by neutrophils. Receiver operating characteristic (ROC) curves were calculated to evaluate the diagnostic efficacy of osteomyelitis biomarkers. SPSS 22.0 (SPSS, Inc., Chicago, IL, USA) was used for statistical analyses. P < 0.05 was considered statistically significant.

RESULTS
To investigate the association between the particle size and number of EVs and the amount of protein in EVs, we measured the protein concentration in EVs per milliliter of serum. Compared with serum EVs from control rats (209.36 ± 39.36 µg/mL), protein concentrations were significantly higher in serum EVs from rats infected by S. aureus (567.22 ± 170.81 µg/mL, P = 0.02),
E. coli (507.56 ± 108.12 µg/mL, P = 0.047), and P. aeruginosa (559.67 ± 66.04 µg/mL, P = 0.022). No such difference was observed when the values were adjusted to micrograms per 1 × 10^9 EVs (Figures 1D,E). As shown in Figure 1F, electron microscopy of negatively stained EVs showed cup-shaped MVs of 100 to 200 nm in diameter. These data indicate that the increased number of EV particles, rather than their increased size, is associated with the up-regulated level of proteins in serum EVs.

Stimulation of Bacterial Aggregation by Serum EVs From Osteomyelitis Rats
The bacteria-aggregation activity of EVs from control and bacteria-infected rats was evaluated by incubating EVs with SYTO 9-stained S. aureus, S. epidermidis, E. coli, and P. aeruginosa. Bacterial aggregation was quantified by counting aggregates larger than 3 µm in diameter using fluorescence microscopy (Figure 2A). As shown by the quantitative data in Figure 2B, massive EVs-bacteria aggregates were observed when serum EVs from S. aureus-infected and E. coli-infected rats were incubated with S. aureus and E. coli, respectively. In response to treatment with serum EVs from S. epidermidis-infected rats, aggregates of S. epidermidis and E. coli were significantly more numerous than those of S. aureus or P. aeruginosa. Serum EVs from P. aeruginosa-infected rats led to large aggregates of S. aureus as well as of P. aeruginosa. It is noticeable that the serum EVs from the osteomyelitis rats infected with each of the four bacterial strains led to large aggregates of the corresponding strain, with the exception of E. coli.

Given that bacterial cells release MVs during host-microbe interactions (Haurat et al., 2015), an MVs-bacteria aggregation assay was performed with MVs harvested from the supernatant of bacterial cultures to evaluate whether MVs from bacteria might stimulate aggregation of the same bacteria. The MVs from the supernatant of bacterial cultures failed to induce aggregation of the corresponding bacteria (Figure 3A). To further investigate whether MVs might have been mixed into the serum EVs isolated from the rats infected by the different bacterial strains, we also separated EV proteins from the serum of non-infected and bacteria-infected rats, as well as MVs from the supernatant of in vitro bacterial cultures, using SDS-PAGE, followed by CBB staining to reveal the band patterns. As shown in Figure 3B, the band patterns of serum EV proteins from the rats infected by the four bacterial strains were similar to those from the control rats, but the amount of protein close to 70 kDa in the EVs from rats infected by each bacterial strain was much higher than in those from control rats. Interestingly, all the band patterns of EVs-S. aureus, EVs-S. epidermidis, EVs-E. coli, and EVs-P. aeruginosa differed from those of MVs-S. aureus, MVs-S. epidermidis, MVs-E. coli, and MVs-P. aeruginosa, respectively. These data indicate that the bacteria-EVs aggregation reaction is activated mainly by cellular components from the infected host rather than by bacterial MVs.

Staphylococcus aureus Aggregation Induced by EVs From Neutrophils
It has been reported that EVs from neutrophils exposed to S. aureus may stimulate EVs-bacteria aggregation (Herrmann et al., 2015). To investigate the possible effect of EVs from neutrophils on bacterial aggregation, EVs were harvested from the supernatant of neutrophils infected by S. aureus, S. epidermidis, E. coli, and P. aeruginosa and cocultured with each of these bacteria, respectively, while EVs from PBS-treated neutrophils were harvested as controls.
As shown in Figure 3C, only the EVs of neutrophils infected by S. aureus induced significant aggregation of S. aureus; no significant bacterial aggregation was observed for the other three bacteria.

Characterization of Serum EVs From Osteomyelitis Patients
To test the diagnostic potential of EVs in infectious osteomyelitis, the particle size and number of EVs from 28 osteomyelitis patients and 21 controls were analyzed using NTA. Significant differences in the size and number of EVs were found between osteomyelitis patients and controls (Figure 4A). Quantitatively, the average diameter of EVs from osteomyelitis patients (133.61 ± 3.55 nm) was significantly larger than that from controls (122.82 ± 3.33 nm, P = 0.037) (Figure 4B). The number of EVs from osteomyelitis patients [(5.19 ± 0.41) × 10^9 particles/mL] was also significantly higher than that from controls [(3.94 ± 0.29) × 10^9 particles/mL, P = 0.024] (Figure 4C). Determining the particle size and number of EVs took less than 4 h from collection of the blood samples.

Bacterial Aggregation Activity of EVs From Osteomyelitis Patients
For the 28 osteomyelitis patients, pathogenic microorganisms were identified by positive bacterial culture and the PHOENIX 100 system. To test the potential of aggregation activity to identify the pathogenic bacteria in osteomyelitis patients, the bacterial aggregation induced by EVs from the five S. aureus osteomyelitis patients was evaluated. Serum EVs from S. aureus osteomyelitis patients gave an aggregation rate of 10.90% ± 2.18% for S. aureus, 2.76% ± 0.65% for S. epidermidis (P = 0.001 vs. S. aureus), and 4.24% ± 1.15% for E. coli (P = 0.005 vs. S. aureus); however, the aggregation rate for P. aeruginosa was 8.22% ± 1.40% (P = 0.211 vs. S. aureus), indicating a weak cross-reaction between serum EVs from S. aureus osteomyelitis patients and P. aeruginosa (Figures 5A,B). Completing the bacterial aggregation assay took less than 18 h (including 16 h for bacterial recovery) from collection of the blood samples.

Extracellular Vesicles as a Potential Biomarker in Quick Diagnosis of Osteomyelitis
Receiver operating characteristic curves and the corresponding area under the curve (AUC) values were calculated to evaluate the diagnostic efficacy of the biomarkers commonly used for osteomyelitis (WBCs, ESR, and CRP) as well as of the size and number of EVs. The closer the AUC is to 1, the better the diagnostic efficacy. ESR and CRP showed significant diagnostic AUC values of 0.829 ± 0.066 (P = 0.001) and 0.767 ± 0.073 (P = 0.005), respectively, but WBCs did not (0.438 ± 0.089, P = 0.516). As the AUC value for EV number was 0.662 ± 0.089 (P = 0.088), it showed little value in distinguishing osteomyelitis patients from controls, but ROC analysis of the particle size of EVs gave an AUC value of 0.722 ± 0.079 (P = 0.019). Further analysis indicated that the best diagnostic threshold is 136.95 nm, with a sensitivity of 46.2% and a specificity of 93.3% (Figure 5C).
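For orientation, this is roughly how such an ROC comparison could be reproduced with scikit-learn rather than SPSS; the particle-size samples below are synthetic stand-ins that only mimic the reported group means and sizes, not the study data.

```python
# Minimal sketch (not the authors' SPSS analysis): ROC curve for EV particle
# size as a binary classifier of osteomyelitis vs. control.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
size_controls = rng.normal(122.8, 15.0, 21)   # mean mimics the control group
size_patients = rng.normal(133.6, 18.0, 28)   # mean mimics the patient group

y_true = np.concatenate([np.zeros(21), np.ones(28)])
scores = np.concatenate([size_controls, size_patients])

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
best = np.argmax(tpr - fpr)                   # Youden's J picks a cutoff
print(f"AUC = {auc:.3f}; cutoff ~{thresholds[best]:.1f} nm, "
      f"sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%}")
```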
DISCUSSION
The present study found a definite association between osteomyelitis and the size and number of serum EVs. In addition, serum EVs from rats infected by S. aureus, S. epidermidis, P. aeruginosa, or E. coli trigger intense aggregation of the corresponding bacterial strain. Because roughly 1 week is required for the growth and identification of microorganisms using conventional bacterial culture (Lesens et al., 2011), the significant clinical value of our work is that serum EVs might be a potential biomarker for a quick diagnostic test that identifies both the disease and the possible infectious microorganisms in osteomyelitis patients, despite the possible cross-reactions induced by other bacteria.

Consistent with reports that EVs accumulate rapidly in the circulation upon infection (Singh et al., 2012; Schorey et al., 2015), we found that the size and number of serum EVs from S. aureus-, S. epidermidis-, P. aeruginosa-, and E. coli-infected rats and from osteomyelitis patients increased significantly. However, ROC analysis showed that only the particle size was a potential diagnostic marker for osteomyelitis patients. The discrepancy in the effect of EVs between animal models and human patients may be attributable to antibiotic pretreatment and to different stages of osteomyelitis. In the present study, all the patients were in an acute stage of chronic osteomyelitis, whereas the animals were on day 3 after acute osteomyelitis infection. It is likely that the particle size and number are sensitive and specific markers for the diagnosis of acute osteomyelitis rather than for the acute stage of chronic osteomyelitis. As early diagnosis of acute osteomyelitis is often challenging but critical for timely treatment to minimize bone destruction, it is particularly significant that particle size and number could serve as biomarkers in the diagnosis of acute osteomyelitis.

Figure 5 | Aggregation activity of Staphylococcus aureus, Staphylococcus epidermidis, Escherichia coli, and Pseudomonas aeruginosa with EVs from S. aureus osteomyelitis patients. (A) Representative fluorescence images of bacterial aggregation. Scale bar, 10 µm. (B) Quantitative analysis of EVs-bacteria aggregates with diameters larger than 3 µm. **P < 0.01, ***P < 0.001. (C) Receiver operating characteristic curves for the diagnostic efficacy of the particle diameter of EVs, the particle concentration of EVs, and commonly used biomarkers including C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), and white blood cell count (WBC).

It has been reported that S. aureus infection can induce the formation and secretion of EVs from neutrophilic granulocytes, and that these EVs in turn have a definite ability to stimulate aggregation of S. aureus ex vivo (Timár et al., 2013; Herrmann et al., 2015). Consistently, we also found that EVs from neutrophils induced bacterial aggregation. Furthermore, besides the strain-specific aggregation activity of serum EVs from bacteria-infected rats, we found that serum EVs from patients with S. aureus osteomyelitis also induced aggregation of S. aureus and a weak cross-reaction with P. aeruginosa. Our findings point to a potential role for the EVs-bacteria aggregation assay as a quick test to identify possible pathogens in osteomyelitis. In addition to host components, pathogen-derived components have also been found on EVs after infection (Schorey et al., 2015). Moreover, the secretion of MVs is a process conserved from microorganisms to multicellular organisms. Studies demonstrate that Gram-positive and Gram-negative bacteria can produce a variety of MVs, which play an important role in eliminating competing organisms, in antibiotic resistance, and in pathological functions throughout the infection process (Lee et al., 2009; Kulp and Kuehn, 2010).
Our ex vivo and in vitro results demonstrate that MVs produced by bacteria cannot induce aggregation of bacteria, and that the protein patterns of EVs from infected rats differ from those of MVs, suggesting that the bacterial aggregation induced by the serum EVs arises mainly from the infected host cells. In conclusion, further clinical studies are needed to confirm our chief finding that serum EVs may be used for the diagnosis of acute osteomyelitis and the identification of the pathogenic microorganisms, which is much faster than bacterial culture.

However, our study has several limitations. First, because osteomyelitis patients infected by S. epidermidis, E. coli, and P. aeruginosa were not available for the present study, bacterial aggregation assays of EVs from such patients need to be carried out in future work. Second, although we found that the components of EVs mediating bacterial aggregation originate from host cells and not from the bacteria, the composition of the EVs was not defined. Third, cross-reaction in EVs-bacteria aggregation is of particular concern for further application; determining the essential component in EVs that mediates bacterial aggregation is therefore necessary to help develop sensitive and specific methods to pinpoint the pathogenic microorganisms in osteomyelitis patients. Further laboratory and clinical studies are also warranted to determine and improve the accuracy of EVs-bacteria aggregation in identifying the causative organisms of osteomyelitis.

DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/supplementary material.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of Nanfang Hospital of Southern Medical University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. The animal study was reviewed and approved by the Ethics Committee of Nanfang Hospital of Southern Medical University.
Transverse Momentum as a Measure of Colour Topologies

Several distinct colour flow topologies are possible in multiparton configurations. A method is proposed to find the correct topology, based on a minimization of the total transverse momentum of produced particles. This method is studied for three-jet $Z^0 \to q\bar{q}g$ and four-jet $W^+W^- \to q_1\bar{q}_2 q_3\bar{q}_4$ events. It is shown how the basic picture is smeared, especially by parton-shower activity. The method therefore may not be sufficient on its own, but could still be a useful complement to others, and e.g. help provide some handle on colour rearrangement effects.

When high-energy processes produce multiparton states, it is generally believed that the confinement property of QCD leads to the formation of colour flux tubes or vortex lines spanned between the partons. These tubes/vortices are here called strings, in anticipation of our use of hadronization based on the string model [1]. A quark or antiquark is attached to one end of a string. A gluon is attached to two string pieces, one for its colour and one for its anticolour index, and thus corresponds to a kink on the string.

The simplest kind of events, $e^+e^- \to \gamma^*/Z^0 \to q\bar{q}$, gives a single string stretched between the $q$ and the $\bar{q}$. The direction of the colour flow can, in principle, be distinguished by flavour correlations [2], but will not be studied here. At next order, $q\bar{q}g$ events correspond to a string stretched from the $q$ via the $g$ to the $\bar{q}$. The colour topology is unique, but experimentally it is not normally known which of the three jets is the gluon one, so this gives a threefold experimental ambiguity. From four partons onwards, true ambiguities of the topology exist, even when the identity of the partons is known. In $q\bar{q}gg$ events, the string can be drawn from the $q$ to either of the two gluons, on to the other gluon and then to the $\bar{q}$. This gives two possible topologies. A third topology, not expected at leading order in $N_C$, is one where a string runs directly between the $q$ and $\bar{q}$ and another string forms a closed loop between the two gluons. There is an experimental ambiguity in picking the two quarks among the four partons, which gives a further factor of six, i.e. a total of eighteen possible topologies (reduced to fifteen if single and double strings are not distinguished).

Another four-jet final state is obtained in the process $e^+e^- \to W^+W^- \to q_1\bar{q}_2 q_3\bar{q}_4$. Since the $W$'s are colour singlets, in principle each of $q_1\bar{q}_2$ and $q_3\bar{q}_4$ forms a separate colour-singlet string. However, by soft gluon exchange or some other mechanism, alternatively $q_1\bar{q}_4$ and $q_3\bar{q}_2$ may form two singlets. Since it would not normally be known which of the four jets are quarks and which antiquarks, there is a total of three experimental pairings of the four jets, where the third corresponds to the unphysical flavour sets $q_1 q_3$ and $\bar{q}_2\bar{q}_4$.

Among the topologies above, only the three-jet $q\bar{q}g$ events have been studied in detail. It has been shown that the string approach correctly predicts the topology of the particle flow, with a dip in the angular region between the $q$ and $\bar{q}$ jets [3,4]. This comes about because the $qg$ and $\bar{q}g$ string pieces produce (soft) particles in the respective angular ranges, while there is no string directly between the $q$ and $\bar{q}$. The same effect is also obtained as a consequence of colour coherence in perturbative soft-gluon emission [5]; normally these two approaches give the same qualitative picture.
Although by now LEP 1 has produced large samples of four-jet events, the energy flow between these jets has not been studied in detail, presumably because of the large number of possible topologies. LEP 2 will provide four-jet events from $W$ pairs, and here the issue of colour topology may become of great importance [6,7]. If, by colour rearrangement, an original $q_1\bar{q}_2$ plus $q_3\bar{q}_4$ colour-singlet configuration is turned into a $q_1\bar{q}_4$ plus $q_3\bar{q}_2$ one, particle production will be somewhat different. Methods to determine the $W$ mass from LEP 2 four-jet events will then give different results. There is more than one model of the colour rearrangement process; therefore the uncertainty on the $W$ mass could be as high as 100 MeV [8], i.e. larger than the expected statistical error of the order of 40 MeV. The final number would then be dominated by the mixed hadronic-leptonic channel $W^+W^- \to \ell\nu_\ell q\bar{q}$, which has about the same statistics. The hadronic events could be recuperated if colour rearrangement effects could be diagnosed from the data itself. Additionally, the ability to distinguish between various reconnection scenarios could provide information on the nature of the QCD vacuum, and would therefore be of fundamental interest.

Unfortunately, realistic reconnection scenarios give only very minute effects on the data (remember that the effect on the $W$ mass is at most of the order of one per mille), and so attempts to find useful signals have been rather unsuccessful [7]. There is only one claim of a signal having been found [9] where, in a few events, a central rapidity gap separates the particle production of two low-mass, colour-singlet reconnected systems. It has turned out, however, that neglected angular correlations in the $W$ pair decays tend to reduce this signal to the border of observability [10,11]. Therefore one would like to devise alternative methods to diagnose the appearance of colour rearrangement.

We now want to propose and study a method to select the correct string topology. The starting point is the observation that, were it not for various smearing effects, hadrons would be perfectly lined up with the string pieces spanned between the partons. In momentum space the hadrons would appear along hyperbolae with the respective endpoint parton directions as asymptotes. In a frame where a string piece has no transverse motion, i.e. where the two endpoint partons are moving apart back-to-back, the hadrons produced by this piece would have vanishing transverse momentum. A hadron could then successively be boosted to the rest frame of each parton pair until the pair is found for which the particle $p_\perp$ vanishes. When smearing is introduced, this $p_\perp$ will no longer vanish, but it would still be reasonable to take, as a "best bet", the assignment of each hadron to the string piece with respect to which it has the smallest $p_\perp$. The most likely string configuration would then be the one where the sum of all hadron $p_\perp$'s is minimal. While the fluctuations for each single hadron are too large for usefulness, it could be hoped that the net effect of all the hadrons would single out the correct topology.

To be more precise, here is the proposed scheme, to be carried out for each event:
1. Use some jet clustering algorithm to identify the directions of the number of jets that should be used for the current application.
2. Enumerate the possible colour flow topologies allowed in the process.
3. Form one $\sum p_\perp$ measure for each topology, where the sum runs over all final-state particles in the event.
For each particle, its $p_\perp$ is defined as the minimum of the $p_\perp$'s obtained by boosting the particle to the rest frame of each of the string pieces making up the current topology.
4. Identify the correct topology as the one with the smallest $\sum p_\perp$.
(A sketch of this core computation is given after this discussion.)

Several effects could smear the simple picture and lead to incorrect conclusions. The main ones are:
• Errors in the reconstruction of the jet directions.
• Additional $p_\perp$ caused by perturbative QCD branchings, predominantly gluon emission.
• The $p_\perp$ generated by the fragmentation process.
• Secondary decays of unstable hadrons.

A convenient test bed for the relative importance of these effects is provided by $q\bar{q}g$ events. In symmetric three-jet events at $Z^0$ energies, without any parton-shower activity, the gluon is correctly identified in 87% of the cases. This should be compared with the 33% expected from a random pick among the three alternatives. When complete $Z^0$ events are generated (with Pythia 5.7 and Jetset 7.4 [12]), three jets are found (in 10-20% of the events), and the method above is applied, the success rate is around 55-60%. The "correct" answer is here found by tracing the original quark and antiquark through the shower history and associating them with the jets they are closest to in angle. The conclusion of this kind of study is that the method does work, though maybe not as well as one might have hoped, and that the main cause of errors is perturbative gluon emission.

The studies can be extended to four-jet $q\bar{q}gg$ events, although the large number of colour flow topologies makes the method rather inefficient for selecting the correct topology there. A more modest objective is to establish differences between string and independent fragmentation [13] models. In the latter approach, particle production is aligned along the direction connecting a parton with the origin in the c.m. frame of the event, again with some smearing effects. This approach is already disfavoured by the three-jet studies [4], but is a convenient reference. We have not completed a full study, but can illustrate with results for a simple four-jet cross topology with $q$ and $\bar{q}$ back-to-back. Fig. 1a gives the difference in $\sum p_\perp$ between the worst topology (where none of the string pieces run the way assumed) and the right one. Results are for charged particles, and the $\sum p_\perp$ has been normalized to the number $N$ of charged particles per event. The independent fragmentation distribution is symmetric around the origin, as it should be, while the string approach shows a clear offset. Without knowledge of the correct answer, one is reduced to studying a measure such as the difference between the maximal and the minimal $\sum p_\perp$ among all twelve possible topologies, Fig. 1b. By construction this is a positive number, also for independent fragmentation, but an additional shift is visible for the string approach. Identification of one or both quarks, e.g. by $b$-quark tagging or energy ordering (quarks are likely to have higher energy than gluons), would cut down the number of topologies to be compared and therefore enhance the signal. Some experimental studies along these lines could therefore be interesting.
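As an illustration only (not code from this letter), here is a minimal Python sketch of the $\sum p_\perp$ measure defined in steps 3-4 above: each particle is boosted to the rest frame of every string piece of a hypothesised topology, its transverse momentum with respect to the back-to-back parton axis is computed, and the per-particle minimum is summed. All four-momenta, names, and conventions are assumptions chosen for the sketch.

```python
# Minimal sketch of the sum-p_perp topology measure.
# Four-momenta are numpy arrays (E, px, py, pz) in GeV.

import numpy as np

def boost_to_rest_frame(p, system):
    """Boost four-vector p into the rest frame of the four-vector `system`."""
    beta = system[1:] / system[0]          # velocity of the system
    b2 = float(beta @ beta)
    if b2 < 1e-12:                         # system already at rest
        return p.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = float(beta @ p[1:])
    e = gamma * (p[0] - bp)
    vec = p[1:] + beta * ((gamma - 1.0) * bp / b2 - gamma * p[0])
    return np.concatenate(([e], vec))

def pt_wrt_piece(particle, parton_a, parton_b):
    """p_perp of `particle` in the rest frame of the string piece (a, b),
    measured relative to the back-to-back axis of the endpoint partons."""
    piece = parton_a + parton_b
    axis = boost_to_rest_frame(parton_a, piece)[1:]
    axis = axis / np.linalg.norm(axis)
    q = boost_to_rest_frame(particle, piece)[1:]
    return float(np.linalg.norm(q - (q @ axis) * axis))

def sum_pt(particles, topology):
    """Sum over particles of the minimum p_perp over the string pieces of a
    hypothesised topology; `topology` is a list of (parton_a, parton_b)."""
    return sum(min(pt_wrt_piece(h, a, b) for a, b in topology)
               for h in particles)
```

The correct topology would then be selected as the hypothesis with the smallest `sum_pt` value.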
We now turn to the main application of this letter, namely four-jet $W^+W^-$ events. Since the two W decay vertices are less than or of the order of 0.1 fm apart, while typical hadronic distances are of the order of 1 fm, the two W decays occur almost on top of each other. QCD interconnection effects could appear at all stages of the process, namely in the original perturbative parton cascades, in the subsequent soft hadronization stage, and in the final hadronic state. It can be shown that perturbative effects are strongly suppressed [7], but no similar arguments hold for the other two. Bose-Einstein effects in the hadronic state could be the largest individual source of W mass uncertainty [14], but they are the least well studied. The presence of such Bose-Einstein effects could presumably be established from the data itself, while reconnection in the hadronization stage is less easy to diagnose. We will study whether the $\sum p_\perp$ measure offers any help here.

The reconnection models used as references in this work are:
1. Reconnection after the perturbative shower stage but before the hadronization, with reconnection occurring at the 'origin' of the showering systems. This 'intermediate' model is the simplest of the more realistic ones.
2. Reconnection when strings overlap, based on cylindrical geometry: a 'bag model' based on a type I superconductor analogy. The reconnection probability is proportional to the overlap integral between the field strengths, with each field having a Gaussian fall-off in the transverse direction, the radius being about 0.5 fm. The model contains a free strength parameter that can be modified to give any reconnection probability.
3. Reconnection when strings cross. In this model the strings mimic the behaviour of the vortex lines in a type II superconductor, where all topological information is carried by a one-dimensional region at the core of the string.
4. Reconnections occur in such a way that the string 'length' is minimized. As a measure of this length the so-called λ-measure is used, which essentially represents the rapidity range for particle production counted along the string. This can also be seen as a measure of the potential energy of the string.
Models 1 through 3 are described in [7] and the last one in [9]. Further models have been proposed [15].

For the study, events should have a clear four-jet structure. To achieve this we demand that each jet must have some minimum energy and that the angle between any jet pair must not be too small [7]. With the expected statistics of LEP 2, the number of events left after the cuts will be about 2500 per experiment; statistics will therefore be a problem when different models are compared with each other. Three different algorithms are used to identify which jet pairs belong together:
1. The $q_1\bar{q}_2 q_3\bar{q}_4$ configuration before parton showers can be matched one-to-one with the reconstructed jets after hadronization. This is done by minimizing the product of the four (jet + quark) invariant masses. The original quark information is not available in an experimental situation, so this measure can only be used as a theory reference.
2. The invariant mass of jet pairs from the same W should be close to the known W mass of about 80 GeV. Among the three possible jet pairings, the one is therefore selected which has minimal $|m_{ij} - 80\,\mathrm{GeV}| + |m_{kl} - 80\,\mathrm{GeV}|$, where $i, j, k, l$ label the four jets. We have picked this method rather than a few similar ones since it has (marginally) the best correlation with the reference method above. This method mainly probes the electroweak aspect of W pair production, namely the W mass spectrum, while it should be less dependent on the QCD stages of showering and hadronization.
3. The $\sum p_\perp$ method introduced in this letter provides an alternative measure, one that should instead be sensitive to the QCD stages and less so to the electroweak one. Without colour reconnection it should (hopefully) agree with the previous two, while it could show interesting differences if reconnection occurs.

The agreement between these three methods is shown in Table 1, with and without reconnection, the former for model 1. As expected, algorithm 2 comes close to the "correct" answer of algorithm 1, and is not significantly affected by colour rearrangement. The $\sum p_\perp$ method, algorithm 3, shows the expected dependence on colour rearrangement, with a smaller success rate when colour rearrangement is included. Note, however, that the success rate does not drop below the naive 33% level, indicating that the $\sum p_\perp$ method is also picking up other aspects of the events, such as the jet topology. The results in the table are for an energy of 170 GeV, but we do not expect a significant energy dependence.

Both methods 2 and 3 can be applied to data, so the correlation between them is an observable. The rate of reconnection could be extracted by interpolation between the two extremes of no and complete colour rearrangement. Statistically it should be feasible to establish a signal for reconnections if they occur at a rate above the 10-20% level. The systematic errors on the correlation method may be large, however, especially for the model-dependent change when reconnection is included. It is therefore important to study whether a differential distribution would better highlight qualitative differences. Algorithm 2 can be used to identify the best hypothesis for which jets should be paired to form the two W's, and also the worst hypothesis, where $|m_{ij} - 80\,\mathrm{GeV}| + |m_{kl} - 80\,\mathrm{GeV}|$ is maximal. The $\sum p_\perp$ can be calculated for each of these two extremes. When the strings are reconnected, the first sum should increase and the second one decrease relative to the no-reconnection case. The signal is therefore enhanced by making use of the difference, $\Delta = (\sum p_\perp)_{\mathrm{worst}} - (\sum p_\perp)_{\mathrm{best}}$, which should decrease in the case of reconnection. The subtraction furthermore has the advantage of removing some spurious fluctuations from the comparison. The main example is high-momentum particles, for which the assumed string hyperbolae align well with the four jet directions, so that all three string hypotheses give the same contribution to $\sum p_\perp$. (Our studies show that particles with momenta above 3 GeV add little to the discrimination between the string hypotheses.)

The $\Delta$ measure is plotted for models 1-4 in Fig. 2. Note that the results for models 1 and 4 correspond to 100% reconnection, while the reconnection rate in models 2 and 3 is about 30%. (The reconnection fraction could be varied in all models, but the effects of reconnection are not linear in this fraction for models 2 and 3, so a reasonable value is preferred here.) All reconnection models show the expected shift towards smaller $\Delta$ values, and the magnitude of the shift is comparable once differences in assumed reconnection fractions are removed. The remaining differences imply that one cannot model-independently extract a reconnection rate from the data. The signal for reconnection may be enhanced, compared with the results of Table 1, by cuts on $\Delta$, e.g. by only considering the fraction of events with $\Delta < 0$. This comes at the price of reduced statistics, however, so the balance is not so clear.
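As an illustration of algorithm 2 and the $\Delta$ observable, here is a hedged sketch, again our own construction rather than the authors' code; `sum_pt` is the helper from the earlier sketch, each W candidate is approximated as a single string piece between its two jets, and 80 GeV is the nominal W mass quoted in the text.

```python
import numpy as np

MW = 80.0  # nominal W mass in GeV, as used in the text

def inv_mass(p1, p2):
    """Invariant mass of the sum of two four-vectors (E, px, py, pz)."""
    s = p1 + p2
    return np.sqrt(max(s[0]**2 - s[1:] @ s[1:], 0.0))

def pairings(jets):
    """The three ways of grouping four jets into two W candidates."""
    i, j, k, l = jets
    return [((i, j), (k, l)), ((i, k), (j, l)), ((i, l), (j, k))]

def delta_observable(hadrons, jets):
    """Delta = (sum p_perp)_worst - (sum p_perp)_best, where the best/worst
    pairings minimize/maximize |m_ij - MW| + |m_kl - MW|."""
    scored = [(abs(inv_mass(*a) - MW) + abs(inv_mass(*b) - MW), (a, b))
              for a, b in pairings(jets)]
    best = min(scored, key=lambda t: t[0])[1]
    worst = max(scored, key=lambda t: t[0])[1]
    # Each pairing defines a string topology with two pieces (one per W).
    return sum_pt(hadrons, worst) - sum_pt(hadrons, best)
```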
It may be better to use measures that gauge the full shape of the curve, given that the physics of the no-reconnection scenario is presumed well known (by extrapolation from the $Z^0$ results). Prospects look promising for diagnosing colour rearrangement along these lines but, as before, the combination of small effects and limited statistics could give marginal results. Furthermore, hopes should not be raised too high that this would immediately imply a scheme to correct a W mass measurement for the reconnection effects: of the models above, number 3 shifts the W mass downwards while the others shift it upwards [7,8], and yet they all shift the $\Delta$ distribution in the same direction. Clearly the study of reconnection effects ultimately must be based on a host of different measures, the $\sum p_\perp$ one among others.

It may be of some interest to understand why the effects are not larger. Several simulations have been performed with various simplified toy models to study this issue [16]. It turns out that there are two main mechanisms that smear distributions and make them less easily distinguished. One is parton showers, just as for the $Z^0 \to q\bar{q}g$ process studied above. The other is the geometry of the process, namely that the helicity structure of the $W^+W^- \to q_1\bar{q}_2 q_3\bar{q}_4$ process is such that $q_1$ and $q_3$ tend to go in the same general direction, as do $\bar{q}_2$ and $\bar{q}_4$ [10]. The overall change of string topology by reconnection is therefore not as drastic as if $q_1$ had tended to be close to $\bar{q}_4$ and vice versa. Had it been possible to remove these effects, i.e. to study events without shower activity and with $q_1$ and $\bar{q}_4$ reasonably close in angle, the original and the colour-reconnected $\Delta$ distributions would almost completely separate. In practice, only a modest reduction of shower activity could be obtained by requiring that all four jets be reasonably narrow, while tagging of quark vs. antiquark (e.g. with charm) would leave very few events. Therefore no simple solutions have been found.

In summary, we have introduced a $\sum p_\perp$ measure as a diagnostic of the colour topology of hadronic events. The fuzzy nature of hadronic final states somewhat limits the usefulness of the method. In particular, the more drastic effects associated with perturbative gluon emission tend to obscure the subtler effects of different colour topologies. The $\sum p_\perp$ measure is therefore no panacea, but could still be a useful addition to the (not so large) tool box of methods to characterize the nonperturbative stage of hadronic events. Applications include three- and four-jet events at $Z^0$ energies and, in particular, W pair decays to four jets at LEP 2. In the latter process, it could be possible to detect the effects of colour reconnection with this approach. Further details on the studies reported here may be found in [16].

Table 1: Fraction of agreement in jet pair identification between the three algorithms.

              no reconnection          with reconnection (model 1)
              algorithm 1     2        algorithm 1     2
algorithm 2       85%        --            85%        --
algorithm 3       68%       65%            46%       46%
Patient-specific cognitive profiles in the detection of dementia subtypes: A proposal

Many physicians rely on sum score cognitive screening tests to evaluate patients for cognitive decline. Because the vast majority of cognitively impaired patients never receive more extensive testing, the results of these screening tests impact patients and their family members profoundly. No previous study has examined whether the metrics used by the popular Mini-Mental State Examination, Montreal Cognitive Assessment, and Saint Louis University Mental Status tests reliably identify single-domain deficits or allow clinicians to adequately track disease progression. We compare side by side the metrics used by these three tests to highlight the differences in the ways they measure domain impairments. We then contrast the sum score approach to cognitive screening with brief domain-specific tests that use extended metrics in each domain examined. Last, we suggest that moderate-to-severe domain-specific deficits on these tests should lead physicians to anticipate specific functional problems and alert family members.

Screening results shape a physician's ability to (1) make treatment recommendations, (2) educate patients and family members as to the functional consequences of impaired cognition, and (3) help family members anticipate and plan for a patient's need for caregiver assistance. The Diagnostic and Statistical Manual of Mental Disorders 5th Edition (DSM-5) requires that the diagnosis of a neurocognitive disorder be based on two separate evaluations: cognitive testing and assessment of instrumental activities of daily living (IADLs).3 Basic activities of daily living (BADLs) involve the motor skills of toileting, bathing, dressing, eating, and transferring from a bed or chair to standing or walking. These are often assessed with the Katz Index of Independence in Activities of Daily Living (ADL).4 By contrast, IADLs have to do with an individual's ability to communicate with others, interact with a community, and manage a variety of interactive tasks. In 1969 Lawton and Brody created a scale to measure IADLs.5 This widely used scale examines seven functional areas: the ability to use a telephone, food preparation, housekeeping, laundry, transportation, management of medications, and handling of finances. Each of these seven areas has several subdivisions. Because of their greater sensitivity to cognitive decline, IADL problems appear earlier than BADL problems in the course of a dementing illness. The measurement of functional abilities is addressed more fully in the Discussion section of this paper.

The lens through which physicians examine cognition shapes what they detect and what they can tell patients. The metrics that popular screening tests use to detect and measure impairment in different domains vary greatly from test to test. The ability of an in-office test to predict behavior and performance in the real world, its ecological validity, may be high or low. Together with laboratory work, neuroimaging studies, a physical examination, and review of how a patient performs daily activities, cognitive screening plays a critical role in determining whether a patient has a neurocognitive disorder. Because there is at present no effective pharmacologic treatment for dementia, the role that physicians play in the education of patients and family members is central to their charge as health providers.
For every patient identified as suffering from AD or another form of dementia there is likely to be a circle of friends, family members, and other relatives whose lives are significantly impacted by the patient's neurocognitive disorder. These individuals often live for years quietly confused and frightened, uncertain of the ways in which the patient's life and their lives will be affected. Cognitive problems are frequent in older adults and impact reasoning, judgment, decision making, self-awareness, and behavior. Impaired attention associated with confusional states or delirium affects cognition in 14% to 56% of all hospitalized elderly patients.7 Estimates are that between 29% and 76% of demented patients in primary care settings go undiagnosed.8,9 While dementia often manifests as impairment in memory, it can also present as difficulty in the domains of language, spatial skills, or executive functioning. Failure to recognize the early stages of dementia puts patients at risk for safety issues at home, poor medication compliance, vehicular accidents, getting lost, mail or telephone scams, fraud, and other types of elder abuse. False negative results of screening tests with low sensitivity can delay detection of confusional states or dementia. The costs to patients and family members of either false negative or false positive results can be high.10,11 Although brief screening tests are designed as triaging rather than diagnostic tools, they are widely used beyond their intended scope, and in practice great reliance is placed on them in diagnosing dementia. The vast majority of patients found to have cognitive decline on a screening test will not receive any other form of cognitive testing.12

Second-tier screening tests

Second-tier tests take 10 to 15 minutes to give and are used when the clinician wishes to test multiple cognitive domains. The most widely used of these, the MMSE, was developed in 1975.13 Until recently it was a component of the Uniform Data Set Neuropsychological Battery (UDSNB 3.0).17 The MMSE is required for certain medication insurance reimbursements and has been used as an outcome measure for >1200 dementia drug trials.

Despite its enormous popularity, the MMSE has several drawbacks. An extensive literature describes its insensitivity and other weaknesses.18,19,20 The MMSE identifies moderate to severe dementia. However, its commonly used cutoff score of 24 does not detect early cognitive decline, particularly in individuals with higher levels of education. Using a cutoff score of 24, O'Bryant et al.21 found 36% false negative diagnoses in 1141 college-educated individuals. They suggested raising the cutoff score to 27 to improve sensitivity and specificity. Shiroky et al.22 identified eight patients in a memory clinic who received a perfect MMSE score of 30 but met formal DSM-IV diagnostic criteria for "probable dementia" based on the combined results of neuropsychological testing (NPT) and Reisberg's Global Deterioration Scale (GDS).23 Physicians who administer the MMSE often fail to use the age and education norms gathered from 18,000 subjects by Crum et al.24

Mild cognitive impairment (MCI) may slow mental speed or decrease efficiency at work but does not interfere with independence in daily functioning, as dementia does.25 Estimates of MCI prevalence rates range from 3% to 42% in individuals 65 or older. This wide range may be partially due to researchers using different definitions of MCI and different cutoff scores.26
The insensitivity of the MMSE to both MCI and dementia derives in part from the weighting of different subtests in the composite score. Performance in memory and visuospatial skills, the two domains that show the earliest impairments in most cases of MCI and AD, contribute only 3 points and 1 point, respectively, to the sum score on the MMSE. By contrast, 5 points are assigned to serial 7s, a task that can be failed for many reasons besides cognitive decline, including attentional problems, pre-existing arithmetic weakness, performance anxiety, and depression. The second edition of the MMSE exists in three versions,27 described further below.

Dautzenberg et al.30 suggest that studies validating the MoCA using healthy controls (instead of patients representative of a clinical setting) have led to overestimates of the test's specificity. In a sample of 185 geriatric outpatients in a memory clinic, Moafmashhadi et al.31 found "significant but modest" correlations between MoCA subtest scores and neuropsychological factor scores in the corresponding domain. They concluded, "Performance on individual items and subtests of the MoCA yields insufficient information to draw conclusions about impairment in specific cognitive domains as determined by neuropsychological testing." No attempt was made to explore possible relationships between MoCA subtest scores and difficulty in specific IADLs. Kaur et al.34 found that the MoCA-MIS (a memory index score described below) was better than delayed recall of a narrative paragraph at discriminating normal cognition from amnestic MCI.

Several efforts have been made to move beyond the total score (TS) of the MoCA to explore the relationship of subscores to diagnosis, staging, and correlation with functional autonomy. In addition to the MoCA-MIS, Julayanont et al.33 proposed five other index scores for the MoCA: orientation (OIS), attention (AIS), language (LIS), visuospatial function (VIS), and executive skills (EIS). Kim et al.35 explored the validity of these scores by comparing them to domain scores on a standard neuropsychological measure. In a group of 104 subjects with amnestic MCI, all six MoCA index scores showed significant correlations with domain scores on the Seoul Neuropsychological Screening Battery, 2nd Edition. Henderschott et al.36 examined the use of the six MoCA index scores in a group of patients with Parkinson's disease. They found the EIS sensitive and specific, the VIS and MIS sensitive but not specific, and the AIS and LIS to be neither sensitive nor specific. Goldstein et al.,37 however, found that the MoCA TS had higher incremental validity than the index scores in distinguishing MCI from normal cognition, MCI from AD, and normal cognition from AD.

SLUMS

The SLUMS exam was created in 2006 by Tariq et al.15 and is available in eight languages. It is widely used in Veterans Administration hospitals across North America.38 The distinction of delirium from dementia is critical in the elderly.

COMMENT ON SECOND-TIER SCREENING TESTS

Impairment in attention and registration (initial learning), accompanied by either agitation or hypoactivity, is the hallmark of confusional states. Tests that do not reliably identify attentional deficits will not help clinicians distinguish reversible confusional states from dementia. The MMSE has no adequate measure of attention, the MoCA allocates 3 points to attention, and the SLUMS assigns attention only 2 points. Sum score tests do not calibrate degrees of difficulty in language subdomains. This makes it difficult for clinicians using these tests to identify the focal "language-led" dementias, namely the primary progressive aphasia variants of both AD and frontotemporal dementia.41 Physicians using sum score tests will have difficulty recognizing the role of language impairments in lowering a global score beneath a cutoff score for dementia. Neither the MoCA nor the SLUMS contains a graded measure of language comprehension, and the SLUMS does not test naming.
Because they do not explore the executive areas of reasoning, social judgment, or self-awareness, sum score tests cannot be used to establish whether a patient has the capacity to make sound financial decisions or appreciate the consequences of their actions. Sum score tests do not determine whether a patient can resist undue influence, live alone, drive safely, or function reliably at a particular job. Finally, they lack ecological validity in that they do not anticipate or predict the consequences of cognitive impairments in terms of specific functional or IADL impairments in a patient's day-to-day life.

The costs of false negative results on second-tier tests can include (1) delay in the detection and timely treatment of pathological conditions, (2) failure to provide proper medical information and guidance, (3) inappropriate reassurance to patients and family members, and (4) failure to anticipate a patient's functional limitations and care needs. On the other hand, a false positive diagnosis of dementia can lead to depression or anxiety and to inappropriate treatment with either cholinergic agonists (risk of cardiac arrhythmias, diarrhea, dehydration, and syncope) or N-methyl-D-aspartate receptor antagonists (risk of headache, dizziness, and cardiac arrhythmias).

DOMAIN-SPECIFIC TESTS

The gold standard for cognitive testing is NPT. It provides a fine-grained delineation of cognitive functioning and defines the elements that should be addressed in cognitive testing. NPT is, however, impractical for use in general medical settings as it is access-limited, expensive, time consuming, and frequently exhausting for elderly patients. Clinicians are often unaware that brief domain-specific neurocognitive tests constitute a more accessible and less expensive form of NPT. Because of their brevity (10-30 minutes) these tests can be given repeatedly to track illness progression or document response to treatment. They examine the basic building blocks of cognition (attention, registration, and language) as well as the higher-level skills (memory, visuospatial skills, and executive functioning) that rest on these building blocks. The relationship among these levels of cognition is depicted in Figure 1. Impairments in attention, registration, or language comprehension, three foundational ability areas, can lower performance in higher-level areas such as memory and the executive skills of reasoning and judgment. None of the three second-tier tests adequately measures any of these three basic ability areas.

By contrast, domain-specific tests use sufficiently graded measures in each domain examined to identify single-domain impairments. This enables them to capture a unique pattern of cognitive strengths and weaknesses. Figure 2 shows one example of such a patient-specific cognitive profile. Such patterns of information assist examiners in detecting dementia and staging it into various levels of cognitive impairment. Domain deficits that fall at either the moderate or the severe level of impairment alert examiners to the likelihood that a patient either has or will soon have problems in specific IADLs.42 Sharing this information with family members allows them to better anticipate such problems and plan accordingly. This reduces the risk of caregiver fatigue or burnout.43 The likely IADL consequences of deficits in individual cognitive domains are shown in Table 3.
Table 4 provides an overview of four domain-specific tests: Cognistat,44 the CERAD-NB,16 Addenbrooke's Cognitive Examination (ACE-III),45 and the Cambridge Cognition Examination (CAMCOG).46 Information on how to obtain each of these tests is provided in the references.

DISCUSSION

The division of cognition into unique domains or macro areas is useful but admittedly artificial.48 So too, there are multiple ADL subareas. As dementia progresses patients become increasingly unreliable historians, making the observations and reports of knowledgeable informants essential. The ADLs most suitable for assessment in MCI and early dementia are different from those that are appropriate for evaluating the functional status of patients with moderate or severe dementia.49 A broad range of IADLs can be considered for possible correlations with either overall cognitive test scores (on screening tools or lengthier test batteries) or performance in specific cognitive domains. Galasko et al.50 explored which of 45 ADLs (either chosen from existing ADL scales23,51,52,53,54 or selected on the basis of clinical experience) were sensitive to the impact of domain impairment. Informant caregivers, who were predominantly family members, used the Lawton-Brody rating scales. The authors found that attention, memory and learning, and language were all significant predictors of both basic ADLs and IADLs. This observation led them to comment, "Cognitive test performance also predicts basic ADLs (e.g., bathing, grooming, dressing, and feeding). This suggests that even in patients with mild AD, basic ADLs likely also require complex cognitive functioning."56

For a patient's family members and caregivers, being told the patient's score on the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), or Saint Louis University Mental Status (SLUMS) exam does little to inform, guide, or reassure them. A 2020 report of the AARP and the National Alliance for Caregiving6 indicates that 10 million Americans are providing care for patients with AD or other dementia. A majority of these caregivers are family members, and 61% of them are also working. If caregivers are shown how cognitive impairments underlie and contribute to ADL and IADL difficulties, they are likely to be better able to anticipate and plan for a patient's functional needs. Unfortunately, the most widely used cognitive assessment tools in the United States and many other countries do not enable physicians and other clinicians to provide under-appreciated caregivers with information relevant to managing functional impairments in ADLs and IADLs.

IN CONTEXT

1. Systematic Review: An internet search scanned the literature for papers analyzing the strengths and weaknesses of three widely used cognitive screening tests: the Mini-Mental State Examination, Montreal Cognitive Assessment, and Saint Louis University Mental Status exam.

2. Interpretation: The conventional method of measuring and defining cognition with a sum score has resulted in physicians providing inadequate and confusing information. The brief and varying metrics used by these sum score screening tests do not allow physicians to adequately (a) identify dementia subtypes, (b) stage progression of illness, (c) detect isolated domain impairments, (d) compare performance in individual domains, or (e) anticipate problems in specific functional instrumental activities of daily living areas.

3. Future Directions: Future research should document the degree to which, absent neuropsychological consultation, brief domain-specific testing done by trained staff improves physicians' diagnostic skills and increases their ability to educate patients, family members, and caregivers.
A standard version (MMSE-2:SV) has replaced problematic items from the original MMSE and modified several tasks to adjust for difficulty. It can be administered in 10 to 15 minutes. A brief 16-point version (MMSE-2:BV) can be administered in 5 minutes and requires no stimuli for administration. The expanded 90-point version (MMSE-2:EV), which can be administered in 20 minutes, extends the 3-point memory portion of the standard version by adding recall of a narrative paragraph. The EV also includes a timed digit-symbol task, sensitive to subcortical dementia, as a measure of psychomotor speed. Alternative forms (blue and red) of each of the three MMSE-2 tests diminish practice effects.

Domain testing on the MMSE
Attention: Not tested.
Registration: No score is recorded for the number of trials needed to learn a 3-word list.
Language: 8 points in total are allocated to five separate language areas, the most detailed language evaluation of the three second-tier tests. However, if a patient fails the 3-step comprehension task there is no assessment of the patient's ability to reliably perform 1- or 2-step tasks.
Memory: Contributes only 3 points to the sum score. This makes it difficult to document improvement or deterioration of memory on repeat testing.
Visuospatial: Only 1/30 points, the least of the three second-tier tests.
Executive skills: Not tested.

2.2.2 MoCA

Developed by Nasreddine in 1995, the MoCA is available in 35 languages and used in 200+ countries. Its sensitivity to MCI and executive dysfunction has made it increasingly popular. Eighty-six culturally adapted versions of the original MoCA existed in 2022, and 74 versions are on the MoCA website.28 The MoCA has replaced the MMSE as a component of Version 3 of the UDSNB 3.0 currently used at ADRCs across the United States.17 Since 2019, training and certification in administration and interpretation of the MoCA are required to give the test. Scores of 26 to 30 are considered normal, scores of 20 to 25 indicate MCI, and scores of 0 to 20 suggest dementia. A patient's level of education is considered in the final score by adding 1 point if the patient has 12 or fewer years of schooling. The standard MoCA test uses a 5-word memory list.

Despite the MoCA's heightened sensitivity to cognitive decline compared to the MMSE, Chan et al.29 reported that 78% of 136 acute stroke patients determined to be cognitively intact on MoCA testing had impairments in one or more cognitive domains on NPT that the MoCA did not assess, one of which was visual memory. Fifty-nine percent of stroke patients who received perfect scores on the MoCA were found by Chan et al. to have cognitive impairments on NPT. Durant et al.32 found a general correlation between lower MoCA total scores and higher levels of functional impairment but did not look at subtest scores in relation to IADLs.
Domain testing on the MoCA
Attention: Repetition of a 5-digit sequence is a non-demanding and insensitive task for college graduates.
Registration: No score. A 5-word list is read twice, but the number of words recalled on each immediate memory trial does not contribute to the overall score.
Language: Comprehension is not tested.
Memory: A 5-point scale.
Visuospatial: A 4-point spread based on cube copy and clock drawing (the 2½-inch square space for clock drawing is too small to reliably detect spatial neglect or hemianopsia).
Executive skills: A 5-point spread, the largest point allotment to executive functioning of the three second-tier tests. The MoCA uses four executive tasks: shifting sets, phonemic fluency, abstraction, and having the patient tap when the letter "A" is said in a sequence of letters.

The original 7.1 version of the MoCA has an alternative version (7.2) that was created to address test-retest issues. A newer 8.3 version of the MoCA expands the test's original 5-point spread for memory recall to 15 points. It does this by means of a Memory Index Scoring (MIS) system that assigns 3 points to each word recalled without a cue, 2 points to each word recalled with a cue, and 1 point to each word chosen from a multiple-choice list. Julayanont et al.33 showed that the MoCA-MIS improves the sensitivity of the MoCA, allows clinicians to better track progression of memory decline, and predicts conversion of amnestic MCI to AD dementia.

On the SLUMS, scores of 27 to 30 are considered normal, scores of 21 to 26 indicate MCI, and scores of 0 to 20 reflect dementia. The SLUMS combines two separate memory measures (recall of a 5-word list and a narrative passage) and assigns 13 of 30 total points to memory. The greater weighting of memory on the SLUMS compared to the other two second-tier tests increases its sensitivity to amnestic MCI and to those dementia syndromes in which memory loss is the earliest symptom. The ceiling effect (the likelihood of a patient obtaining a perfect score) on the SLUMS is less than on the MMSE.39,40 Our review of the literature found no articles validating metrics on the SLUMS against the domains of attention, language, or visuospatial skills on neuropsychological testing.

Domain testing on the SLUMS
Attention: Contributes only 2 points to the total score.
Registration: No score. Initial recall of a 5-word list is recorded, but the number of words retained does not contribute to the overall score.
Language: Comprehension is allocated only 1 point, while naming and repetition are not tested at all.
Memory: The most extensive testing of the second-tier tests: 13/30 points, compared to 3/30 on the MMSE and 5/30 on the MoCA.
Executive skills: Not tested.
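Returning to the MoCA Memory Index Scoring described above, its arithmetic is simple enough to state as a toy illustration; this is our own sketch with a hypothetical function name, not an official scoring tool.

```python
def moca_mis(free_recall: int, cued_recall: int, multiple_choice: int) -> int:
    """Memory Index Score for the 5-word MoCA list: 3 points per word
    recalled freely, 2 per word recalled with a category cue, and 1 per
    word recognized from a multiple-choice list (maximum 15)."""
    if free_recall + cued_recall + multiple_choice > 5:
        raise ValueError("only 5 words are presented")
    return 3 * free_recall + 2 * cued_recall + 1 * multiple_choice

# Example: 2 words freely recalled, 2 with cues, 1 recognized -> MIS = 11.
print(moca_mis(2, 2, 1))  # 11
```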
As mentioned above, total scores on the MoCA were shown by Durant et al.32 to correlate with the Activities of Daily Living Questionnaire. Apart from Julayanont et al.'s work with index measures that extend the limited metrics of the MoCA subdomains,33 a review of the literature failed to find research that explores causal links between deficits in individual cognitive areas on second-tier screening tests and specific IADL impairments. Because of the variety of factors that can impact IADL functions, there is no reason to expect a consistent, predictive 1:1 causative link between a deficit in a particular cognitive area and impairment in a specific functional area. However, the demonstration of a moderate or severe deficit in one or more individual domains on a validated brief domain-specific test serves several valuable functions: it alerts clinicians to likely areas of functional impairment, helps direct questioning of informed observers regarding specific functional areas, and may inform the provision of services to patients who suffer from different types of dementia and are at different stages of decline. A patient-specific profile such as the one shown in Figure 2 demonstrates which cognitive domains remain intact, which are impaired, and to what degree they are impaired. Armed with this domain-specific information, clinicians can bridge the gap between in-office assessment and real-world functioning by alerting family members that particular activities of everyday functioning are at risk.

CONCLUSIONS

With the enormous worldwide need for cognitive testing, a broad range of physicians and other clinicians have come to rely on rapid second-tier screening tests. One aim of this paper has been to warn physicians of the risks and costs, both for doctors and their patients, of excessive reliance on these tests. A second aim has been to show the value of patient-specific cognitive profiles generated by brief domain-specific tests. The information from these profiles can markedly improve a physician's ability to detect, stage, and treat dementia subtypes. Decades of reliance on second-tier cognitive screening tests have led physicians to miss a number of opportunities to educate patients and caregivers. The information provided by sum score screening tests does not enable physicians to identify subtypes of cognitive decline or adequately monitor their progression. Lack of domain-specific information makes it unlikely that clinicians will explore how subtypes of cognitive decline map onto and contribute to impairment in specific areas of daily functioning. Without an adequate understanding of how individual cognitive domains are impacted by neurocognitive disorders, clinicians remain limited in their ability to deliver patient-specific care. Table 3 in this paper provides examples of the connections between domain impairments and functional IADL areas that are at risk. Domain-specific test information helps physicians play a more proactive role in educating and advising family members throughout the course of a neurocognitive disorder. A physician's time is poorly spent administering tests that only generate a sum score. Instead, we propose that physicians have trained staff routinely administer domain-specific tests that generate differentiated and patient-specific cognitive profiles. Physicians can then use their unique set of skills to determine the meaning of those test results within the context of a patient's medical history, physical examination, functional status, laboratory data, imaging studies, and medication load.
TABLE 2. Points assigned to the four domains most impacted by neurocognitive disorders.

Table 2 shows the point spreads that sum score tests use to capture and describe the degree of impairment in the four domains most impacted by neurocognitive disorders. The weighting of different domains varies considerably from test to test. Limited point spreads fail to capture the degree of impairment in each domain sampled. Because these tests combine scores from different domains to create a sum score, impairments in individual domains can be masked by adequate performance in other areas.
Modeling Method of the Grey GM(1,1) Model with Interval Grey Action Quantity and Its Application

GM(1,1) is a univariate grey prediction model with incomplete structural information, in which the real number form of the simulation or prediction data does not conform to the Nonuniqueness Principle of the grey theoretical solution. In light of the network model of GM(1,1), the connotation of the grey action quantity is systematically analyzed and the interval grey number form of the grey action quantity is restored under uncertain influencing factors. A novel GM(1,1) model is then constructed. The new model has the basic characteristics of a grey model under incomplete information. Moreover, it is fully compatible with the traditional GM(1,1) model. The developed model is applied to natural gas consumption prediction in China, showing that its predicting rationality is much better than that of the traditional GM(1,1) model. It is worth mentioning that, for the first time, the grey property of GM(1,1) has been restored in structure, which is of significance for both academia and industry.

Introduction

In 1982, Professor Deng proposed the GM(1,1) model [1] with a predictive function based on cybernetics. GM(1,1) is a single-variable grey prediction model based on a first-order difference equation [2]. Its greatest feature is that GM(1,1) has only a dependent variable but no independent variables [3,4]. Grey theory holds that the development and evolution of a system are influenced by many uncertain external environments and internal factors (grey causes) [5]. Under such circumstances, it is difficult to establish a definite functional relationship between dependent and independent variables to analyze and predict the future development trend of the system [6,7]. However, under the influence and restriction of many factors, the operation results of the system are determined (white results) [8]. In other words, the results of system operation are the final manifestation of the system under the influence of many factors, which can comprehensively reflect the evolution trend and development law of the system under the combined action of these factors [9,10].

GM(1,1) has many advantages [5,11], such as the small amount of data needed, a simple modeling process, and ease of learning and use. It has been widely used to solve various prediction problems in production and life [12]. With the deepening of application, the theoretical system of GM(1,1) has been enriched and improved, and many research results have been produced. Generally speaking, these achievements mainly cover the following aspects:
(a) Optimization of GM(1,1) parameters: such as initial condition optimization [13,14], background value optimization [15,16], and accumulation order optimization [17][18][19]
(b) Optimization of GM(1,1) structure: realizing the optimization of the model structure from the single exponential form to an intelligent variable structure [20][21][22]
(c) Extension of the GM(1,1) modeling object: expanding modeling objects from real data to grey uncertain data [23][24][25]

Essence and Connotation of Grey Action Quantity

In the univariate grey system, system characteristic variables describe the evolution law of the system, which is the result of the interaction of many complex external factors. They are all real numbers. The influencing factors of system development are the "cause." The result of the change embodied in the system is the "result."
In cybernetics, the former is called the input, and the latter is called the output. In a single-variable grey system, because the independent variables are unknown, the comprehensive effect of many uncertain and complex factors on the development of the system is expressed by the parameter "b." Therefore, the parameter "b" is called the grey action quantity and represents all grey uncertainty information (grey information coverage) [37]. In Figure 1, the input variable "b" represents all the uncertain factors (grey factors) affecting the system's development, and the output variable x^(0)(k) is the characteristic variable (white result) of the system. x^(0)(k) adjusts the size of the parameter "b" through AGO (Accumulation Generation Operator, weakening randomness) and MEAN (mean generation of the consecutive-neighbors sequence, improving smoothness). The main purpose of AGO [35] and MEAN [35] is to weaken the influence of extreme values in the raw data on the input variable "b." In Figure 1, the feedback coefficient "a" is called the development coefficient, and its size and sign reflect the development trend of x^(0)(k). According to the relationship between input, output, and feedback of the system in Figure 1, the equation x^(0)(k) + a z^(1)(k) = b can be obtained, which is the basic form of the classical GM(1,1) model. The parameters "a" and "b" are estimated by the least squares method; they are all real numbers.

Because the grey action quantity "b" represents the influence of all external factors on the development trend of the system, it is essentially uncertain (a grey factor), and its form should be a grey number. However, in the modeling process of GM(1,1), the grey attribute of "b" is not taken into account: it is estimated and modeled as a real number. This obviously does not agree with the actual meaning of "b," which leads to the poor reliability of the prediction results of the GM(1,1) model. The GM(1,1) model is a grey model with incomplete structural information. The uncertainty and complexity of the influencing factors are caused by this incomplete structural information. However, the simulation and prediction results of the current GM(1,1) model are determined as real numbers, which is totally inconsistent with the Nonuniqueness Principle of the grey theory solution. Therefore, it is necessary to restore the "grey" uncertainty characteristics of the grey action quantity "b" and build a new GM(1,1) model on this basis.

New GM(1,1) Model

In this section, the interval grey number form of the grey action quantity "b" will be restored under the uncertainty of the influencing factors. On this basis, a new GM(1,1) model is constructed. Because the grey action quantity is an interval grey number, the simulation and prediction results of the new model are also interval grey numbers, which satisfies the nonuniqueness of GM(1,1) prediction results under uncertain conditions.

Interval Grey Number Form of Grey Action Quantity. In cybernetics, there is a corresponding relationship between each input and output. The grey action quantity covers all unascertained information and has different sizes at different time points (Figure 2). Usually, b_2, b_3, ..., b_n are not equal, that is, b_2 ≠ b_3 ≠ ··· ≠ b_n. According to Theorem 1, the parameters â = (a, b)^T are estimated by the least squares method under the condition of minimizing the sum of squares of the simulation errors of x^(0)(k), k = 2, 3, ..., n. In other words, the parameter "b" in Theorem 2 is an approximate value, which is used to represent all the grey action quantities b_2, b_3, ..., b_n of each input. The information difference between the grey action quantities is thereby completely ignored. Therefore, the simulated and predicted data based on the parameter "b" in Theorem 2 are only an approximate solution. It can be seen that the traditional GM(1,1) model violates the nonuniqueness principle of the solution of grey theory under incomplete information.
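For readers unfamiliar with the classical construction, the following is a minimal sketch of fitting GM(1,1) by least squares and producing restored values. It assumes the standard conventions used in the text (AGO, background values z^(1)(k) as consecutive-neighbor means, whitenization time response); the function names are ours, and this is an illustration rather than the authors' code.

```python
import numpy as np

def gm11_fit(x0):
    """Fit x0(k) + a*z1(k) = b by least squares; return the development
    coefficient a and the grey action quantity b."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # AGO sequence x^(1)
    z1 = 0.5 * (x1[:-1] + x1[1:])           # background values z^(1)(k)
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)
    return a, b

def gm11_predict(x0, a, b, n_ahead=0):
    """Time response of the whitenization equation, restored by inverse AGO:
    x1_hat(k+1) = (x0(1) - b/a) * exp(-a*k) + b/a."""
    n = len(x0)
    k = np.arange(n + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)            # inverse AGO restores x^(0)
    return x0_hat
```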
In this section, according to the relationship between each input and output of the system, the uncertain information contained in the grey action quantity is fully excavated, and the interval grey number form of the grey action quantity is restored. On this basis, the new GM(1,1) model is constructed.

According to equation (3), the grey action quantity for the different values of k (k = 2, 3, ..., n) can be calculated as b_k = x^(0)(k) + a z^(1)(k). Then Bs = {b_2, b_3, ..., b_n} is called the sequence of grey action quantities of GM(1,1). The maximum value b_max and the minimum value b_min of Bs can be obtained as b_max = max{b_k} and b_min = min{b_k}, k = 2, 3, ..., n. After this, the grey action quantity of GM(1,1) can be expressed in interval grey number form, that is, ⊗_b ∈ [b_min, b_max].

According to equation (3), the grey action quantity b_k is positively correlated with x^(0)(k); that is, the bigger b_k is, the bigger x^(0)(k) is. The parameter "b" in GM(1,1) is estimated by the least squares method and is a compromise value between b_min and b_max. On the other hand, under the existing conditions, the most likely value of the interval grey number ⊗_b is neither b_min nor b_max, but "b." The parameter "b" is the real number most likely to represent the whitening value of the interval grey number ⊗_b. According to the definition of the possibility function [35], ⊗_b can be expressed as in Figure 3.

New GM(1,1) Model with Interval Grey Action Quantity

Definition 3. Let X^(0), X^(1), Z^(1), and a be the same as in Definition 1 and Theorem 1. Then P = (a, ⊗_b)^T is called the sequence of grey parameters, a is named the development coefficient, and ⊗_b is called the interval grey action quantity.

Let X^(0), X^(1), Z^(1), and P be the same as in Definitions 1 and 3; then x^(0)(k) + a z^(1)(k) = ⊗_b is called the GM(1,1) model in which the grey action quantity is the interval grey number ⊗_b, GM(1, 1, ⊗_b) for short, and dx^(1)/dt + a x^(1) = ⊗_b is called the whitenization (or image) equation of GM(1, 1, ⊗_b).

Let X^(0), X^(1), Z^(1), and P be the same as in Definitions 1 and 3; then:
(i) The solution of the whitenization equation dx^(1)/dt + a x^(1) = ⊗_b gives the continuous time response.
(ii) The time response sequence of GM(1, 1, ⊗_b) is x̂^(1)(k+1) = (x^(0)(1) − ⊗_b/a) e^(−ak) + ⊗_b/a, k = 1, 2, ..., n.
(iii) The restored values can be given by x̂^(0)(k+1) = x̂^(1)(k+1) − x̂^(1)(k), k = 1, 2, ..., n.

According to Theorem 2, when the grey action quantity of GM(1,1) is expanded from the real number b to the interval grey number ⊗_b, the GM(1,1) model evolves into the new GM(1, 1, ⊗_b) model, and the simulated or predicted results of GM(1, 1, ⊗_b) have the following characteristics:
(1) The simulated or predicted result of GM(1, 1, ⊗_b) is an interval grey number ⊗(k).
(2) The interval grey number ⊗(k) has definite lower and upper bounds.
(3) The possibility function of the interval grey number ⊗(k) is a triangle, and its maximum possible value is mid(k).
The schematic diagram of the interval grey number ⊗(k) and its possibility function is shown in Figure 4. It can be seen that when the grey action quantity "b" is restored to an interval grey number ⊗_b ∈ [b_min, b_max], the simulation and prediction data of the GM(1, 1, ⊗_b) model are also interval grey numbers, which, in the case of an uncertain system, conforms to the nonuniqueness of grey theory solutions.
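Continuing the sketch above (same caveats: our own illustrative code, standard conventions assumed), the interval grey action quantity and the resulting GM(1, 1, ⊗_b) prediction band can be computed by running the predictor at b_min, b, and b_max to obtain lower, most-likely, and upper trajectories.

```python
def gm11_interval(x0, n_ahead=3):
    """GM(1,1,⊗b) sketch: restore the per-point grey action quantities
    b_k = x0(k) + a*z1(k), take the interval [b_min, b_max], and run the
    GM(1,1) predictor at b_min, b, and b_max."""
    x0 = np.asarray(x0, dtype=float)
    a, b = gm11_fit(x0)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[:-1] + x1[1:])
    bk = x0[1:] + a * z1                    # grey action quantity at each k
    b_min, b_max = bk.min(), bk.max()
    lower = gm11_predict(x0, a, b_min, n_ahead)
    mid   = gm11_predict(x0, a, b,     n_ahead)
    upper = gm11_predict(x0, a, b_max, n_ahead)
    return (b_min, b, b_max), (lower, mid, upper)

# Hypothetical usage with a placeholder growth series (not the paper's data):
# (bounds, band) = gm11_interval([100, 112, 126, 141, 159, 178, 200])
```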
Model Application and Rationality Analysis

With the increasing demand for natural gas in China's civil and industrial sectors, China has surpassed Japan to become the world's largest importer of natural gas and also the importer most heavily dependent on it. In 2018 alone, China imported 125.4 billion cubic meters of natural gas, a growth rate of 31.7%. Against the background of the international "take or pay" trade rule for natural gas and the rapid increase in China's demand, the stable and orderly supply of natural gas has become an important factor threatening China's energy security. According to the China Statistical Yearbook (data.stats.gov.cn/easyquery.htm?cn=C01), China's total natural gas consumption (ten thousand tons of standard coal) in 2009-2018 is shown in Table 1.

In order to test the comprehensive performance of the GM(1, 1, ⊗_b) model, it is necessary to test the simulation and prediction results of the model at the same time. In this paper, the first seven data points in Table 1 are used as the raw data to build the GM(1, 1, ⊗_b) model, and the last three data points are used as reserved data to test the prediction performance of the GM(1, 1, ⊗_b) model. The modeling data X^(0) are then as given in Table 1.

Step 1. Generating the new sequences X^(1) and Z^(1): according to Definition 1, X^(1) and Z^(1) are obtained.

Step 3. Constructing the interval grey action quantity: according to Definition 1 and the development coefficient a, with the known data x^(0)(k) and z^(1)(k) (k = 2, 3, ..., 7), the grey action quantity b_k at each time point k can be computed. The interval grey action quantity is then ⊗_b ∈ [b_min, b_max], and the possibility function of ⊗_b ∈ [b_min, b_max] is shown in Figure 5. The relationship between the grey action quantity at different time points and the grey action quantity b of the traditional GM(1,1) model is shown in Figure 6. From Figure 6 we can see that the grey action quantity b of the traditional GM(1,1) model is a compromise value, estimated under the condition of minimizing the sum of squares of the residual errors of the simulated data. This process conceals the differences in the grey action quantity at different points and loses some known information, which is the main reason why the simulation and prediction results of the traditional GM(1,1) model are unstable.

Similarly, when k = 8, 9, 10, the predicted data x^(0)_min(k), x^(0)_max(k), and x^(0)_mid(k) can be computed.

Step 5. Analyzing the rationality of the simulation and prediction data: based on the above calculation results, the original data and the various simulation and prediction data curves are drawn, as shown in Figure 7. According to Figure 7, before analyzing the rationality of the GM(1, 1, ⊗_b) model proposed in this paper, we first analyze the irrationality of the traditional GM(1,1) model:

(a) The overall trend of China's total natural gas consumption is increasing year by year, but not evenly: growth was rapid in 2012-2014 and slowed in 2014-2015. However, the traditional GM(1,1) model is an exponential model with a constant growth rate, so it is difficult for the GM(1,1) model to achieve an unbiased simulation of China's total natural gas consumption. It can be seen from Figure 7 that there are obvious deviations between curves ① and ②.

(b) In the traditional GM(1,1) model, the grey action quantity b represents the influence of all external factors on the development trend of the system. It is essentially uncertain, and its form should be a grey number. However, in the modeling process of GM(1,1), the size of b is estimated by the least squares method and is a real number. This completely ignores the uncertainty characteristics of the grey action quantity and leads to the poor reliability of the simulation and prediction results of the traditional GM(1,1) model (see curves ② and ⑤).
(c) The GM(1,1) model is a grey model with incomplete structural information, which mainly shows up in the uncertainty and complexity of the influencing factors. According to the Nonuniqueness Principle of grey theory, solutions under incomplete and uncertain information are nonunique. Therefore, the simulation and prediction results of GM(1,1) should be nonunique. However, the GM(1,1) model is a time sequence prediction model with a deterministic structure, and its simulation and prediction results are unique (see curves ② and ⑤), which does not conform to this principle. By contrast, the prediction result of GM(1, 1, ⊗_b) is an interval grey number (see curves ⑥ and ⑦), which enables the decision maker to clearly understand the future range of variation of the research object. The prediction result of GM(1,1), on the other hand, is a single real number (see curve ⑤), which usually carries some error; this leads decision makers to question its reliability. In this case, a definite interval is often more valuable than an uncertain real number.

Conclusions

The single-variable grey prediction model represented by GM(1,1) simply uses a real number (the grey action quantity "b") to express the comprehensive effect of many uncertain and complex factors on the system's development, because the factors affecting the system (the independent variables) are unknown. In other words, the grey action quantity "b" represents the influence of all external factors on the system's development trend. Hence, the parameter "b" is essentially uncertain and should take the form of a grey number. However, in the traditional GM(1,1) modeling process, the grey attribute of "b" is not taken into account: it is estimated and modeled as a real number, which is obviously inconsistent with the actual meaning of "b". On the other hand, the GM(1,1) model is a grey model with incomplete structural information (the absence of independent variables). According to the Nonuniqueness Principle of grey theory, the solution under incomplete and uncertain information is not unique. Therefore, the simulation and prediction results of GM(1,1) should be nonunique. However, the current GM(1,1) model is a time sequence prediction model with a deterministic structure, so its simulation and prediction results are unique, which obviously violates the Nonuniqueness Principle of grey theory.

Starting from the origin of the grey prediction model, this paper analyses the defects of the traditional GM(1,1) model. Then, according to the Nonuniqueness Principle and the Minimum Information Principle of grey theory, the interval grey number form of the grey action quantity b is restored and the new GM(1, 1, ⊗_b) model is put forward. The new GM(1, 1, ⊗_b) model is applied to simulate and forecast China's natural gas consumption, and the rationality of the simulation and prediction results of GM(1, 1, ⊗_b) and GM(1,1) is analyzed. The results show that the prediction results of GM(1, 1, ⊗_b) have more reference value. Although this paper only extends the grey action quantity b from a real number to the interval grey number ⊗_b ∈ [b_min, b_max], it is no exaggeration to say that the proposed GM(1, 1, ⊗_b) model gives the classical grey prediction model a truly "grey" attribute. At present, there are many kinds of grey prediction models, and GM(1,1) is only one of the most primitive.
Therefore, how to use GM(1, 1, ⊗_b) as a basis for in-depth research on the "grey" attributes of other grey models, so as to build new grey prediction models with stronger modeling ability, is the next task for our team.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
Nutritional disorders in the proposed 11th revision of the International Classification of Diseases: feedback from a survey of stakeholders

Objective: To receive stakeholders' feedback on the new structure of the Nutritional Disorders section of the International Classification of Diseases, 11th Revision (ICD-11).
Design: A twenty-five-item survey questionnaire on the ICD-11 Nutritional Disorders section was developed and sent out via email. The international online survey investigated participants' current use of the ICD and their opinion of the new structure being proposed for ICD-11. The LimeSurvey® software was used to conduct the survey. Summary statistical analyses were performed using the survey tool.
Setting: Worldwide.
Subjects: Individuals subscribed to the mailing list of the WHO Department of Nutrition for Health and Development.
Results: Seventy-two participants currently using the ICD, mainly nutritionists, public health professionals and medical doctors, completed the questionnaire (response rate 16 %). Most participants (n 69) reported the proposed new structure will be a useful improvement over ICD-10, and 78 % (n 56) considered that all nutritional disorders encountered in their work were represented. Overall, participants expressed satisfaction with the comprehensiveness, clarity and life cycle approach. Areas identified for improvement before ICD-11 is finalized included adding some missing disorders, more clarity on the transition to new terminology, links to other classifications and actions to address the disorders.
Conclusions: The Nutritional Disorders section being proposed for ICD-11 offers significant improvements compared with ICD-10. The new taxonomy and inclusion of currently missing entities is expected to enhance the classification and health-care professionals' accurate coding of the full range of nutritional disorders throughout the life cycle.

The International Classification of Diseases (ICD) is the standard diagnostic tool for epidemiology, health management and clinical purposes. This includes the analysis of the general health situation of population groups. Most countries use the ICD to report mortality data, a primary indicator of health status, as well as to monitor the incidence and prevalence of diseases and other health problems, providing a picture of the general health situation of countries and populations. The ICD is used by physicians, nurses, other providers, researchers, health information managers and coders, health information technology workers, policy makers, insurers and patient organizations to classify diseases and other health problems recorded on many types of health and vital records, including death certificates and health records. In addition to enabling the storage and retrieval of diagnostic information for clinical, epidemiological and quality purposes, these records also provide the basis for the compilation of national mortality and morbidity statistics. Notably, the ICD is used for reimbursement and resource-allocation decision making by countries(1). Since its 6th revision in 1948, the WHO has undertaken periodic revisions of the ICD. Clinical modifications of the ICD have been developed and implemented to accommodate country-specific needs for classifying diagnoses in coded health data(2-6). It is more than 20 years since the Forty-third World Health Assembly (May 1990) endorsed the tenth ICD revision (ICD-10) and WHO Member States adopted it for clinical use.
WHO is currently working on the 11th revision, which the World Health Assembly is expected to approve in May 2018. The rationale for the revision is to reflect progress in the understanding of health and disease, improve its clinical utility and adapt the classification to advances in information technology (7). The main changes proposed include many new elements, such as: new chapters (e.g. diseases of the blood and blood-forming organs, disorders of the immune system, conditions related to sexual health, sleep-wake disorders, traditional medicine); restructuring of existing chapters; a content model (e.g. all conditions/disorders/diseases will include short and long definitions); a new coding scheme; new terminology; and new concepts (e.g. classification hierarchy).

Major improvements are anticipated from a nutrition perspective. The 11th revision will include a Nutritional Disorders (ND) section within the 'Endocrine, nutritional and metabolic diseases' chapter (Chapter 6), which has been developed by a Topic Advisory Group for Nutrition. The section will include the full range of nutritional disorders, from undernutrition to overweight and obesity, throughout the life cycle. A detailed description of the various enhancements in structure and content will be reported elsewhere.

To foster public awareness and promotion of ICD-11 and to ensure transparency of the revision process, WHO has established an Internet-based editing platform (http://apps.who.int/classifications/icd11/browse/l-m/en) which enables interested parties to participate in the revision process with proposals for enhancing the content and structure (8). A total of 5202 proposals had been received by 31 December 2015 for the twenty-six chapters, of which 154 corresponded to Chapter 6 ('Endocrine, nutritional and metabolic diseases'). Of these 154, less than one-third corresponded to the ND section. Evaluation studies are also underway to field-test the current ICD-11 draft and assess how it improves the quality of the data. As part of this process, WHO's Department of Nutrition for Health and Development undertook a survey to seek stakeholders' opinions on the new structure of the ND section. The aim was to use feedback to enhance this section of ICD-11 before its finalization.

Methods

A questionnaire on the ICD ND section was developed centrally and sent to subscribers to the WHO Department of Nutrition for Health and Development's global mailing list. To ensure clarity throughout the survey, questions were kept short and simple; they included a combination of single-choice, multiple-choice and open-ended questions. The single- and multiple-choice questions had pre-coded answer options. The questionnaire (see online supplementary material) included instructions at the beginning of each section. In addition, to enable participants to review and compare approaches, a link to the online ICD-11 Beta Draft (8) was provided for the last section (feedback on the new structure of the ICD-11 ND section), together with two documents presenting the current structure (ICD-10; Table 1) and the proposed new structure (ICD-11; Table 2) of the ND section.

Participants were offered online access to the survey via email. Once the survey was opened, respondents could stop and save answers and continue responding later at their convenience. No hard copies were distributed. The survey was conducted over a period of 34 d between 22 June and 25 July 2015.
Information was collected using twenty-five questions (see online supplementary material) covering the following areas: (i) information about the participant (seven questions); (ii) current use of the ICD (seven questions); and (iii) feedback on the new structure of the ICD-11 ND section (eleven questions). In the first section, participants were asked about their profession and specialization, the type of organization for which they work, whether it is in the private or public sector, and the country where it is located. The section on current use of the ICD sought to ascertain which version participants are using (ICD-9, ICD-10 or other), how familiar they are with the coding system and how frequently they use the ICD. Participants were asked their opinions about the usefulness of the ICD-9/ICD-10 classification systems as tools for coding nutritional disorders, and the limitations and challenges encountered in using them. Questions in the third section focused on the new ICD-11 structure of the ND section. Participants were asked their opinion about the level of detail and whether the new ND section covers all nutritional disorders encountered in their work. Additionally, open-ended questions attempted to identify specific challenges or matters of concern in the ICD-11 ND section for coding nutritional disorders.

LimeSurvey®, an open-source software tool used by WHO to conduct online surveys, was used to conduct the survey. Summary statistical analyses were performed using the survey tool and Microsoft® Excel.

Results

Figure 1 presents the survey flowchart. A total of 3181 questionnaires were successfully delivered by email. Of these, 500 participants accessed the survey and 293 submitted a complete questionnaire. Among the 293 participants completing the survey, seventy-two reported using the ICD classification in their current practice while 221 did not. As the survey was designed to obtain feedback from participants familiar with the ICD, the results presented below concern the seventy-two ICD users who returned completed questionnaires.

Respondents used the ICD mostly for clinical purposes (e.g. many countries require ICD codes to make any drug prescriptions for treatments covered by the public health system), teaching purposes (e.g. use of updated disease terms and definitions), financing purposes (e.g. codification of diagnostic and treatment procedure expenditures in the context of hospitalizations) and research projects (e.g. codification of causes of death and morbid conditions).

Survey respondents came from twenty-two countries, with the largest number from the Region of the Americas (31%) followed by the South-East Asia Region (19%). Participants from the four remaining WHO regions (African, European, Eastern Mediterranean and Western Pacific) had similar response levels. The three most common occupations listed by participants were nutritionist (31%), public health professional (17%) and medical doctor (13%). In medicine, general practice, paediatrics, nutrition and internal medicine were the top four fields of specialization (30%, 26%, 13% and 13%, respectively). The most common roles included researchers, professors and project coordinators, followed by programme leaders, health-care providers/clinicians and senior managers. The majority of respondents (73%) used ICD-10 exclusively, 17% were still using ICD-9 and 10% reported using both versions.
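As an arithmetic cross-check of this funnel, a minimal Python sketch (counts copied from the text; reading the reported 16% response rate as accessed/delivered is our interpretation, since the paper does not define the denominator explicitly):

# Survey funnel figures as reported in the text
delivered = 3181   # questionnaires successfully delivered by email
accessed = 500     # participants who accessed the survey
completed = 293    # complete questionnaires submitted
icd_users = 72     # completers using the ICD in current practice

def pct(part, whole):
    """Percentage of part in whole, rounded to the nearest integer."""
    return round(100 * part / whole)

print(pct(accessed, delivered))   # 16 -> matches the reported 16% response rate
print(pct(completed, accessed))   # 59 -> completion among those who accessed
print(pct(icd_users, completed))  # 25 -> ICD users among completers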
The information obtained on frequency of use showed that almost half of the participants used the ICD classification system at least three times per year (46%), 38% at least three times per month, and 16% at least three times per week. Among the limitations participants reported when coding nutritional disorders with ICD-9 and/or ICD-10, the problems most commonly listed were 'unclear/confusing grouping', 'content not up to date', 'missing entities', 'unclear, confusing structure' and 'entities not consistent' (Table 3). The main concern expressed by respondents was that ICD-10 was inadequate in terms of covering nutritional condition diagnoses.

Overall, 25% of respondents strongly agreed, and 44% agreed, that the ICD-11 ND section provided a meaningful way to classify nutritional disorders. Only three respondents (4%) disagreed and nineteen (26%) were neutral. To the question 'Is the level of detail of the new ICD-11 structure for ND appropriate?', 74% answered 'just right', 8% 'too detailed' and 18% 'not enough details'.

Figure 2 presents the nutritional conditions in the new structure of ICD-11 most frequently used by respondents. About 40% of respondents used, at least three times per week, disorders under the groupings 'Undernutrition based on anthropometric and clinical criteria in infants, children and adolescents', 'Vitamin deficiencies', 'Mineral deficiencies', 'Overweight and obesity in infants, children and adolescents' and 'Overweight and obesity in adults'. About the same proportion of participants reported occasionally using 'Undernutrition based on anthropometric and clinical criteria in adults' and 'Mineral deficiencies' (at least three times per month). 'Vitamin excesses' and 'Mineral excesses' were the least frequently used groups of nutritional disorders, with 40% and 38% of participants, respectively, reporting never using them.

Importantly, 78% (n 56) of participants reported that all nutritional disorders were represented in ICD-11 and that their area of specialty was adequately covered. Comments provided by participants for improving the classification included the need for actions to deal with the disease/condition, inclusion of missing disorders (i.e. iodine excess, re-feeding syndrome), more clarity on the transition from previously used terms to new terminology (e.g. kwashiorkor to severe acute malnutrition), recommendations for links to other classifications such as the ICF (International Classification of Functioning, Disability and Health), and the need for health-care providers/clinicians/coders to be trained in the use and documentation of the 11th revision once it is released.

Overall, 96% (n 69) of participants reported that the ICD-11 ND section will be a useful improvement over ICD-10; they expressed appreciation for the new structure, mentioning that it is more comprehensive and specific, includes the main nutritional conditions that are missing in ICD-9 or ICD-10 (e.g. childhood overweight and obesity, stunting, moderate and severe acute malnutrition) and represents an upgrade of the terminologies used. Other positive comments referred to the classification covering population subgroups (i.e. infants, children, adolescents, adults) and displaying information in a clear and precise format.
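The percentages above map back onto whole-respondent head counts for the seventy-two ICD users; a small illustrative check in Python (the counts n 3, n 19, n 56 and n 69 are stated in the text, the remaining counts are derived by rounding):

# Mapping reported percentages back to head counts among the n = 72 ICD users
n = 72
print(round(100 * 56 / n))  # 78 -> 'all disorders represented' (n 56)
print(round(100 * 69 / n))  # 96 -> 'useful improvement over ICD-10' (n 69)

# Agreement that the ND section classifies disorders meaningfully
shares = {"strongly agree": 0.25, "agree": 0.44, "neutral": 0.26, "disagree": 0.04}
counts = {answer: round(p * n) for answer, p in shares.items()}
print(counts)                # {'strongly agree': 18, 'agree': 32, 'neutral': 19, 'disagree': 3}
print(sum(counts.values()))  # 72 -> the rounded counts add back up to n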
Discussion

Feedback from stakeholders around the world suggests that the new structure of the ICD-11 ND section provides a useful improvement over previous versions (ICD-9/ICD-10). It also identifies areas needing improvement before ICD-11 is finalized and adopted. These areas relate mostly to content (adding short and long definitions of conditions), adding missing disorders (e.g. iodine excess, re-feeding syndrome), providing more clarity on the transition to new terminology (e.g. kwashiorkor to severe acute malnutrition), recommending links to other classifications of functioning, disability and health, and providing actions to address the disorders.

To the best of our knowledge, this is the first stakeholder survey on the ND section of ICD-11. Other Topic Advisory Groups, such as the Quality and Safety TAG, have performed similar surveys investigating stakeholders' views on how to improve the quality and safety applications of ICD-11. Consistent with our results, issues identified by stakeholders when using ICD-9/ICD-10 included missing codes/information/concepts, insufficient updating for current medical knowledge and unclear clustering of categories (9). Concerning mental health, Tyrer et al. reported that respondents in their study emphasized that ICD-11 was a more useful tool than ICD-10 in clinical practice when coding personality disorders. Similar to the positive feedback received in our survey on the proposed ND section in ICD-11 (e.g. on the enhanced coverage of population subgroups), improvements mentioned included wider age ranges and an expanded section on pathology (10). Similarly, in a survey conducted by Demoly et al., the majority of respondents considered the ICD-10 classification inappropriate in clinical practice for coding hypersensitivity disorders. The ICD-10 classification was described as unclear, insufficient and inadequate. Missing and inaccurate entities limited coding of allergic diseases (11).

Our study has a number of limitations. First, for various reasons (email address could not be found, email system processing problems, recipient's mailbox was full, problem occurring during delivery, message rejected, permission or security issue), 19% of email invitations could not be delivered to individual subscribers to the WHO Department of Nutrition for Health and Development's mailing list, thus excluding them from the survey. Second, sending the questionnaire by email limited the sample to interested parties with access to a computer. Lastly, the survey automatically excluded experts who did not use the classification in their current activities even if, retrospectively, we realize that their feedback might have been useful for the evaluation. It was nevertheless possible to compensate for this last limitation through the public revision process WHO established via the Internet-based ICD-11 platform. All interested parties could participate by submitting proposals for enhancing content and structure (8). By the end of April 2016, forty-three proposals had been received for the ND section of Chapter 6 ('Endocrine, nutritional and metabolic diseases'). Of these, thirty-nine were related to content enhancement (i.e. adding definitions, refining titles, adding/deleting synonyms), one concerned deleting entities (i.e. 'certain specified deficiencies of B group vitamins') and three involved hierarchical changes: one to designate the neurological chapter as the primary parent for the 'Nutritional and toxic disorders of the nervous system', since all diseases included there are neurological entities; and two to make hierarchical changes in the anthropometric structure (a proposal that had already been captured by the survey being reported in the current paper) and the neonatal hypocalcaemia entity.
The public revision process is still ongoing and anyone is welcome to contribute to it.

Notwithstanding the above-mentioned limitations, our results underscore the need for the ICD-11 ND section. Stakeholders expressed appreciation for the new structure. Content enhancements (e.g. considerable expansion of the overweight, obesity and micronutrient excesses categories) are an important step for coding individual patients, collecting and comparing data for global overweight and obesity statistics, and thus for allocating resources and implementing action to address the global burden of overweight, obesity and related health problems. Similarly, major improvements in the content and level of detail of the category 'undernutrition' and its sub-categories (e.g. moderate/severe underweight, moderate/severe wasting, moderate/severe stunting, and moderate/severe acute malnutrition (MAM/SAM) in infants, children and adolescents) will permit differentiation between the many forms of undernutrition and will allow correct coding of nutritional disorders in different age groups, which is not possible with ICD-10. Thus, future data collection and monitoring will promote better targeting of interventions aimed at preventing and treating childhood undernutrition. Moreover, the classification's availability on an electronic platform will greatly facilitate its application.

Conclusion

There are noticeable differences between ICD-10 and the proposed ICD-11 in the taxonomy of nutritional disorders. The 11th revision is being upgraded to include the full range of nutritional disorders throughout the life cycle, many of which are missing in ICD-10, including undernutrition-related entities based on anthropometric and clinical criteria, as well as overweight and obesity disorders. Our study documents stakeholders' overall satisfaction with the comprehensiveness, clarity and coverage of population subgroups of the ND section being proposed for ICD-11. It also identifies areas for improvement before ICD-11 is finalized and adopted in 2018. The new ND section should be useful to a wide range of health professionals, from nutritionists and researchers to health-care providers and coders. The improved tool is expected to enhance the classification and accurate coding of the full range of nutritional disorders and support clinical care and the attainment of public health objectives for years to come.
Proliferation and differentiation of rat adipose-derived stem cells are regulated by yes-associated protein

Adipose-derived stem cell (ASC)-based therapy is a promising treatment strategy for diseases of the musculoskeletal system, as ASCs have the potential to differentiate into numerous cell lineages. However, this field has only recently been explored; therefore, a considerable amount of work is required to determine the therapeutic potential of ASCs. The mechanisms and factors associated with ASC proliferation and differentiation remain to be elucidated. In order to determine the biological properties and subsequent clinical applications of ASCs, these molecular mechanisms must be investigated. The transcriptional co-activator yes-associated protein (YAP), which is a major target of the Hippo signaling pathway, has been reported to serve a crucial role in stem cell proliferation and differentiation. To the best of our knowledge, the role of YAP in the proliferation and differentiation of rat ASCs (rASCs) has not yet been reported. The results of an immunofluorescence analysis revealed that the subcellular distribution of YAP in rASCs was regulated by cell density and the actin cytoskeleton. Furthermore, western blot analysis demonstrated that YAP protein expression in rASCs was regulated by lysophosphatidic acid and the actin cytoskeleton. In addition, YAP activation promoted the proliferation of rASCs, whereas YAP inactivation promoted osteogenesis and inhibited adipogenesis of rASCs. In conclusion, these findings demonstrated that YAP may regulate the proliferation and differentiation of rASCs. Targeted modulation of YAP in rASCs may therefore increase the therapeutic effect of rASCs in musculoskeletal diseases.

Introduction

Mesenchymal stem cells (MSCs) are adult stem cells that have the ability to self-renew and differentiate into various mesodermal cells, including osteoblasts, chondrocytes and adipocytes (1). Populations of MSCs are present in almost every tissue in the body, including bone marrow, adipose tissue, dental pulp and synovial tissue (2-4). Adipose-derived stem cells (ASCs) are MSCs that are present within the adipose tissue, as first described by Zuk et al (5). It has been reported that ASCs are easier to isolate and acquire compared with other resident stem cell populations, and the cell yield is much higher than that of bone marrow-derived MSCs (BMSCs) (6). ASCs have garnered attention in the scientific and medical fields due to their potential clinical applications (7). Numerous ASC-based clinical trials have been performed over recent years, and it has been suggested that ASCs possess therapeutic potential for the future treatment of various diseases (7). To fully exploit the therapeutic value of ASCs in clinical application, an in-depth understanding of the molecular pathways by which ASCs proliferate and differentiate is essential.

Yes-associated protein (YAP; gene symbol, YAP1) is a key transcriptional co-factor that is regulated by the Hippo signaling pathway (8). YAP acts as a transcriptional co-activator of the TEA domain-containing sequence-specific transcription factor, which regulates the expression of several 'stemness' genes (9). Core components of the Hippo pathway include the kinases MST and LATS (10). Upon activation of the Hippo pathway, MST phosphorylates and activates LATS, which subsequently phosphorylates and inhibits YAP. YAP phosphorylation leads to cytoplasmic retention and degradation by proteasomes (10).
Conversely, inhibition of the Hippo pathway results in YAP nuclear retention and activation of transcriptional activity (11). It has previously been reported that sustained YAP expression is associated with liver enlargement and eventual tumorigenesis, thus suggesting an important role for YAP in cell proliferation and tumor formation (12). The upstream signaling mechanisms that regulate the Hippo signaling pathway remain elusive. A previous study demonstrated that the mechanical properties of the extracellular matrix (ECM), along with cell-matrix attachment, may regulate the localization and activity of YAP via a process involving the actin cytoskeleton (13). Furthermore, G protein-coupled receptors and their agonists, including lysophosphatidic acid (LPA) and sphingosine-1-phosphate (S1P), have been revealed to regulate YAP activity via modulating the actin cytoskeleton (14).

YAP and its paralog, transcriptional co-activator with the PDZ binding motif (TAZ), are the main downstream regulators of the Hippo signaling pathway (15). TAZ has been demonstrated to co-activate genes dependent on Runt-related transcription factor 2 (RUNX2), which is the transcriptional regulator of the osteoblastic lineage, while suppressing the transcription of genes dependent on peroxisome proliferator-activated receptor γ (PPARγ), which is the master regulator of the adipogenic lineage, in MSCs. Our previous study demonstrated that the phytomolecule icariin may promote the proliferation and osteogenic differentiation of rASCs via the Ras homolog gene family, member A-TAZ signaling pathway (16). YAP and TAZ are often considered to be orthologs of Drosophila Yorkie; however, it has been reported that the differentiation-regulating functions of YAP are not the same as those of TAZ (17). The present study aimed to evaluate the upstream factors that affect YAP expression and subcellular distribution in rASCs, as well as the role of YAP in rASC proliferation and osteogenic/adipogenic differentiation.

Materials and methods

Isolation and culture of rASCs. Male Sprague-Dawley rats (age, 6-8 weeks; weight, 200-250 g; n=24) used in the present study were purchased from the Laboratory Animal Center of the Tongji Medical College (Wuhan, China). All rats were kept in ventilated filter-top cages under standard laboratory conditions: 12-h light/dark cycle and a constant temperature of 24°C with 60% humidity. Rats were given ad libitum access to conventional rodent chow and water. All experimental animals were sacrificed via cervical dislocation. Prior to cervical dislocation, rats were anesthetized by intraperitoneal injection of 2% pentobarbital sodium (35 mg/kg body weight). Animal death was confirmed by monitoring the heartbeat and body temperature. The present study was approved by the Experimental Animal Ethics Committee of Tongji Medical College. Briefly, rASCs were isolated, cultured and characterized as described in our previous study (16). Adipose tissues from the epididymis of male Sprague-Dawley rats were harvested, finely minced and digested with 0.1% type I collagenase (Wuhan Boster Biological Technology, Ltd., Wuhan, China) at a 1:1 volume ratio at 37°C for 1 h, followed by centrifugation at 100 × g for 10 min at 37°C.
The supernatant layer was discarded and the remaining cells were collected and resuspended in Dulbecco's modified Eagle's medium/Ham's F-12 (DMEM/F12; HyClone; GE Healthcare Life Sciences, Logan, UT, USA) supplemented with 10% fetal bovine serum (FBS; Gibco; Thermo Fisher Scientific, Inc., Waltham, MA, USA). Undigested debris was removed by filtering through a sterile 75-mm nylon mesh. Finally, cells were cultured with DMEM/F12 supplemented with 10% FBS and 1% penicillin-streptomycin at 37°C in an atmosphere containing 5% CO₂. Cells at passage 3 were used for subsequent experiments.

Cell treatment. rASCs at passage 3 were seeded at varying densities (1,500 or 12,000 cells/cm²) and cultured for 2 days in an atmosphere containing 5% CO₂ at 37°C with DMEM/F12 supplemented with 10% FBS. Cells were harvested and the gene expression of CTGF and Ankrd1 was evaluated via reverse transcription-quantitative polymerase chain reaction (RT-qPCR). To evaluate the role of the actin cytoskeleton in YAP activity, cells were seeded at a density of 1,500 cells/cm² and cultured for 2 days in an atmosphere containing 5% CO₂ at 37°C with DMEM/F12 supplemented with 10% FBS. After treatment with 1 µg/ml latrunculin B for 30 min, cells were harvested and the expression levels of CTGF and Ankrd1 were evaluated by RT-qPCR. rASCs were seeded in 6-cm dishes at a density of 1,500 cells/cm² and incubated in serum-free DMEM/F12 medium for 24 h at 37°C prior to treatment with various reagents. LPA at 10 µmol/l and S1P at 10 µmol/l were used in the present study. Latrunculin B (LatB; 1 µg/ml) was used to disrupt F-actin fiber organization. All reagents were purchased from Sigma-Aldrich (Merck KGaA).

Immunofluorescence staining. To evaluate the effects of contact-inhibited proliferation on YAP subcellular localization, rASCs were plated on coverslips at varying densities (1,500 or 12,000 cells/cm²) and cultured for 2 days in an atmosphere containing 5% CO₂ at 37°C with DMEM/F12 supplemented with 10% FBS. To evaluate the effects of the actin cytoskeleton on YAP subcellular localization, cells were seeded on coverslips at a density of 1,500 cells/cm² with DMEM/F12 supplemented with 10% FBS and treated with 1 µg/ml LatB at 37°C for 30 min. Cells were subsequently fixed with 4% paraformaldehyde for 15 min and permeabilized with 0.1% Triton X-100 for 10 min at room temperature. After blocking with 5% bovine serum albumin (BSA; Wuhan Boster Biological Technology, Ltd.) for 1 h at 37°C, slides were incubated with a YAP primary antibody (cat. no. 4912; Cell Signaling Technology, Inc., Danvers, MA, USA) diluted with 5% BSA (dilution 1:200) at 4°C overnight. After washing three times with PBS, slides were incubated with Alexa Fluor® 594-labeled donkey anti-rabbit secondary antibodies (cat. no. R37119; Invitrogen; Thermo Fisher Scientific, Inc.; dilution 1:1,000) for 1 h at 37°C. For staining of F-actin, cells were incubated with fluorescein isothiocyanate-conjugated phalloidin (cat. no. P5282; Sigma-Aldrich; Merck KGaA) for 1 h after blocking at 37°C. After washing with PBS, cell nuclei were visualized with DAPI for 5 min. To reduce background fluorescence, slides were washed with PBS. Finally, cells were observed under a fluorescence microscope (FV500; Olympus Corporation, Tokyo, Japan).
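The relative CTGF and Ankrd1 levels measured by RT-qPCR under Cell treatment are conventionally quantified with the 2^(-ΔΔCt) (Livak) method; a minimal Python sketch is given below. The Ct values are invented for illustration, and the use of a housekeeping reference gene is an assumption, as the source does not name one.

# Illustrative 2^(-ddCt) calculation for RT-qPCR readouts such as CTGF/Ankrd1.
# All Ct values below are hypothetical; the reference gene is an assumption.

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target gene vs. a control condition (Livak method)."""
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-dd_ct)

# Hypothetical example: CTGF in high-density vs. low-density rASCs
fc = fold_change(ct_target=26.0, ct_ref=18.0, ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
print(fc)  # 0.25, i.e. a four-fold reduction, matching the direction reported in Fig. 3A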
Cell Counting Kit-8 (CCK-8) cell proliferation assay. Cells were cultured under various conditions and subsequently seeded into 96-well plates at a density of 2×10³ cells/well. Cells were divided into three groups: serum starvation (SS) group (DMEM/F12 medium only), SS + LPA group (DMEM/F12 medium supplemented with 10 µmol/l LPA) and SS + short hairpin (sh)RNA-targeting YAP (shYAP) + LPA group (cells transduced with shYAP lentivirus and cultured in DMEM/F12 medium supplemented with 10 µmol/l LPA). Cells were cultured for 1, 2 or 3 days at 37°C and each sample was assessed for cellular proliferation. A total of 10 µl CCK-8 solution (Dojindo Molecular Technologies, Inc., Kumamoto, Japan) was added to each well and incubated at 37°C in the dark for 2 h. Absorbance was read on a microplate spectrophotometer (Bio-Rad Laboratories, Inc.) at a wavelength of 490 nm.

Cell cycle distribution analysis. rASCs were seeded in 6-well plates at a density of 1,500 cells/cm² with serum-free DMEM/F12 medium at 37°C. Cell cycle distribution and DNA content were analyzed by flow cytometry. Cells were divided into the following groups: SS group, SS + LPA group, SS + S1P group (DMEM/F12 medium supplemented with 10 µmol/l S1P), SS + shYAP + LPA group and SS + shYAP + S1P group (cells transduced with shYAP lentivirus cultured in DMEM/F12 medium supplemented with 10 µmol/l S1P). After 3 days of treatment, cultured cells were rinsed with PBS twice and fixed with 70% cold ethanol at -20°C for 2 h. Fixed cells were treated with RNase A (50 µg/ml; cat. no. R4875; Sigma-Aldrich; Merck KGaA) at 37°C for 30 min and then stained with propidium iodide (PI; 65 µg/ml; Sigma-Aldrich; Merck KGaA) in the dark for 30 min at 37°C. PI fluorescence of individual nuclei was measured using a flow cytometer (FACSort; BD Biosciences, Franklin Lakes, NJ, USA).

Alkaline phosphatase (ALP) staining. rASCs were seeded in 6-well plates at a density of 1×10⁴ cells/cm² and cultured at 37°C in an atmosphere containing 5% CO₂ in osteogenic medium. Cells were divided into four groups: Control group (DMEM/F12 supplemented with 10% FBS), Osteo group (cells cultured in osteogenic differentiation medium), Osteo + control shRNA (shControl) group (cells transduced with control lentivirus cultured in osteogenic differentiation medium) and Osteo + shYAP group (cells transduced with shYAP lentivirus cultured in osteogenic differentiation medium). Following 5 days of culture, cells were rinsed with PBS three times and fixed with 4% paraformaldehyde for 15 min at 4°C. Cells were rinsed again with deionized water and stained with naphthol AS-MX phosphate and fast blue RR salt (Sigma-Aldrich; Merck KGaA) for 30 min at 37°C in the dark. Excess dye was removed with PBS and photomicrographs were captured under a light microscope.

Oil Red O staining. rASCs were seeded in 6-well plates at a density of 1×10⁴ cells/cm² and cultured at 37°C in an atmosphere containing 5% CO₂ in adipogenic medium. Cells were divided into four groups: Control group, Adipo group (cells cultured in adipogenic differentiation medium), Adipo + shControl group (cells transduced with control lentivirus cultured in adipogenic differentiation medium) and Adipo + shYAP group (cells transduced with shYAP lentivirus cultured in adipogenic differentiation medium). Following 5 days of culture, cells were rinsed with PBS three times and fixed with 4% paraformaldehyde for 15 min at 4°C. Cells were rinsed again with deionized water and stained with Oil Red O for 30 min at 37°C. Excess dye was removed with PBS and photomicrographs were captured under a light microscope.
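As context for the PI-based cell cycle analysis above, a toy Python sketch of converting a DNA-content histogram into phase fractions; the simulated signal and fixed gates are purely illustrative, since real analyses fit peak models (e.g. Dean-Jett-Fox) rather than thresholds:

import numpy as np

rng = np.random.default_rng(0)

# Simulated PI fluorescence, normalized so the G0/G1 (2N) peak sits at 1.0
# and the G2/M (4N) peak at 2.0; S-phase nuclei fall in between.
signal = np.concatenate([
    rng.normal(1.0, 0.05, 6000),   # G0/G1 nuclei
    rng.uniform(1.1, 1.9, 2500),   # S-phase nuclei
    rng.normal(2.0, 0.08, 1500),   # G2/M nuclei
])

# Fixed illustrative gates on DNA content
g0g1 = np.mean(signal < 1.1)
s = np.mean((signal >= 1.1) & (signal < 1.9))
g2m = np.mean(signal >= 1.9)
print(f"G0/G1: {g0g1:.0%}  S: {s:.0%}  G2/M: {g2m:.0%}")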
Transduction of rASCs with lentiviral vectors. rASCs at the third passage were seeded at a density of 3×10⁴ cells/ml in 24-well tissue culture plates and cultured until cells reached 30% confluence. Following removal of the cell culture medium, rASCs were incubated with DMEM/F12 containing lentivirus (shYAP or shControl) and 5 µg/ml polybrene (Sigma-Aldrich; Merck KGaA) for 5 h at 37°C. Multiplicities of infection (MOI) in the range of 10 to 200 were calculated. Following lentiviral vector infection, the medium was replaced with normal growth medium comprised of DMEM/F12 and 10% FBS. One day post-transduction, reporter gene expression [green fluorescent protein (GFP)] was examined with fluorescence microscopy. rASCs were transduced with lentiviruses at an MOI of 100 and passaged for further study.

Statistical analysis. All data are presented as the means ± standard deviation. SPSS 13.0 was used for general statistical analysis (SPSS, Inc., Chicago, IL, USA). Significant differences in numerical data between two groups were determined using a Student's t-test. For multiple group comparisons, one-way analysis of variance (ANOVA) was used with the Bonferroni post-hoc test. P<0.05 was considered to indicate a statistically significant difference.

Results

YAP subcellular distribution and expression in rASCs is regulated by cell density and the actin cytoskeleton. YAP is a transcriptional co-activator, which has been reported to serve critical roles in cell proliferation and differentiation; however, the functions of YAP and its upstream regulating factors in rASCs have yet to be elucidated. The present study demonstrated that the subcellular distribution of YAP in rASCs was regulated by cell density and F-actin integrity. rASCs were seeded at varying densities in tissue culture plates (1,500 and 12,000 cells/cm²). When cultured at 1,500 cells/cm² (low density), cells showed no contact with neighboring cells (Fig. 1A). When cultured at a higher density of 12,000 cells/cm² (high density), a marked amount of contact was observed with neighboring cells (Fig. 1B). In addition, in the low density group, YAP expression was mainly localized to the nuclei of rASCs (Fig. 1A). Conversely, nuclear YAP expression was markedly decreased when rASCs were cultured at a higher density (Fig. 1B). In addition, the effects of the actin cytoskeleton on YAP subcellular distribution were investigated. The results demonstrated that under normal conditions, YAP was mainly localized in the nuclei of rASCs (Fig. 2A); however, disruption of the actin cytoskeleton with 1 µg/ml LatB for 30 min induced marked YAP cytoplasmic translocation (Fig. 2B). RT-qPCR was conducted to analyze the expression levels of the YAP target genes, CTGF and Ankrd1. The results indicated that the mRNA expression levels of CTGF and Ankrd1 were significantly decreased in the high density group compared with in the low density group, which was consistent with decreased YAP nuclear localization (Fig. 3A). Furthermore, CTGF and Ankrd1 mRNA expression was markedly inhibited by LatB treatment compared with in the control group, thus suggesting that the integrity of the actin cytoskeleton may be closely associated with YAP activity (Fig. 3B).
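The density and LatB contrasts above follow the recipe given under Statistical analysis, i.e. one-way ANOVA followed by Bonferroni-corrected pairwise tests; a minimal illustrative Python sketch, assuming scipy is available and using made-up expression values rather than data from the study:

from itertools import combinations
from scipy import stats

# Hypothetical normalized CTGF expression per group (not data from the study)
groups = {
    "low density":  [1.00, 0.92, 1.08, 0.97],
    "high density": [0.45, 0.52, 0.40, 0.48],
    "LatB":         [0.30, 0.36, 0.28, 0.33],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2e}")

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-corrected significance threshold
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p:.3e} ({'significant' if p < alpha else 'n.s.'})")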
LPA promotes YAP protein expression in rASCs. It has previously been reported that LPA activates YAP expression in epithelial cells, such as MCF10A cells (14). However, to the best of our knowledge, the effects of LPA on YAP protein expression in rASCs have yet to be determined. The present study demonstrated that SS of rASCs for 24 h markedly decreased YAP protein expression (Fig. 4). However, when cells were treated with LPA (10 µmol/l) for 1 h after 24 h SS, YAP protein expression was significantly increased (Fig. 4). When cells were treated with the actin cytoskeleton-disrupting agent LatB for 1 h after 24 h SS treatment, YAP protein expression was further decreased compared with in the SS group (Fig. 4). Notably, the LPA-induced upregulation of YAP protein expression in rASCs was inhibited in the SS + LPA + LatB group compared with in the SS + LPA group (Fig. 4). These results suggested that LPA may promote YAP protein expression in rASCs, and that integrity of the actin cytoskeleton is critical for the regulation of YAP protein by LPA.

Transduction of rASCs with the shYAP lentiviral system. rASCs at passage three were infected with lentiviruses carrying shYAP-GFP or shControl-GFP. Fluorescence microscopy was used to detect GFP expression in transduced rASCs. GFP-expressing rASCs were observed by fluorescence microscopy 24 h post-transduction (Fig. 5A-C). To verify that the lentiviral system had successfully infected rASCs, the mRNA expression levels of YAP were measured by RT-qPCR 5 days post-transduction. The results revealed that YAP mRNA expression in the shYAP#1 and shYAP#2 groups was significantly lower than in the shControl group (Fig. 5D). Furthermore, YAP protein expression in the shYAP#1 and shYAP#2 groups was markedly lower compared with in the shControl group 7 days post-transduction (Fig. 5E). The shYAP#1 lentiviral system was used for subsequent experiments.

LPA and S1P promote rASC proliferation by activating YAP. YAP is a transcriptional co-activator that promotes the expression of various downstream genes, including CTGF and Ankrd1 (14). The present results demonstrated that CTGF, Ankrd1 and YAP mRNA expression levels were significantly increased when rASCs were treated with LPA (10 µmol/l) or S1P (10 µmol/l) for 1 h compared with in the SS control group (Fig. 6A-C). However, rASCs transduced with shYAP lentiviruses exhibited no increase in CTGF, Ankrd1 and YAP mRNA expression when treated with LPA or S1P (Fig. 6A-C). Furthermore, the expression levels of PCNA in rASCs were evaluated following treatment with LPA and S1P for 3 days. The results demonstrated that LPA and S1P significantly increased PCNA mRNA expression in rASCs (Fig. 6D). However, when rASCs were transduced with a shYAP lentivirus, the effects of LPA and S1P on PCNA expression were abrogated (Fig. 6D). rASC proliferation was investigated using CCK-8 cell proliferation assays and flow cytometric analysis. CCK-8 results revealed that LPA and S1P significantly increased rASC proliferation compared with in the SS control group 2 and 3 days after treatment (Fig. 6E and F). Infection with a shYAP lentivirus significantly inhibited LPA- and S1P-induced cell proliferation (Fig. 6E and F). Cell cycle distribution of rASCs was also investigated using flow cytometry, and representative flow cytometric profiles are presented (Fig. 6G and H). Flow cytometric analysis revealed that treating rASCs with LPA and S1P for 3 days significantly increased the percentage of rASCs in S and G2/M phases compared with in the SS control group (Fig. 6G and H). However, when rASCs were infected with a shYAP lentivirus, no significant difference was observed in the percentage of rASCs in the S and G2/M phases compared with in the SS control group (Fig. 6G and H).
In conclusion, these results suggested that LPA and S1P may induce rASC proliferation by activating YAP expression.

YAP inactivation promotes osteogenesis and inhibits adipogenesis of rASCs. The protein expression levels of YAP were investigated 7 days after osteogenic and adipogenic differentiation of rASCs. The results revealed that YAP protein expression was increased during osteogenic differentiation of rASCs (Fig. 7A and B). Conversely, the protein expression of YAP was significantly decreased during adipogenic differentiation (Fig. 7A and B). YAP knockdown following transduction with shYAP led to a marked increase in osteogenic differentiation, as demonstrated by increased ALP staining in the Osteo + shYAP group compared with in the Osteo and Osteo + shControl groups 5 days after cell culture (Fig. 8A). In accordance with this, the expression levels of the osteogenic differentiation-related gene RUNX2 were also increased in the Osteo + shYAP group compared with in the Osteo and Osteo + shControl groups 5 days after cell culture (Fig. 8B). The effects of YAP knockdown on adipogenic differentiation of rASCs were also investigated. The results suggested that adipogenic differentiation of rASCs was reduced following YAP inhibition, as adipocyte formation was markedly decreased 5 days after cell culture in the Adipo + shYAP group compared with in the Adipo and Adipo + shControl groups (Fig. 8C). Expression of the adipogenic differentiation-related gene PPARγ was also decreased by YAP knockdown 5 days after cell culture compared with in the control groups (Fig. 8D). In conclusion, these results suggested that YAP may serve an important role in rASC osteogenic and adipogenic differentiation.

Discussion

MSCs are promising cells in the field of regenerative medicine. BMSCs and ASCs are two distinct lineages of MSCs, which are currently being investigated due to their potential clinical applications (18). ASCs represent an abundant and easily accessible source of adult stem cells that can differentiate along numerous lineage pathways (19,20). ASC-based therapies for the treatment of musculoskeletal disorders are currently being developed (21); however, the factors that mediate ASC proliferation and differentiation remain poorly understood. Increasing understanding of the molecular mechanisms governing proliferation and differentiation of ASCs may reveal their potential clinical applications.

The regulation of YAP subcellular localization has not been studied extensively. A study by Zhao et al in epithelial cells revealed that YAP localization and phosphorylation are dependent on cell density (22). At low cellular densities, YAP is predominantly localized in the nuclei, translocating to the cytoplasm when cell density is increased (22). Furthermore, Dupont et al reported that YAP/TAZ subcellular localization and activity are regulated by ECM stiffness and cell geometry (23). In addition, the activity of YAP/TAZ in mammary epithelial cells grown on stiff hydrogels has been reported to be comparable to that of cells grown on tissue culture plastic, whereas cells cultured on soft matrices exhibit decreased YAP/TAZ activity (23). It has also been reported that inhibition of the actin cytoskeleton decreases YAP/TAZ nuclear accumulation and transcriptional activity in mammary epithelial cells (23). The present study demonstrated that the subcellular distribution of YAP in rASCs was regulated by cell density and the actin cytoskeleton.
In the low density group, YAP was predominantly localized to the nucleus, whereas in the high density group, YAP expression was markedly decreased in rASC nuclei; however, no significant cytoplasmic localization was detected. These results are not in agreement with those for epithelial cells (22,23). It may be hypothesized that this difference is due to the stem-like characteristics of rASCs and their primordial state, although more thorough investigations are required to confirm this. Consistent with a previous study (23), the present study also indicated that disruption of the actin cytoskeleton with LatB may induce YAP cytoplasmic translocation. The present results suggested that the expression of the YAP target genes, CTGF and Ankrd1, was markedly decreased in the high density and LatB-treated groups compared with in the corresponding control groups.

It has previously been reported that LPA acts through G12/13-coupled receptors to inhibit LATS kinase in the Hippo signaling pathway, thereby activating the transcriptional co-activators YAP/TAZ (14). In the present study, the effects of LPA on YAP protein expression were detected in rASCs. LPA is a glycerophospholipid-signaling molecule present in all tissues that binds to receptors, such as LPA1-6, to initiate intracellular signaling cascades (24). Because serum contains LPA, rASCs in the present study were serum starved for 24 h in order to avoid the effects of serum-derived LPA on YAP expression. YAP expression was decreased in rASCs cultured in DMEM/F12 (SS without LPA), whereas treatment of rASCs with LatB further decreased YAP expression. Treatment with LPA (10 µmol/l) significantly increased YAP expression, whereas LatB treatment partly abolished the effects of LPA on rASCs. These results suggested that LPA may stimulate YAP expression in rASCs by modulating the actin cytoskeleton.

Previous studies have revealed that YAP overexpression increases liver and heart size by increasing cell number (25-27). YAP overexpression in other tissues, such as the skin and intestines, results in an enlargement of the stem cell pool but no overall organ enlargement (28,29). The general conclusion from previous genetic analyses is that YAP induces cell proliferation and tissue development. In the present study, the effects of YAP activation on rASC proliferation were detected. S1P has been reported to possess overlapping effects with LPA (30). The present results revealed that LPA and S1P promoted rASC proliferation, potentially by increasing YAP expression. CTGF and Ankrd1 are both well-characterized YAP target genes. Treatment with LPA and S1P increased CTGF, Ankrd1 and YAP gene expression in rASCs. Furthermore, treatment with LPA or S1P increased the mRNA expression levels of PCNA in rASCs. Conversely, YAP knockdown significantly abrogated the proliferative effects of LPA and S1P on rASCs, thus suggesting that LPA and S1P promote rASC proliferation by activating YAP.

Coordinated proliferation and differentiation of adult stem cells is important for the regeneration and homeostasis of adult tissues. Balanced proliferation and differentiation of muscle satellite stem cells, which express myogenic regulator factor 5, are critical for the innate regeneration response of adult skeletal muscles (31). It has previously been reported that YAP is mainly localized in the nuclei of mouse myoblasts (32), and upon differentiation, YAP is translocated to the cytoplasm and phosphorylated (32).
These findings suggested that YAP localization and activity serve a role in the regulation of cell differentiation. The osteogenic differentiation-related master gene RUNX2 has been revealed to interact with YAP and TAZ (33-35). It has also been demonstrated that recruitment of TAZ to RUNX2 target genes significantly promotes osteogenic differentiation of MSCs, whereas TAZ knockdown in MSCs results in reduced osteogenic differentiation (35). To the best of our knowledge, the role of YAP in the osteogenic or adipogenic differentiation of rASCs has yet to be fully elucidated. The present study investigated the effects of YAP inactivation on the osteogenesis and adipogenesis of rASCs. Initially, YAP protein expression was examined during osteogenic and adipogenic differentiation in rASCs; YAP protein expression was increased during osteogenesis and decreased during adipogenesis. Furthermore, YAP knockdown increased osteogenic differentiation and inhibited adipogenic differentiation. Since Wnt signaling is believed to be a major signaling pathway that controls MSC osteogenic differentiation, it may be hypothesized that YAP blocks osteogenic differentiation induced by Wnt signaling by binding β-catenin or inducing negative regulators of Wnt signaling. As such, YAP knockdown may enhance osteogenic differentiation, while inhibiting adipogenic differentiation of rASCs. These results suggested that YAP is a critical regulator of rASC differentiation. However, the present study provides only the initial observations that YAP protein expression was increased during osteogenesis and decreased during adipogenesis, and that YAP knockdown increased osteogenic differentiation and inhibited adipogenic differentiation; further studies are required to confirm the detailed molecular mechanisms.

In conclusion, the present study demonstrated that YAP subcellular localization in rASCs was regulated by cell density and the actin cytoskeleton. Furthermore, it was revealed that YAP expression in rASCs may be regulated, in part, by LPA and the actin cytoskeleton. YAP activation was demonstrated to promote rASC proliferation, whereas YAP knockdown promoted osteogenesis and inhibited adipogenesis. Therefore, targeted modulation of YAP in rASCs may be an effective novel strategy to control rASC proliferation and differentiation for the treatment of various musculoskeletal system diseases. However, the present study is potentially limited by the use of only rat-derived ASCs, and further studies using human ASCs are required in order to translate these findings into humans.
Semi-urgent pulmonary vein isolation using cryoballoon for haemodynamically unstable atrial fibrillation storm in a patient with low cardiac output syndrome: a case report

Background: Atrial fibrillation and heart failure are common coexisting conditions associated with hospitalisation for heart failure and death. Pulmonary vein isolation is a well-established option for symptomatic atrial fibrillation and for atrial fibrillation concomitant with heart failure with reduced left ventricular ejection fraction. Recently, pulmonary vein isolation using cryoballoon showed non-inferiority to radiofrequency ablation with respect to the treatment of patients with drug-refractory paroxysmal atrial fibrillation. However, the effectiveness of acute-phase rhythm control by semi-urgent pulmonary vein isolation using cryoballoon in patients with haemodynamically unstable atrial fibrillation storm accompanied with low cardiac output syndrome is unclear. Herein, we present a case in which semi-urgent pulmonary vein isolation using cryoballoon was effective for acute-phase rhythm control against drug-resistant and haemodynamically unstable repetitive atrial fibrillation tachycardia accompanied with low cardiac output syndrome.

Case presentation: A 57-year-old man was hospitalised for New York Heart Association functional class 4 heart failure with atrial fibrillation tachycardia and a reduced left ventricular ejection fraction of 20%, accompanied with low cardiac output syndrome-induced liver damage. The haemodynamics collapsed during atrial fibrillation tachycardia, which had become resistant to intravenous amiodarone and repeated electrical cardioversions. In addition to atrial fibrillation, atrial tachycardia and common-type atrial flutter appeared on day 3. Multiple organ failure progressed gradually due to haemodynamically unstable atrial fibrillation tachycardia storm accompanied with low cardiac output syndrome. On day 4, to focus on treatment of heart failure and multiple organ failure, semi-urgent rescue pulmonary vein isolation using cryoballoon for atrial fibrillation and cavotricuspid isthmus ablation for common-type atrial flutter were performed for acute-phase rhythm control. Soon after the ablation procedure, atrial fibrillation and common-type atrial flutter were lessened, and sinus rhythm was restored. Stable haemodynamics was successfully achieved with improvement of hepatorenal function. The patient was discharged on day 77 without complications.

Conclusions: This case demonstrates that acute-phase rhythm control by semi-urgent pulmonary vein isolation using cryoballoon could be a treatment option in patients with haemodynamically unstable atrial fibrillation tachycardia storm accompanied with low cardiac output syndrome, which is refractory to cardioversion and drug therapy.

Keywords: Atrial fibrillation, Low cardiac output syndrome, Catheter ablation, Congestive heart failure, Pulmonary vein isolation, Cryoballoon ablation

Background

Atrial fibrillation (AF) and heart failure (HF) are common coexisting conditions associated with hospitalisation for HF and death [1].
Pulmonary vein isolation (PVI) is a well-established option for symptomatic paroxysmal AF [2] and for AF concomitant with HF with reduced left ventricular ejection fraction (HFrEF) [3-6]. PVI for patients with AF concomitant with HFrEF demonstrated a significant reduction in overall mortality rate and in the incidence of hospitalisation for worsening HF, as well as an improvement in left ventricular ejection fraction (LVEF), compared with conventional drug therapy [5]. The cryoballoon is a recently developed ablation tool that has shown non-inferior efficacy and overall safety [7-9]. However, the role and significance of acute-phase rhythm control by semi-urgent PVI is not yet established. Herein, we present a case in which semi-urgent PVI using cryoballoon was effective for acute-phase rhythm control against drug-resistant and haemodynamically unstable repetitive AF tachycardia storm accompanied with low cardiac output syndrome (LOS) and LOS-induced multiple organ failure (MOF).

Case presentation

A 57-year-old man was referred to our hospital for New York Heart Association functional class 4 HF and extremely elevated liver enzymes. He had had palpitations for 2 weeks, accompanied by orthopnoea and serious fatigue on admission, despite being previously healthy without any history of AF. Electrocardiography showed AF tachycardia of approximately 180 beats per minute, and bedside echocardiography showed a low LVEF of 20%. In addition to fatigue, coexisting hypotension and an elevated lactate of 13 mmol/L indicated LOS. Electrical cardioversion was conducted and barely terminated AF successfully on day 1, restoring blood pressure and the urinary response to intravenous furosemide. Intravenous landiolol hydrochloride was administered for AF tachycardia. HF and LOS were treated with intravenous dobutamine, intravenous furosemide and oral tolvaptan, and non-invasive positive pressure ventilation under mild sedation using intravenous dexmedetomidine hydrochloride.

On day 2, torsade de pointes suddenly occurred subsequent to a premature atrial beat with a long-short coupling interval in the setting of QT prolongation. Cardiopulmonary resuscitation with electrical cardioversions, intratracheal intubation and establishment of mechanical ventilation were carried out, which achieved return of spontaneous circulation. Temporary atrial inhibited (AAI) pacing was emergently established to shorten the prolonged QT interval and maintain a regular heart rate. However, AF tachycardia recurred repetitively. Repeated electrical cardioversions failed to terminate AF. Hypotension continued along with oliguria. The liver dysfunction was further exacerbated, with an aspartate aminotransferase of 11,708 U/L. Intravenous amiodarone was started. On day 3, AF tachycardia with hypotension still occurred despite intravenous amiodarone and gradually became resistant to electrical cardioversions. Atrial tachycardia (AT) and common-type atrial flutter (AFL) appeared in addition to AF. On day 4, disseminated intravascular coagulation (DIC) was diagnosed according to the Japanese Society on Thrombosis and Hemostasis criteria, with a DIC score of 6 points [10]. Acute kidney injury was diagnosed according to the Kidney Disease: Improving Global Outcomes criteria, fulfilling both the serum creatinine and urine output criteria [11].
At this time, because the patient's general condition was becoming worse and more resistant to treatment, we decided to perform semi-urgent rescue ablation for AF and common-type AFL for acute-phase rhythm control, in order to treat HF and MOF. The patient's family agreed with our treatment policy and signed informed consent for the semi-urgent rescue ablation procedure. On the same day, prior to the procedure, transoesophageal echocardiography was performed and found no detectable thrombus in the left atrium (LA). PVI for the AF and cavotricuspid isthmus ablation for the common-type AFL were planned. Fortunately, the patient's haemodynamics did not collapse at the time of ablation, so we performed our institutional standard procedure including 3-D mapping. The cryoballoon was chosen as the PVI catheter because cryoballoon ablation was expected to have a shorter procedure time and a lesser thrombogenic effect than other catheter types [7,12].

A 20-polar catheter (Response™; Abbott, St. Paul, MN, USA) was placed in the coronary sinus. A three-dimensional electroanatomical map was constituted by a cardiac mapping system (EnSite Velocity™; Abbott). Transseptal access was obtained using the standard Brockenbrough needle technique with intracardiac ultrasound and fluoroscopic guidance, and an 8-Fr SL0 sheath (Swartz™; Abbott) was inserted into the LA. The cardiac geometry including all pulmonary veins (PV) was established using a 20-pole circular mapping catheter (Reflexion Spiral™; Abbott). No left atriography was conducted because of the acute kidney injury. We then changed the SL0 sheath to a steerable sheath (FlexCath Advance™; Medtronic Inc., Minneapolis, MN, USA) and inserted a 28-mm second-generation cryoballoon catheter (Arctic Front Advance™; Medtronic Inc.). The cryoballoon was placed at the ostium of each PV in turn, and cryoballoon ablation was performed after complete occlusion of each PV, as confirmed by a minimum amount of contrast agent. During cryoenergy deliveries, the oesophageal temperature and the diaphragmatic compound motor action potential were monitored to avoid LA-oesophageal fistula and phrenic nerve injury. Additional touch-up radiofrequency ablation using FlexAbility™ (Abbott) was applied to the residual LA-PV conduction gap at the bottom of the right inferior PV after cryoballoon ablation, and complete PVI was achieved (detailed PVI procedural data are shown in Table 1 and Fig. 1). Subsequently, a 20-polar catheter (Livewire™; Abbott) was placed around the tricuspid annulus, confirming that the AFL was cavotricuspid-isthmus dependent. Cavotricuspid isthmus ablation for common-type AFL was performed by standard procedure using a radiofrequency ablation catheter and successfully achieved bidirectional block. The whole procedure was finished uneventfully, restoring sinus rhythm and a blood pressure of approximately 110 mmHg.

Thereafter, AF and other atrial arrhythmias seldom occurred and were terminated by single electrical cardioversion (Fig. 2). Normal blood pressure and urine output were restored (Fig. 2). The hepatic and renal functions improved gradually as well. On day 7, intravenous amiodarone was discontinued. On day 11, the patient was weaned from ventilator support. The DIC eventually resolved. On day 18, a pulmonary abscess requiring long-term antimicrobial treatment was identified. On day 22, AF recurred, but oral amiodarone was restarted and suppressed the AF.
Although the patient had a pulmonary abscess that required approximately one month of antimicrobial treatment after ventilator withdrawal, he was discharged alive on day 72. Elective coronary angiography and left ventriculography revealed no significant coronary stenosis and normalisation of the LVEF, leading to the diagnosis that tachycardia-induced cardiomyopathy due to AF tachycardia was the cause of the reduced left ventricular function. For 2 years, despite the discontinuation of HF drugs and amiodarone, the patient has been free from HF symptoms and atrial arrhythmias, including AF and AFL.

Table 1 The procedural data of pulmonary vein isolation using cryoballoon

Discussion and conclusion
To our knowledge, this is the first report of semi-urgent rescue PVI using cryoballoon for acute-phase rhythm control against an amiodarone-resistant AF tachycardia storm causing LOS and LOS-induced MOF with DIC in a tachycardia-induced cardiomyopathy. AF may cause adverse haemodynamic effects and decrease cardiac output through the loss of atrial contraction, reduction of left ventricular filling due to rapid ventricular rates and irregular RR intervals, increased maximal oxygen consumption, and exacerbation of mitral and tricuspid regurgitation [1]. Therefore, restoration of sinus rhythm in AF patients can be expected to improve cardiac output and decrease maximal oxygen consumption [3,4]. PVI is the established treatment for the rhythm control strategy of AF, even AF refractory to antiarrhythmic drugs [2], because PVI has anti-AF mechanisms different from those of drugs, such as eliminating AF substrate, denervating the autonomic nerves, and, most importantly, eliminating AF triggers arising from the PVs [13,14]. On the contrary, the role and significance of acute-phase rhythm control by semi-urgent PVI are not yet established. However, Morishima et al. reported a similar case. They described that semi-urgent rescue PVI could eliminate a haemodynamically unstable AF storm and contribute to the improvement of haemodynamics in a patient with an acute myocardial infarction, although that patient died from ventricular fibrillation as a complication of the acute myocardial infarction [15].

Fig. 1 Pulmonary vein isolation using cryoballoon. a: Baseline intracardiac electrograms of the left pulmonary veins before isolation. b: Intracardiac electrograms after the left pulmonary vein isolation. c: Baseline intracardiac electrograms of the right pulmonary veins before isolation. d: Intracardiac electrograms after the right pulmonary vein isolation. e: Fluoroscopic AP image demonstrating positions of the cryoballoon for all pulmonary veins. f: Three-dimensional map with grey area representing the area ablated by pulmonary vein isolation using cryoballoon. CS, coronary sinus; dist, distal bipole; prox, proximal bipole; IPV, inferior pulmonary vein; LIPV, left inferior pulmonary vein; LSPV, left superior pulmonary vein; RIPV, right inferior pulmonary vein; RSPV, right superior pulmonary vein; SPV, superior pulmonary vein

Fig. 2 Acute-phase clinical profile. Atrial fibrillation tachycardia was accompanied by hypotension and oliguria. Semi-urgent pulmonary vein isolation using cryoballoon improved haemodynamics. Solid arrows indicate electrical cardioversions. The bar graph represents urine volume per hour. The solid line graph represents systolic blood pressure. The dotted line graph represents heart rate. BP, blood pressure
This report also supported the benefit of semi-urgent rescue PVI for acute-phase rhythm control against haemodynamically unstable AF tachycardia. In addition, the present case had LOS-induced MOF and DIC, in which organ perfusion flow was originally reduced by left ventricular dysfunction and was additionally reduced by the AF tachycardia storm [1]. Furthermore, especially in LOS-induced shock states, the importance of the increase in cardiac output, and the resultant increase in blood pressure, achieved by rhythm control may be emphasised because of the following pathophysiology: (1) centralisation of the circulating blood and reduction of organ perfusion flow due to the neurohormonal response [16], (2) change of the source of liver blood perfusion from the portal vein to the hepatic artery due to the hepatic arterial buffer response [17], and (3) dependency of renal perfusion on cardiac output and, in hypotension, on blood pressure [18]. Together with these considerations, we thought that acute-phase rhythm control by semi-urgent PVI would have a definite role in LOS-induced MOF due to AF tachycardia, at least in eliminating the tachycardia as an aggravating factor, and would enable the team to focus on intensive care of the underlying diseases and disorders. We chose cryoballoon ablation for the semi-urgent rescue PVI in the present case. Cryoballoon ablation is a balloon-based ablation system using cryoenergy. In recent years, cryoballoon ablation has become the leading alternative to radiofrequency catheter ablation, showing non-inferiority in freedom from AF/AT recurrence and in overall safety [7][8][9]. Several clinical studies showed a shorter procedure time with cryoballoon ablation than with radiofrequency catheter ablation [7,9]. An animal study showed that cryoballoon ablation has a lower incidence of thrombus formation than radiofrequency catheter ablation [12]. The shorter procedure time may impose a lesser burden in the intensive care setting, and the lower incidence of thrombus formation may be favourable in an intensive care setting complicated by DIC. Although the significance of rescue cryoballoon ablation is unknown, we consider that cryoballoon ablation could be a favourable tool when semi-urgent rescue PVI is required, as in the present case. On the other hand, balloon-based ablation, including cryoballoon ablation, generally requires an additional amount of contrast medium. Performing balloon-based ablation, including cryoballoon ablation, without left atriography may be considered for patients with acute kidney injury, as in the present case. At the same time, we would like to emphasise the importance of careful and timely assessment of the benefits and risks of semi-urgent rescue PVI using cryoballoon, because it can be a complex procedure for a complex case. In conclusion, acute-phase rhythm control by semi-urgent PVI using cryoballoon may be a viable treatment option in patients with haemodynamically unstable AF tachycardia that is refractory to cardioversion and drug therapy and is accompanied by LOS and LOS-induced MOF with DIC.
2020-09-11T14:14:31.695Z
2020-09-11T00:00:00.000
{ "year": 2020, "sha1": "cf184b8a3446edb9826dfe282e8e603b8856cf65", "oa_license": "CCBY", "oa_url": "https://bmccardiovascdisord.biomedcentral.com/track/pdf/10.1186/s12872-020-01682-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cf184b8a3446edb9826dfe282e8e603b8856cf65", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9861048
pes2o/s2orc
v3-fos-license
Low-ω3 Fatty Acid and Soy Protein Attenuate Alcohol-Induced Fatty Liver and Injury by Regulating the Opposing Lipid Oxidation and Lipogenic Signaling Pathways Chronic ethanol-induced downregulation of peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC1α) and upregulation of peroxisome proliferator-activated receptor gamma coactivator 1-beta (PGC1β) affect hepatic lipid oxidation and lipogenesis, respectively, leading to fatty liver injury. Low-ω3 fatty acid (low-ω3FA), which primarily regulates PGC1α, and soy protein (SP), which seems to have its major regulatory effect on PGC1β, were evaluated for their protective effects against ethanol-induced hepatosteatosis in rats fed Lieber-DeCarli control or ethanol liquid diets with high or low ω3FA fish oil and soy protein. Low-ω3FA and SP opposed the actions of chronic ethanol by reducing serum and liver lipids with concomitantly decreased fatty liver. They also prevented the downregulation of hepatic Sirtuin 1 (SIRT1) and PGC1α and their target fatty acid oxidation pathway genes and attenuated the upregulation of hepatic PGC1β and sterol regulatory element-binding protein 1c (SREBP1c) and their target lipogenic pathway genes via the phosphorylation of 5′ adenosine monophosphate-activated protein kinase (AMPK). Thus, these two novel modulators attenuate ethanol-induced hepatosteatosis and consequent liver injury, potentially by regulating the two opposing lipid oxidation and lipogenic pathways. Introduction Alcoholic liver disease is a major cause of morbidity and mortality, affecting millions world-wide [1]. Long-term exposure to ethanol causes fatty liver disease, or hepatosteatosis [2], which further leads to steatohepatitis, fibrosis, and finally cirrhosis that may result in death [3]. Hepatosteatosis is characterized by the accumulation of lipids, triglyceride and cholesterol, due to an imbalance between hepatic lipid degradation and synthesis, leading to an enlarged fatty liver [3]. Studies have shown that alcohol causes the following: (i) increased mobilization of adipose fat into the liver, due to increased adipose lipoprotein lipase, (ii) decreased fat oxidation due to downregulation of fatty acid oxidation genes, (iii) increased fat synthesis due to upregulation of lipogenic genes, and (iv) impaired synthesis of apolipoprotein B and secretion of very low density lipoprotein (VLDL), the major lipoprotein for the export of hepatic lipids to peripheral tissues [4]. The transcriptional coactivators peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC1α) and 1-beta (PGC1β), as well as the sterol regulatory element-binding proteins (SREBPs), play vital roles in regulating the lipid oxidizing and lipogenic genes and thereby control the progression of hepatosteatosis and the consequent onset of fibrosis and other forms of liver injury [5,6]. Peroxisome proliferator-activated receptors (PPARs) are members of the nuclear hormone receptor superfamily that are ligand-dependent transcription factors. There are three isotypes, namely, PPARα, PPARγ, and PPARβ/δ. Whereas PPARα is expressed in all tissues, controlling the fatty acid oxidation pathway genes, PPARγ is primarily expressed in adipose tissue and the liver, regulating the lipogenic pathway genes. PPARβ/δ is found in many tissues, although mainly in gut, kidney, and heart [7][8][9]. It is linked to colon cancer [10] but has not been well studied.
PGC1α regulates lipid oxidation pathway genes via PPARα, and PGC1β regulates lipogenic pathway genes via the sterol regulatory element-binding proteins SREBP1a, SREBP1c, and SREBP2 [11]. SREBP1c predominantly regulates fatty acid biosynthesis, while SREBP1a and SREBP2 control cholesterol synthesis [3]. AMP-activated protein kinase (AMPK) is known to be activated by phosphorylation to form phosphorylated AMPK (pAMPK), which, in turn, phosphorylates and inactivates acetyl CoA carboxylase (ACC), the rate-limiting enzyme of lipogenesis [4,12,13]. PGC1α is controlled by silent information regulator 1 (SIRT1), the eukaryotic equivalent of the SIR2 gene in prokaryotes, and by histone acetyltransferases (HAT) [14]. SIRT1 activates PGC1α by deacetylation, while HAT inactivates PGC1α by acetylation [15]. On the other hand, SIRT1 destabilizes SREBP1c by deacetylation, while HAT stabilizes SREBP1c by acetylation [16]. PGC1β is upregulated by dietary saturated fat and coactivates the SREBP1c and liver X receptor (LXR) families of transcription factors, leading to increased lipogenesis, lipoprotein transport, and VLDL secretion [17,18]. Therefore, any modulator that can either activate PGC1α via the interplay between SIRT1 and HAT or inactivate PGC1β/SREBP1c should be beneficial in preventing alcoholic hepatosteatosis and consequent liver injury. Omega-3/6 fatty acids are polyunsaturated fatty acids (PUFA) obtained from fish and plant sources. The most common omega-3 PUFA are eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), and alpha-linolenic acid (ALA). Whereas algae and oils from fish such as salmon, mackerel, and herring are rich in EPA and DHA, ALA is found in vegetable oils such as canola, flax seed oil, and soybean oil, and in nuts such as walnuts [19]. Soy proteins (SP) are found in the soybean legume, containing all 8 essential amino acids and very low saturated fat [20]. In recent times, both omega-3 PUFA and SP have received increased attention due to their beneficial effects against cardiovascular disease, obesity, type 2 diabetes, and certain cancers, among others [19,21,22]. Low omega-3 fatty acids (low-ω3FA) are known to have lipid lowering effects in humans [23], while SP lowers plasma and liver cholesterol and triglycerides in both animals and humans [24]. Studies have shown that SP prevents hyperinsulinemia and reduces the expression of LXR and SREBP1c mRNAs in the obese Zucker rat model [25][26][27]. However, the molecular mechanisms by which these dietary modulators control the two transcriptional coactivators are yet to be explored. In this study, we demonstrate the novel actions of low-ω3FA and SP in inhibiting alcoholic hepatosteatosis by regulating the two opposing vital pathway genes of lipid degradation and synthesis via PGC1α and PGC1β, respectively. Therefore, low-ω3FA and SP are potentially potent dietary modulators that seem to have these profound lipid lowering properties involving lipid catabolic and anabolic pathways. Moreover, low-ω3FA and SP stimulate AMPK phosphorylation and block ethanol-induced increased lipogenesis. Thus, this may be the first time a systematic approach has been made to alleviate alcoholic hepatosteatosis by the combined effects of novel natural modulators that promise to intervene in both the lipid oxidizing and lipogenic pathways. Animals. Wild-type (WT) female Wistar rats (∼150 g body weight) from Charles River, Wilmington, MA, were housed in pairs in plastic cages, in a temperature-controlled room at 25°C with a 12-hour light-dark cycle.
All animals were fed a pelleted commercial diet (Purina Rodent Chow, number 500, TMI Nutrition, St. Louis, MO) during the first week of the acclimation period after arrival. Experiments were performed according to the approved institutional animal care and use committee protocol. Female rats were randomly divided into 4 groups of 5 rats each and were pair-fed Lieber-DeCarli control or ethanol (EtOH) liquid diets (36% total fat calories) with high-ω3FA (14.1% of calories as ω3FA) or low-ω3FA (2.7% of calories as ω3FA) fish oil, or EtOH with SP, for 4 weeks. Diets. The diets are isocaloric, and their formulations are according to the modified method of Lieber and DeCarli [28] with the recommended normal nutrients, vitamins, and minerals according to the AIN-93 diet [29]. Thus, 36% of the total energy of the ethanol diet is from fat, 20% from protein, 36% from EtOH, and the rest from carbohydrate. The corresponding isocaloric control diet has isoenergetic amounts of dextrin-maltose in place of EtOH. The EtOH concentration in the liquid diet was gradually increased, starting at the 1% level on day 1 and reaching the 5% level over a 7-day period, to allow the animals to adapt to EtOH in the diet. These diets are supplemented with 120 IU of tocopherol/L and 200 mg/L of tertiary-butyl hydroquinone as antioxidants, as per the AIN-93 diet recommendations [28,29]. Lipid and Lipoprotein Analysis. Blood samples were collected and centrifuged at 3100 rpm using a Beckman J6M (Beckman Coulter, Indianapolis, IN) for 10 min at 4°C. Separated serum, plasma, and liver samples were frozen at −80°C until assayed. Liver lipids and high density lipoproteins (HDL) were extracted as previously described [30,31]. Cholesterol was analyzed using Sigma diagnostic kit number 352 (Sigma-Aldrich, St. Louis, MO) according to the method of Allain et al. [32], and triglycerides were analyzed using Sigma diagnostic kit number 339 (Sigma-Aldrich, St. Louis, MO) according to the method of McGowan et al. [33]. All protein concentration determinations were done according to the Bradford method [34] with bovine serum albumin (BSA) as the standard. Isolation of Plasma HDL and Its Labeling with [³H] Cholesteryl Oleate. HDL was isolated from the pooled plasma of the various groups of rats according to Gidez et al. [30]. Protein concentration was determined colorimetrically using bovine serum albumin (BSA) as a standard [34]. HDL cholesterol content was measured according to Zlatkis and Zak [35]. HDL labeling with [³H] cholesteryl oleate was performed according to Basu et al. [36], and the specific activity is expressed as dpm/mg HDL cholesterol. Quantification of Hepatosteatosis by Oil Red O. Livers from the various experimental groups were cut into small pieces, washed immediately with ice-cold PBS, and mounted in optimum cutting temperature (OCT) embedding compound in peel-a-way embedding molds (Electron Microscope Sciences, Hatfield, PA). Liver tissues were cryosectioned and stained with oil red O to measure accumulation of lipid using an automated histometric system (Image-Pro Plus 6.1, Media Cybernetics, Bethesda, MD) as described previously [37]. The data are expressed as the average oil red O percentage area of lipid staining. Values are means ± SEM. RNA Isolation and Real-Time RT-PCR. Total RNA was isolated from each liver using Tri-Reagent (Molecular Research Center, Cincinnati, OH) according to the manufacturer's instructions. Isolated total RNA was reverse transcribed as described by the manufacturer (Invitrogen, Carlsbad, CA).
Quantitative real-time PCR was performed on a Bio-Rad iCycler using the SYBR green PCR mix (Bio-Rad, Hercules, CA). A typical real-time PCR reaction mixture included the same amount of cDNA template from the RT step, 10 pmol of each primer, 10 μM dNTPs, 3 mM MgCl₂, 10x buffer, and 2 units of high-fidelity Taq DNA polymerase in a reaction volume of 50 μL with 0.1x SYBR Green I. The PCR conditions were 3 min at 95°C followed by 40 cycles of 95°C for 30 seconds, 55°C for 30 seconds, and 72°C for 1 min. Each primer pair was first tested by regular PCR to be highly effective and specific for amplification. β-Actin was used as the standard housekeeping gene. Ratios of specific mRNA and β-actin mRNA expression levels were calculated by subtracting the threshold cycle number (Ct) of the target gene from the Ct of β-actin and raising 2 to the power of this difference (a minimal sketch of this calculation is given at the end of the Methods). Ct values were defined as the number of PCR cycles at which the fluorescent signal during the PCR reaches a fixed threshold. Target gene expression was expressed relative to β-actin expression. The various primer pairs for the indicated rat genes and transcription factors are listed in Supplemental Table 1 in the Supplementary Material available online at http://dx.doi.org/10.1155/2016/1840513. Western Blot Analysis. Liver extracts from each experimental group were diluted into SDS-PAGE sample buffer [50 mM Tris (pH 6.8), 2% SDS, 10% glycerol, 15 mM 2-mercaptoethanol, and 0.25% bromophenol blue] and electrophoretically resolved on Novex (Life Technologies, San Diego, CA) 4-20% denaturing polyacrylamide gels. Proteins were electrophoretically transferred to PVDF membranes and processed for immunodetection using the corresponding polyclonal primary antibodies for each of the above factors. After thorough washing, the primary antibody was detected with a horseradish peroxidase-conjugated secondary antibody specific to the IgG of the respective primary antibody. Protein bands were visualized by chemiluminescence and quantified using a FluorChem Imager (Alpha Innotech, CA). The nuclear extracts from each group were analyzed for the levels of SIRT1, PGC1α, and PGC1β and the mature form of SREBP1c using the respective specific antibodies, while total protein extracts were analyzed for the levels of ACC, c-Met, AMPK, and pAMPK using the respective specific antibodies. To determine the levels of acetylated PGC1α, the liver nuclear extract from each group was initially immunoprecipitated with anti-PGC1α followed by immunoblotting with an acetylated-lysine antibody. The polyclonal antibodies for all the above transcription factors were purchased from Santa Cruz Biotechnology (Santa Cruz, CA), Cayman Chemicals (Ann Arbor, MI), and UpState Cell Signaling Solutions (Lake Placid, NY). The specificity of each antibody was verified before use in the above analyses. Immunoprecipitation Analysis. Immunoprecipitation was performed as previously described [38]. To determine the levels of acetylated PGC1α, the liver nuclear extract from each group was initially immunoprecipitated with anti-PGC1α (Abcam, Cambridge, MA), followed by immunoblotting with an acetylated-lysine antibody (Cell Signaling Technology, Danvers, MA). Statistical Analysis. Experimental data were statistically analyzed employing paired and unpaired t-tests on the control and experimental values. The appropriate data were analyzed by one-way or two-way analysis of variance (ANOVA) at p < 0.05, followed by Tukey contrasts to evaluate the true correlation between various parameters.
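The relative-quantification step described in the real-time PCR subsection (subtract the target gene's Ct from the β-actin Ct and raise 2 to the difference) is the standard 2^ΔCt calculation. A minimal sketch, using entirely hypothetical Ct values rather than any measured here, might look like this:

```python
import statistics

def relative_expression(ct_target, ct_actin):
    """2**(Ct_actin - Ct_target): target-gene expression relative to beta-actin."""
    return 2 ** (ct_actin - ct_target)

# Hypothetical (target, beta-actin) Ct pairs for replicate livers of one diet group
ct_pairs = [(24.8, 18.2), (25.1, 18.4), (24.6, 18.1)]
values = [relative_expression(t, a) for t, a in ct_pairs]

mean = statistics.mean(values)
sd = statistics.stdev(values)
print(f"relative expression = {mean:.4f} +/- {sd:.4f} (SD)")
```

Fold-changes between diet groups then follow by dividing group means, matching how the figures report EtOH-induced changes relative to control.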
Results
Effects of Low-ω3FA and SP on EtOH-Mediated Hyperlipidemia and Hepatosteatosis. Liver cholesterol and triglycerides were also markedly increased in the EtOH group, by 3.9-fold (p < 0.05) and 4.1-fold (p < 0.05), respectively, compared to control. In contrast, dietary low-ω3FA or SP feeding to the EtOH-fed groups significantly decreased serum and liver cholesterol and triglycerides to levels closer to those of the control group. Furthermore, the hepatic accumulation of lipids as measured by oil red O staining was markedly increased in the EtOH group, by 7.5-fold (p < 0.001), as compared to the control. This effect was significantly reduced after dietary administration of low-ω3FA and SP in the EtOH-fed group, by 93% (p < 0.05) and 45% (p < 0.05), respectively (Figure 1(e); an illustrative sketch of percentage-area quantification follows this subsection). Effects of Low-ω3FA and SP on EtOH-Mediated Alterations in the Lipid Oxidation Pathway. Chronic EtOH led to a significant decrease in fatty acid oxidation (48.7 ± 5.8 nmoles/g/h, p < 0.05) as compared to control (100 ± 8.6 nmoles/g/h). We further investigated whether the mechanisms of action of low-ω3FA and SP on the EtOH-induced decrease in fatty acid oxidation are mediated via the regulation of the transcriptional coactivator PGC1α, SIRT1, and the downstream pathway. Figure 2(a) shows that low-ω3FA and SP treatment restored the chronic EtOH-mediated 32% (p < 0.05) downregulation of SIRT1 mRNA by 85% (p < 0.05) and 80% (p < 0.05), respectively, as compared to the EtOH group. EtOH also significantly downregulated PGC1α mRNA, by 40% (p < 0.05); this was restored to 1.5-fold (p < 0.05) and 2-fold (p < 0.05) over the control level by low-ω3FA and SP treatment, respectively (Figure 2(b)). Additionally, CPT1 mRNA was also markedly downregulated by chronic EtOH (24%, p < 0.05), which was restored to 1.5-fold (p < 0.05) over the control level by these dietary modulators (Figure 2(c)). Similarly, chronic EtOH markedly decreased the nuclear protein expression of SIRT1 and PGC1α, by 38% (p < 0.05) and 35% (p < 0.05), respectively, which was restored to over the control levels by low-ω3FA and SP treatment (Figures 2(d) and 2(e)). PPARα, a ligand-activated transcription factor involved in the regulation of hepatic fatty acid oxidation [39], was also evaluated. EtOH significantly decreased PPARα protein levels by 50% (p < 0.05); this was restored by 1.8-fold (p < 0.05) and 1.6-fold (p < 0.05) by low-ω3FA and SP treatment, respectively (see Supplementary Material, Figure S1). Thus, low-ω3FA and SP are effective modulators in correcting the decreased fatty acid oxidation caused by chronic EtOH via the regulation of SIRT1, PGC1α, CPT1, and PPARα. In order to test whether the action of low-ω3FA and SP on hepatic lipid catabolism was mediated through the active or inactive forms of PGC1α via the modulation of SIRT1, we determined the level of acetylated PGC1α in the liver tissue of the various groups. Figure 3 shows that chronic EtOH increased the hepatic acetylated (inactive) form of PGC1α by 40% (p < 0.05) because of the EtOH-mediated 38% (p < 0.05) decrease in SIRT1 as compared to the control (Figure 2(d)), thereby accounting for the decreased fatty acid oxidation. In contrast, low-ω3FA and SP decreased the inactive form of PGC1α by 37% and 25%, respectively, as compared to the EtOH group (Figure 3) via the upregulation of SIRT1 (Figures 2(a) and 2(d)), thereby restoring the fatty acid oxidation decreased by chronic EtOH to the control level. Thus, low-ω3FA and SP may lower alcoholic hepatosteatosis by augmenting the relative level of the active form of PGC1α, which in turn effectively restored the hepatic lipid catabolism impaired by chronic alcohol exposure.
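The oil red O "percentage area" readout quoted above is, in essence, the fraction of section pixels whose color passes a stain threshold. The authors used the Image-Pro Plus package for this; the toy sketch below only illustrates the idea, and the channel thresholds and function name are our own illustrative assumptions, not the calibrated settings of that software.

```python
import numpy as np

def oil_red_o_percent_area(rgb, red_min=150, green_max=120, blue_max=120):
    """Percent of section area whose pixels look oil-red-O positive.

    rgb is an HxWx3 uint8 image; the channel thresholds are illustrative
    stand-ins for the calibration a histometric package would perform.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    stained = (r >= red_min) & (g <= green_max) & (b <= blue_max)
    return 100.0 * stained.mean()

# Tiny synthetic example: one stained pixel out of four -> 25%
demo = np.zeros((2, 2, 3), dtype=np.uint8)
demo[0, 0] = (200, 60, 60)  # a red, lipid-stained pixel
print(oil_red_o_percent_area(demo))  # -> 25.0
```

Averaging this percentage over sections from each liver yields the mean ± SEM values reported in the figures.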
Effects of Low-ω3FA and SP on Chronic EtOH-Mediated Alterations in the Lipogenic Pathway. Figure 4(a) shows that chronic EtOH markedly upregulated the PGC1β mRNA level, by 52% (p < 0.05), as compared to the control, and low-ω3FA and SP downregulated the EtOH effect by 61% (p < 0.05) and 55% (p < 0.05), respectively. Similarly, as shown in Figure 4(b), the EtOH-upregulated SREBP1c mRNA was reduced to 30% and 50% (p < 0.02) of the control value by low-ω3FA and SP treatment, respectively. Chronic EtOH also markedly upregulated the mRNA expression of ACC, which regulates fatty acid synthesis, by 2-fold (p < 0.05), and this was significantly suppressed, by 50% (p < 0.05) in the low-ω3FA-EtOH group and by 70% (p < 0.05) in the SP-EtOH group (Figure 4(c)). In contrast, as shown in Figure 4(d), the mRNA expression of c-Met was significantly downregulated, by 35% (p < 0.05), after chronic EtOH administration, and low-ω3FA and SP treatment significantly restored the EtOH-induced downregulation of the c-Met mRNA level to 86% (p < 0.05) and 95% (p < 0.05) of the control value, respectively. These results were confirmed by measuring the nuclear or total protein expression of the above genes relative to those of the corresponding subcellular marker proteins. Figure 4(e) shows that low-ω3FA- and SP-fed rats showed suppression of the EtOH-mediated increase (60%, p < 0.05) in the relative nuclear expression of PGC1β, by 68% (p < 0.05) and 63% (p < 0.05), respectively. Similarly, as shown in Figures 4(f) and 4(g), the relative nuclear protein expression of SREBP1c and ACC was also markedly increased in the chronic EtOH group, by 30% (p < 0.05) and 50% (p < 0.05), respectively, compared to the control group. Administration of dietary low-ω3FA and SP reversed these EtOH-mediated effects, decreasing SREBP1c protein expression by 50% (p < 0.05) and 56% (p < 0.05), respectively (Figure 4(f)), and ACC protein expression by 85% (p < 0.05) and 60% (p < 0.05), respectively (Figure 4(g)). On the other hand, c-Met expression was decreased in the EtOH group by 25% (p < 0.05), and it was restored in the low-ω3FA and SP groups by 35% (p < 0.05) and 45% (p < 0.05), respectively, as compared to the EtOH group (Figure 4(h)). Since chronic EtOH increases hepatic ACC activity and lipogenesis by decreasing the phosphorylation of AMPK (pAMPK), a known inhibitor of ACC, we tested whether low-ω3FA or SP can counteract these effects of chronic EtOH by modulating the phosphorylation status of AMPK. As shown in Figures 5(a) and 5(b), although the level of total AMPK was unaffected in all groups, low-ω3FA and SP restored the hepatic level of pAMPK, which was decreased by 50% (p < 0.05) in the EtOH group. This increase in pAMPK could also account for the decreased ACC activity and lipogenesis after low-ω3FA or SP treatment. These findings are consistent with the ability of low-ω3FA or SP to (i) inhibit the chronic EtOH-induced increase in lipogenic pathway genes and (ii) restore the ethanol-mediated decrease in intracellular transport of hepatic triglycerides to the blood compartment due to impaired VLDL assembly and secretion. This would lead to the low-ω3FA- or SP-mediated reduction in the fatty liver caused by chronic alcohol abuse. Discussion Our results show that low-ω3FA and SP exert their hypolipidemic action by upregulating primarily the lipid oxidizing genes via the SIRT1 and PGC1α signaling pathway, which are suppressed by chronic ethanol, and by downregulating the lipogenic pathway genes predominantly via the PGC1β and SREBP1c signaling pathway.
Our data also support the alternative possibility that low-ω3FA and SP could prevent alcohol-induced activation of ACC by phosphorylating it via pAMPK. SIRT1 is an NAD-dependent deacetylase (histone deacetylase, HDAC) that has been linked to many beneficial effects on cellular processes, including gene silencing, insulin resistance, glucose homeostasis, fatty acid metabolism, and aging, while HAT catalyses the opposite reaction [40]. Thus, SIRT1 activates PGC1α by deacetylation, while HAT inactivates PGC1α by acetylation. On the other hand, SIRT1 destabilizes SREBP1c by deacetylation, while HAT stabilizes SREBP1c by acetylation. You et al. [16] and Lieber et al. [41] have elegantly shown that both long-chain and medium-chain saturated fatty acids in the diet restore the expression of SIRT1 and PGC1α that is downregulated by long-chain polyunsaturated fatty acids (PUFA) in chronic ethanol-fed animals. However, PPARα was unaffected by chronic ethanol. Previously, Fischer et al. [42] showed in mice that ethanol leads to PPARα dysfunction, resulting in impaired fatty acid oxidation and the consequent onset of fatty liver, which is overcome by a PPARα agonist. Similarly, other studies [43,44] have shown that alcohol-mediated fatty liver and injury are prevented by a PPARα agonist, presumably by activating c-Met and blocking the alcohol-mediated induction of TNFα. We recently showed that, compared to a high fish oil control liquid diet, feeding the same high fish oil liquid diet containing 5% (w/v) ethanol for 8 weeks significantly downregulated hepatic SIRT1 and PGC1α, with a concomitant decrease in the hepatic rate of fatty acid oxidation [37]. Nanji et al. [45], Ronis et al. [46], and Song et al. [47] have demonstrated that saturated fatty acids protect against chronic alcohol-induced liver injury as compared to high levels of polyunsaturated fatty acids. In addition, Huang et al. [48] demonstrated that low levels of omega-3 polyunsaturated fatty acids, mainly docosahexaenoic acid, suppressed ethanol-induced hepatic steatosis. Similarly, Wada et al. [49] also demonstrated that low levels of fish oil fed prior to ethanol administration were protective [23]. However, chronic ethanol-induced liver damage in rats fed a high fat diet (36% fat calories) was exacerbated [45,46,[54][55][56] with polyunsaturated FA from either vegetable oil (ω6 family) or fish oil (ω3 family), as evidenced by increased serum aspartate aminotransferase and alanine aminotransferase as well as by histopathology. In this study, fish oil constituted 36% of the total calories in the diet, which amounted to 14.1% of the total dietary calories as ω3FA. In contrast, we showed [57] that the inclusion of only 2.7% of total dietary calories as ω3FA resulted in lower plasma and liver lipids in chronic alcohol-fed animals. Furthermore, the same low level of dietary ω3FA restored the decreased ApoE content in HDL. Thus, a low level of ω3FA has beneficial effects [58][59][60], whereas a significant increase in ω3FA seems to have a detrimental effect on the liver [45,46,[54][55][56]. It is possible that increased ethanol consumption in the intragastric model could also have caused the deleterious effects when a PUFA-rich diet was fed [55]. Significantly, a PUFA-containing lecithin diet was shown to prevent alcohol-induced hepatic fibrosis in baboons [61]. We showed [57] that low-ω3FA caused decreased VLDL production and serum lipids, resulting in lipid-deficient ApoE, which can be easily sialylated and become associated with HDL.
This would be consistent with the effects of low-ω3FA in reversing the ethanol-mediated decrease in HDL-ApoE. Our previous work [58] also demonstrated that HDL from low-ω3FA-fed animals was more efficient in carrying out the reverse cholesterol transport (RCT) function compared to HDL from control animals, regardless of whether the animals were on the alcohol or control diet. We found [62] that cholesterol uptake by Hep-G2 cells from reconstituted HDL was stimulated by sphingomyelin (SPM). HDL phospholipid acyl chain composition is known to influence cholesterol efflux [63]. We also showed that chronic ethanol preferentially decreased the SPM concentration in HDL of alcoholics, leading to impaired RCT function [64]. The present study shows that SP downregulated the ethanol-mediated overexpression of PGC1β, SREBP1c, and its target lipogenic genes such as ACC (Figure 4), whereas it restored the ethanol-mediated downregulation of SIRT1, PGC1α, and lipid oxidizing genes such as CPT1 (Figure 2). Overall, our results suggest that the relative hypolipidemic effects of SP compared to low-ω3FA in regulating alcoholic hepatosteatosis were due more to alteration of the lipogenic pathway, whereas those of low-ω3FA compared to SP were due more to alteration of the lipid oxidizing pathway. In summary, this study has demonstrated the following. (1) Low-ω3FA and SP reduced alcoholic hyperlipidemia as well as hepatic lipid accumulation, as evidenced by decreased liver cholesterol and triglycerides as well as hepatic histological lipid scores. (2) Low-ω3FA and SP prevented the alcohol-mediated downregulation of SIRT1 and PGC1α and their target fatty acid oxidation pathway genes. (3) Low-ω3FA and SP attenuated the alcohol-mediated upregulation of PGC1β, SREBP1c, and its target lipogenic pathway genes. (4) Low-ω3FA and SP decreased the liver nuclear SREBP1c level that was increased by chronic ethanol treatment. (5) Low-ω3FA and SP restored the hepatic level of pAMPK that was decreased by chronic alcohol treatment. Conclusion Unlike high dietary ω3FA, low dietary ω3FA protects against chronic alcohol-induced liver injury. We have demonstrated that low-ω3FA and SP could potentially upregulate the SIRT1/PGC1α signaling pathway and downregulate the PGC1β/SREBP1c signaling pathway, alleviating alcoholic hepatosteatosis and liver injury. Thus, our study opens this field to the exploration of other new therapeutic agents targeting the PGC1α and PGC1β pathways for protection not only against alcoholic liver diseases but also against metabolic syndrome and obesity, the major world-wide health problems, especially when superimposed on alcohol abuse.
2018-04-03T00:16:27.721Z
2016-12-18T00:00:00.000
{ "year": 2016, "sha1": "dc43a503d0d18c485d83f828ac1667f5db012a25", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/omcl/2016/1840513.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9cf64587adf67521c0665b0ec79e1ff6770b6a7e", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
220970511
pes2o/s2orc
v3-fos-license
Pancreatic Tumors Complicating Pregnancy: A Concern for Fetomaternal Well Being Introduction Hemoperitoneum resulting from the rupture of pancreatic tumors is a rare condition, especially during pregnancy. Case Presentation We report a case of a 21-year-old gravida 2, para 1, at 25+5 weeks of gestation, who presented to the hospital with severe epigastric pain and decreased fetal movement. Ultrasonography showed intrauterine fetal death, a retroperitoneal mass in the epigastric region, and hemoperitoneum. Computed tomography scan revealed a heterogeneously enhancing pancreatic mass suggestive of pancreatic neoplasm. However, the late diagnosis and the delay in treatment resulted in a deterioration of maternal status with eventual mortality. Conclusion Diagnostic difficulties occur because of the rarity of the condition and vague clinical presentations. In cases of pregnancy complicated by hemoperitoneum, a prompt effort to stop the intraperitoneal bleeding is imperative. Introduction Neoplasms are uncommon during pregnancy, affecting nearly one in 1000 live births, and pancreatic tumors are no exception. 1 The symptoms of pancreatic neoplasms may simulate symptoms of normal pregnancy, resulting in a delay in diagnosis. Rupture of pancreatic neoplasms with subsequent hemoperitoneum is challenging, even more so during pregnancy, as fetal maturation and maternal disease state have to be considered while planning intervention. Here, we discuss a case of suspected pancreatic neoplasm in a 21-year-old female, at 25+5 weeks of gestation, who presented with an acute abdomen and unstable vital signs. Case Presentation A 21-year-old gravida 2, para 1, at 25+5 weeks of gestation was referred to the emergency department of a university hospital with complaints of severe epigastric pain, vomiting, and decreased fetal movement for four days. She denied any history of trauma. Regarding menstrual history, she attained menarche at 12 years, and she had a regular 30-day cycle with bleeding lasting 3-4 days. Her past medical history was unremarkable. On examination, she was pale, severely dehydrated, tachycardiac (pulse 140/min), tachypneic (respiratory rate 35/min), hypotensive (blood pressure 100/60 mm Hg), and febrile (temperature 101°F). Her Glasgow Coma Scale (GCS) score was 15/15. Abdominal examination revealed a palpable epigastric mass that was firm, tender, and measured roughly 10 cm x 8 cm. The uterus was approximately 22 weeks in size. Two contractions, each lasting fifteen seconds, were noted in ten minutes. On auscultation, fetal heart sounds could not be appreciated. Per-vaginal examination revealed a midline cervix of medium consistency, 1 cm dilation, and 20 to 30% effacement with the presence of "show". Laboratory investigations revealed severe anemia (4.4 gm%), thrombocytopenia (33,000/cu. mm), deranged prothrombin time (PT 60 seconds) and international normalized ratio (INR 5), and hypokalemia (2.4 mEq/L). Amylase and lipase were within normal limits. The standard values of these lab parameters used in our hospital are presented at the end of the text. Ultrasonography (USG) showed intrauterine fetal death (IUFD) of 26-27 weeks of gestation, a complex mass in the epigastric region (retroperitoneal area), and gross hemoperitoneum. Computed tomography (CT) scan of the abdomen (Figures 1 and 2) showed a well-defined, round to oval, heterogeneously enhancing mass measuring 18 cm x 12.5 cm, originating from the body of the pancreas, suggestive of pancreatic neoplasm.
A significant amount of high-density collection in the peritoneal cavity was suggestive of hemoperitoneum. In the emergency room, intravenous lines were opened in both arms with wide-bore cannulas, and the patient was managed with intravenous normal saline, pantoprazole (40 mg), ondansetron (4 mg), and antibiotics (ceftriaxone 1 gm and metronidazole 500 mg). A multidisciplinary discussion with the surgical, radiological, and anesthetic teams was held. After reviewing the case, the team decided to transfer the patient to radiology a second time to perform a contrast-enhanced computed tomography (CECT) scan, identify the bleeding point, and attempt arterial embolization in the same setting. However, she could not undergo CECT because of her unstable hemodynamic status and was transferred to the surgical intensive care unit (SICU) for further management. On reevaluation in the SICU, she had hematuria, slight vaginal bleeding, and bleeding from the nasogastric tube in situ, and her blood pressure was 90/50 mm Hg. Per-vaginal examination showed progressive dilatation of the cervical os to 4 cm. Repeat evaluation of the coagulation profile demonstrated an INR of 20. The deranged coagulation profile and bleeding from multiple sites raised suspicion of IUFD-induced coagulopathy. In the meantime, three units of fresh frozen plasma, three units of whole blood (including one unit of fresh blood), and two units of platelet-rich plasma were transfused over six hours. However, her blood pressure continued to drop, and she was started on noradrenaline support (0.1 micrograms/kg/min). Her oxygen saturation continued to fall and eventually reached 52% in room air. Consequently, she was intubated. Repeat physical examinations were done at regular intervals to evaluate the clinical status of the patient and the progression of labor. Twenty-one hours after admission to the hospital, there was spontaneous expulsion of the dead fetus, weighing around 900 grams, with no gross anomalies, along with a complete and normal-looking placenta. Ten units of oxytocin were administered intravenously following delivery. There was no evidence of active vaginal bleeding postpartum. Yet her hemoglobin (6.4 gm%) and platelet count (28,000/cu. mm) remained low. The surgical team still planned to acquire sufficient information via CECT on the exact source of bleeding before shifting the patient to the operating room for laparotomy. However, multiple attempts to shift the patient to radiology were unsuccessful. Thirty hours after hospitalization, her blood pressure plummeted, and she went into asystole. She was managed with ten cycles of cardiopulmonary resuscitation (CPR), following which her blood pressure reverted to 84/46 mm Hg. Simultaneous inotrope and vasopressor support were barely helpful, as severe bradycardia ensued, followed by asystole. Two hours later, she was pronounced dead. The family members of the deceased did not provide consent for an autopsy. Thus, postmortem histopathological evaluation of the tumor could not be done, and in the end we were unable to obtain more details of the tumor or the bleeding site. Discussion In a review of over 4.8 million deliveries in California over nine years, Smith and colleagues (2003) noted the incidence of malignant neoplasms during pregnancy or in the subsequent 12 months to be only about 0.94/1000 live births. 1
Pancreatic neoplasms, though rare, can complicate pregnancy, as evidenced by the few cases reported in the literature [pancreatic adenocarcinoma (8), cystic pancreatic neoplasms (13), and pancreatic neuroendocrine tumors (3)]. [2][3][4][5] Mucinous cystic neoplasms and adenocarcinoma are the most common forms of pancreatic malignancy, with pancreatic neuroendocrine tumors accounting for less than 5% of all pancreatic tumors. 5,6 Patients with pancreatic neoplasms present with epigastric pain, postprandial fullness, a palpable abdominal mass, nausea, vomiting, diarrhea, steatorrhea, and/or weight loss. 7 These tumors may contain estrogen receptors and be sensitive to estrogen manipulation. This may cause the tumors to be more aggressive during pregnancy, with a higher chance of rupture and intraperitoneal bleeding, 3,4 which may explain the tumor bleeding in this case. Besides, symptoms like abdominal pain, nausea, vomiting, and backache may be attributed to the pregnancy itself and lead to a delay in diagnosis and eventual disease progression. Several other tumors in pregnancy can also present with symptoms that result in misdiagnosis. Pheochromocytoma or paraganglioma, which occur in 0.007% of all pregnancies, can present with new-onset hypertension, palpitations, diaphoresis, and headache, all of which may result in a misdiagnosis of hypertensive disorders of pregnancy. 8 Renal cell carcinoma often presents with vague and mild abdominal discomfort, which can be falsely labeled as pregnancy-related. 9 Pregnancy-related changes in the breast often overshadow the diagnosis of breast cancer and allow these cancers to remain unnoticed until the first postpartum year. 10 Cushing's syndrome due to adrenal adenoma in pregnancy shares many clinical signs and symptoms with normal pregnancy, such as weight gain, hypertension, hyperglycemia, and fatigue. 11 Imaging with USG, CT scan, and magnetic resonance imaging (MRI); tumor markers; and biopsy of the mass can help establish a diagnosis of pancreatic neoplasm. The lower risk of fetal radiation exposure compared to a CT scan, and the improved quality of imaging compared to transabdominal USG, make MRI the preferred imaging modality for deciding resectability in pregnant patients. 5 Though biopsy is an important diagnostic tool for tumors, some authors do not recommend it during pregnancy because of the potential risks of biopsy-related bleeding, rupture of the lesion, or peritoneal seeding of the biopsy material. 12 Acute hemoperitoneum is a rare obstetrical emergency, and its presence in combination with an epigastric mass is rarer still. The differentials are few. Pregnancy is associated with an increased risk of rupture of some liver lesions, hepatic adenoma being the most common benign mass. Pregnancy complicated by hepatic hemangioma; hemolysis, elevated liver enzymes, and low platelet count (HELLP) syndrome; or hepatocellular carcinoma may also result in spontaneous hepatic rupture and subsequent hemoperitoneum. 13 Another infrequent differential is an ectopic hepatic pregnancy, with a projected incidence of 1 in 10,000 to 25,000 live births; it typically presents with acute symptoms like abdominal pain and bleeding. Hemoperitoneum secondary to such a pregnancy may either be a prolonged and gradual ooze or a massive hemorrhage resulting in hypovolemic shock. Abdominal USG can identify the hepatic location of the pathology, and laparoscopic surgery can be an amenable option in the first trimester. 14
For any surgical emergency in pregnancy, the benefit of delaying surgery for fetal maturation must be balanced against the risk of maternal disease progression. Surgery in the first trimester may cause spontaneous abortion or poor fetal outcomes, including congenital anomalies. 5,15 Fetal organogenesis is complete once the pregnancy enters the second trimester, and the smaller size of the fetus allows for an easier surgical procedure as compared to the third trimester. 5,16,17 With a highly aggressive malignancy, the possibility of termination of pregnancy should be discussed with the patient in the first trimester itself so that further management can be pursued without delay. 2 Chemoradiation therapy has been proposed as another modality of management for resectable pancreatic cancer, in both adjuvant and neoadjuvant forms. 15 Non-obstetric surgery during pregnancy requires a multidisciplinary approach involving the obstetrician, general surgeon, anesthesiologist, and neonatologist for the best management. There are several indications for non-obstetric surgery during pregnancy, with appendicitis, cholecystitis, bowel obstruction, adnexal torsion, and trauma being the most common. 18 As in the presented case, tumors complicated by hemoperitoneum may also be a rare indication for surgery during pregnancy. The data concerning the effect of these surgeries on the developing fetus and the pregnancy have been conflicting. As mentioned previously, there is a potential increase in the risk of spontaneous abortion among patients undergoing general anesthesia and surgery. 5,19 Surgery can also increase the risk of complications like fetal hypoxia, infection, preterm labor, and cesarean delivery. 19,20 In our case, the patient initially presented with equivocal symptoms, which could have been part of her pregnancy. However, the patient had unstable vital signs, which called for urgent diagnosis. The results of the imaging studies alerted us to the presence of a pancreatic mass that had probably ruptured, resulting in hemoperitoneum and leaving the patient in shock. In a situation like this, a high degree of suspicion is required to make the diagnosis early in the course of the disease. Even in the presence of gross hemoperitoneum, we deferred exploratory laparotomy, intending to perform it after the expulsion of the dead fetus for fear of IUFD-induced coagulopathy. In retrospect, to avoid maternal morbidity and mortality, the best approach would have been to perform emergency laparotomy to evacuate the blood and control the bleeding site at an early stage of presentation, rather than waiting for the delivery of the fetus. Conclusion This case report highlights the fact that pancreatic neoplasms, being uncommon during pregnancy, may pose a diagnostic difficulty. Obstetricians should always keep the possibility of a ruptured pancreatic neoplasm in mind when encountering an acute abdomen in pregnancy. In pancreatic tumors complicating pregnancy, a prompt effort to stop the intraperitoneal bleeding must be strongly considered first, along with the use of critical resuscitative measures such as vasopressor and inotrope support. Data Sharing Statement Not applicable. Ethics Approval Not required. Consent for Participation and Publication Written informed consent was obtained from the patient's parents for the publication of this case report and any accompanying images. A copy can be made available for review upon request.
Author Contributions UJ and SRU compiled the case and drafted the initial manuscript. VA did the initial literature review. AR did the clinical diagnosis, investigations, and treatment of the case. All authors made substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; took part in drafting the article or revising it critically for important intellectual content; gave final approval of the version to be published; and agree to be accountable for all aspects of the work. Funding There is no funding to report.
2020-07-09T09:12:39.457Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "625d8d03ee2985ce3013d919300d4456b2732146", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=59540", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "89407d10bff23f71c7ca043129ff2a26466773eb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
24386934
pes2o/s2orc
v3-fos-license
Anticancer DNA vaccine based on human telomerase reverse transcriptase generates a strong and specific T cell immune response ABSTRACT Human telomerase reverse transcriptase (hTERT) is overexpressed in more than 85% of human cancers regardless of their cellular origin. As immunological tolerance to hTERT can be overcome not only spontaneously but also by vaccination, it represents a relevant universal tumor associated antigen (TAA). Indeed, hTERT specific cytotoxic T lymphocyte (CTL) precursors are present within the peripheral T-cell repertoire. Consequently, an hTERT vaccine represents an attractive candidate for antitumor immunotherapy. Here, an optimized DNA plasmid encoding an inactivated form of hTERT, named INVAC-1, was designed to trigger cellular immunity against tumors. Intradermal injection of INVAC-1 followed by electrogene transfer (EGT) in a variety of mouse models elicited broad hTERT specific cellular immune responses, including high CD4+ Th1 effector and memory CD8+ T-cell responses. Furthermore, therapeutic INVAC-1 immunization in an HLA-A2 spontaneous and aggressive mouse sarcoma model slowed tumor growth and increased survival, with 50% of tumor-bearing mice surviving. These results emphasize that INVAC-1 based immunotherapy represents a relevant cancer vaccine candidate. Introduction Among a wide range of tumor antigens, 1 hTERT is a very attractive and important target for anticancer immunotherapeutic approaches. Human TERT is the rate-limiting catalytic subunit of the telomerase complex, a ribonucleoprotein enzyme which synthesizes telomeric DNA at chromosome ends. 2 It prevents apoptosis through the telomere-dependent pathway, promoting cell immortality and effectively allowing cells to grow indefinitely. These properties of malignant cell transformation have been described as some of the hallmarks of cancer genesis. 3 Human TERT is overexpressed in the vast majority (80-90%) of cancer cells regardless of origin, 4 and its overexpression is associated with poor prognosis. 5 This overexpression, leading to the presentation of hTERT peptides at the surface of tumor cells, enables recognition and destruction of these tumor cells by a natural anti-hTERT immune response. Such a response has been highlighted in some cancer patients, revealing a natural break of self-tolerance. 6 Given its widespread expression in tumor cells, hTERT is considered to be a near universal TAA. 7 The crucial role of hTERT in oncogenesis justifies its use in clinical immunotherapy as a treatment for cancer. To date, different clinical approaches have been explored based on MHC class I or II restricted hTERT peptides, or on autologous antigen-presenting cells (APCs; dendritic cells or B lymphocytes) loaded with hTERT peptides or transduced with hTERT mRNA. 8 Unfortunately, these strategies suffer from some limitations and disadvantages 9 and have had limited clinical impact for most patients 7 despite the induction of specific immune responses. DNA vaccines offer numerous advantages, such as the flexibility to easily incorporate multiple genes which encode either full-length or partial tumor antigens and/or immunostimulatory molecules. 10 In addition, several add-on features have been explored, such as the use of strong viral promoters, codon-optimized genes, the addition of immunoglobulin sequences, or the enrichment of CpG motifs in the plasmid backbone. 11 On top of this, DNA is safe, stable, and easy to manufacture.
Despite these traits, initial DNA vaccination studies showed that DNA was generally poorly immunogenic, 12 especially when self-antigens were studied. Recent improvements in EGT, or DNA electroporation, such as the use of a sequence of high-voltage/low-voltage pulses, have greatly increased the efficiency of DNA vaccination. 13 There are now many ongoing clinical trials using DNA vaccination for cancer immunotherapy or against infectious diseases. 14 We report the development of INVAC-1, a DNA plasmid for cancer immunotherapy. It encodes a modified full-length hTERT protein devoid of telomerase catalytic activity fused to human ubiquitin. INVAC-1 triggered broad and strong hTERT specific cytotoxic and Th1 immune responses in different mouse strains immunized through the intradermal (ID) route combined with an optimized EGT protocol. It showed antitumor activity against a spontaneous and aggressive mouse sarcoma. These pharmacological data support the use of INVAC-1 in human clinical trials for the treatment of various types of cancers. Expression, subcellular localization and catalytic inactivity of INVAC-1 hTERT protein The INVAC-1 DNA construct carries a nine-bp deletion removing three critical amino acids (867-869, Valine-Aspartic Acid-Aspartic Acid or VDD) in the catalytic site. These residues are highly conserved across polymerases, and mutation of any of them results in catalytic inactivation. 15 The nucleolar localization sequence (NoLS) at the N-terminal region (residues 1-47) was deleted and replaced by a complete 76-residue human ubiquitin (Ubi, MW = 8.4 kDa) moiety. Ubiquitin-fused proteins enter a rapid proteasome-dependent degradation pathway, leading to enhanced MHC class I peptide presentation and improved specific immune responses to a variety of antigens. 16 Telomerase expression in HEK293T cells was assessed by protein gel blotting (Fig. 1B). The INVAC-1 construct expressed 2 distinct hTERT specific proteins: a weak upper band corresponding to the Ubi-hTERT fusion protein at the predicted size of 127.4 kDa, and a major lower band probably corresponding to INVAC-1 hTERT protein lacking the ubiquitin sequence (119 kDa). By contrast, wild-type hTERT protein was detected at 124.5 kDa, as expected. The INVAC-1 hTERT protein signal disappeared over time, indicating that it was rapidly degraded, likely through the ubiquitin-dependent proteasome pathway, in contrast to wild-type hTERT protein, which was stable out to 96 h (Fig. 1B). The impact of the NoLS deletion on the cellular localization was assessed in transfected QT6 cells, which do not express cross-reacting telomerase antigens, the avian and human enzymes sharing ≈50% amino acid homology. INVAC-1 hTERT protein was localized in the cytoplasm, both at 24 h (Fig. 1C) and 48 h (data not shown), while wild-type hTERT was mainly detected in the nucleus and nucleolus, showing that the deletion of the nucleolar localization signal drastically altered the subcellular distribution. In order to confirm the lack of INVAC-1 hTERT activity, a critical requirement for any human trial, telomerase activity was determined in transfected CrFK cells using a telomeric repeat amplification protocol (TRAP) assay. Relative telomerase activity (RTA) data showed that INVAC-1 hTERT protein was completely devoid of any telomerase activity in comparison to the wild-type hTERT control (Fig. 1D).
Taken together, these results demonstrate that INVAC-1 hTERT protein displayed the characteristics and properties expected given the modifications engineered into the INVAC-1 plasmid construct.

INVAC-1 induces broad and strong hTERT specific T-cell responses

CD8+ T-cells are known to be essential effectors involved in the elimination of tumor cells, while CD4+ T-cells play a major role in orchestrating the global antitumor response. 17,18 To evaluate whether DNA vaccination with INVAC-1 could trigger specific CD8+/CD4+ T-cell-mediated immune responses, C57BL/6 (H2-b) and BALB/c (H2-d) mice and Tg mice (HLA-B7 and HLA-A2/DR1) were immunized with INVAC-1. Fourteen days later, specific immune responses in the spleen were monitored via an IFNγ ELISpot assay using specific hTERT peptides. As shown in Fig. 2A, INVAC-1 immunized mice generated a significantly higher hTERT specific CD8+ T-cell response compared to control mice of any strain (p < 0.01). In the same way, a significant CD4+ T-cell response restricted to HLA-DRB1 was detected in immunized HLA-A2/DR1 mice in comparison to controls (p < 0.01). CD4+ T helper 1 (Th1) cells secreting several cytokines such as IFNγ, TNF and IL-2 are particularly important for the induction of efficient cell-mediated immunity against tumors. Consequently, the polarization profile of hTERT specific CD4+ T-cells induced in HLA-A2/DR1 mice after INVAC-1 vaccination was investigated using a cytokine binding assay. Significant concentrations not only of the Th1 cytokines IL-2, TNF and IFNγ (Fig. 2B), but also of IL-6, were detected in supernatants from immunized mice in comparison with control mice (p < 0.05). Thus, INVAC-1 vaccination is able to promote the expansion of hTERT specific CD8+ T-cells and of specific CD4+ T-cells with a predominant Th1 profile.

Prime-boost vaccination enhanced, shortened and broadened hTERT specific responses

Most vaccination protocols are based on a prime-boost regimen in order to improve the frequency of vaccine specific immune responses. Consequently, the impact of prime-boost vaccination on the generation of hTERT specific CD8+ T-cell responses was evaluated in C57BL/6 mice immunized four times with INVAC-1. The hTERT specific CD8+ T-cell response was monitored in PBMCs over time by an IFNγ ELISpot assay using H2-Kb/Db restricted hTERT peptides (Fig. 3A). hTERT specific CD8+ T-cell responses were observed 14 d post-priming (mean # spots: 38, p < 0.001). The second vaccination (first boost) shortened the time needed to generate hTERT specific immune responses (10 d after the boost), although this response was not higher (mean # spots: 40, p < 0.001). Interestingly, this immune response was long lasting, since it was still detected at D118 (mean # spots: 9), suggesting the establishment of a memory response. As is often observed following CD8+ T-cell activation and expansion, a contraction phase occurs before the generation of a stable memory population. 19 To test this hypothesis, mice were vaccinated twice more at D123 and D144. As expected, responses were faster and higher (at D154, mean # spots: 96, p < 0.001), confirming the establishment of a hTERT specific CD8+ memory response. The breadth of the hTERT specific immune response was studied in a homologous prime-boost approach in HLA-B7 mice immunized with INVAC-1 at D0 and D21. At D31, this immune response was analyzed by an IFNγ ELISpot assay using 13 pools of overlapping peptides spanning the entire hTERT sequence.
As shown in Fig. 3B, INVAC-1 immunization induced a broad repertoire of T-cells against numerous hTERT epitopes (at least 13 epitopes), as all peptide pools were able to stimulate T-cells.

Generation of hTERT specific cytotoxic CD8+ T cell responses

Having established that the INVAC-1 DNA vaccine was able to induce CD4+ and CD8+-mediated immunity, it was essential to establish that CD8+ T-cells displayed cytotoxic activity. Toward this end, we focused on granzyme B (GrB), a key mediator of target cell death secreted by T-cells. GrB secretion was evaluated using an ELISpot assay in HLA-B7 Tg mice immunized twice. As shown in Fig. 4A, a higher frequency of hTERT specific CD8+ T-cells secreting GrB was detected in splenocytes from INVAC-1 immunized mice as compared to controls (p < 0.01). In a second step, the in vivo cytotoxic activity was evaluated by flow cytometry using CFSE-labeled and peptide-pulsed splenocytes in mice. To this end, C57BL/6 mice were immunized twice and the specific killing activity was evaluated 10 d post-vaccination. There was a decrease of highly labeled CFSE cells pulsed with the immunodominant peptide p660 in INVAC-1 immunized mice as compared to controls. There was also a slight decrease of medium-labeled CFSE cells pulsed with the subdominant peptide p1034 (Fig. 4B). Approximately 50% of p660-pulsed cells and 14% of p1034-pulsed cells were killed in INVAC-1 immunized mice (Fig. 4C). CTL infiltration in tumors is of paramount importance for the efficiency of cancer immunotherapy approaches. Therefore, the infiltration of hTERT specific CD8+ T-cells in tumors was assessed. TC-1 bearing mice were vaccinated at D5 (palpable tumors) and sacrificed at D19, and splenocytes, tumor draining lymph node cells and tumor infiltrating lymphocytes were isolated and assessed with an IFNγ ELISpot assay using H2-Kb/Db restricted peptides. As shown in Fig. 4D, hTERT specific CD8+ T-cells were significantly detected in all samples from INVAC-1 immunized mice as compared to controls (p < 0.01), suggesting the capacity of these cells to circulate and migrate into the tumor. Hence, INVAC-1 immunization generated hTERT specific CD8+ T cells that exhibited in vivo cytotoxic activity, probably through the GrB mediated pathway, and that can infiltrate the tumor.

Figure 2 legend (partial): (A) [...], BALB/c mice (6-7 mice per group), HLA-B7 mice (4-6 mice per group) and HLA-A2/DR1 mice (5-6 mice per group) were immunized once. Fourteen days later, an IFNγ ELISpot assay was performed with splenocytes stimulated with a pool of hTERT restricted peptides according to mouse MHC. IFNγ hTERT specific CD8+ or CD4+ T-cells/200,000 splenocytes are represented as mean ± SD. Mann-Whitney non-parametric test against control mice, **p < 0.01. (B) HLA-A2/DR1 mice (3-5 mice per group) were immunized twice (prime-boost) with INVAC-1 (D0 and D21). At D31, splenocytes were Ficoll-purified and stimulated with a pool of 3 hTERT specific peptides restricted to HLA-DRB1. Supernatants from stimulated cells were recovered and tested in a cytokine binding assay in order to evaluate the concentrations of Th1, Th2 and Th17 cytokines secreted by hTERT specific CD4+ T-cells. Cytokine concentrations in pg/mL are represented as mean ± SD. Mann-Whitney non-parametric test against control mice, *p < 0.05.

Discussion

Human TERT is considered a near universal tumor antigen for immunotherapy approaches and thus represents a relevant target for such therapeutic strategies.
Indeed, it has been demonstrated that hTERT specific T-cell responses can be induced in vitro in HLA-A*0201 patients with prostate cancer as well as in healthy donors, suggesting the existence of T-cell precursors for hTERT, which is the basis for breaking tolerance. 6 In addition, previous clinical vaccination strategies using the telomerase antigen as target, such as the GV1001 peptide, showed induction of a robust hTERT specific immune response in pancreatic cancer and NSCLC, with clinical benefits for responding patients. 8 Many groups, such as Godet et al., have demonstrated natural anti-hTERT CTL and CD4+ T-cell responses. 20 These findings justified our choice to develop INVAC-1, a hTERT DNA vaccine for cancer immunotherapy. INVAC-1 incorporates a number of safety features. Deletion of the catalytic VDD triplet ensured an inactive form of the protein, which was essential given that hTERT plays a critical role in the tumorigenesis process. The NoLS deletion led to a drastic alteration of its subcellular distribution. In addition, this modified protein was fused to ubiquitin, taking into account the so-called N-end rule (destabilizing residue at the N-terminal position), to orient its degradation through the ubiquitin-dependent proteasome pathway. Protein expression assays showed that INVAC-1 hTERT was rapidly degraded by the proteasome, in contrast with wild-type hTERT. Proteasome targeting systems such as ubiquitin fusion and the N-end rule have been shown to play a key role in efficient proteasomal degradation, and subsequently in antigen presentation through the MHC class I pathway. 21 Indeed, ubiquitin-fused DNA vaccines have shown a significant improvement of the antigen-specific cellular immune response. 16 Several studies were carried out to optimize the immunization procedure for INVAC-1 vaccine delivery (data not shown), especially the EGT process. 13 DNA vaccination through the ID route has been shown to be more efficient than intramuscular or subcutaneous routes of immunization. 22 Indeed, skin is readily accessible, presents a large surface for vaccination and constitutes a relevant immunogenic tissue due to a rich network of immunocompetent APCs such as Langerhans cells and dermal DCs. 23 To enhance the immune response elicited by INVAC-1 vaccination, we developed EGT using skin-specific parameters to be used in humans with the CLINIPORATOR®2 (IGEA, Italy). 13 Recent improvements in EGT technology have reemphasized the interest in DNA vaccination. 10 Indeed, EGT was shown to induce a robust antigen-specific T-cell response in HPV-infected women. 24 This combination is a relevant alternative to peptide and viral vector-based immunotherapy vaccine strategies, since EGT enhances DNA uptake and improves peptide presentation through MHC molecules by antigen presenting cells in vivo. 25 Furthermore, DNA is suitable for the expression of large proteins such as hTERT, from which peptides may be presented by various MHC molecules matching the majority of HLA haplotypes in the human population. 9 Numerous studies have demonstrated that CD4+ Th1 and CD8+ immune responses elicited by DNA vaccination are crucial to mediate efficient antitumor immune responses. 10 INVAC-1 immunization induced high frequencies of hTERT specific CD4+ and CD8+ T-cells producing IFNγ in different mouse strains. These results confirmed that the hTERT antigen was well processed in vivo through both endogenous and exogenous pathways, allowing peptide epitope presentation on numerous MHC class I and class II molecules.
We have also shown that hTERT specific CD4+ T-cells elicited by INVAC-1 presented a Th1 polarization profile. These results are consistent with other studies demonstrating that antigen specific CD8+ T-cell priming and expansion require the help of antigen specific CD4+ Th1 cells. 26 Similarly, Antony et al. demonstrated that the maintenance of antigen specific CD8+ T-cells was also dependent on IL-2-secreting CD4+ Th1 cells. Furthermore, the secretion of IL-2 supports the establishment of a memory response, which has been shown to be important for improving the frequency of specific T-cells. 27 In this study, we demonstrated that a homologous prime-boost regimen amplified hTERT specific CD8+ T-cells by generating a large number of secondary hTERT specific responses and by rapidly expanding the existing antigen specific memory T-cells upon a second encounter with the same antigen. The broad response induced by INVAC-1 against numerous hTERT epitopes spanning the entire protein confirmed the advantage of using the full-length antigen as compared to the individual epitopes used in peptide vaccine development. 28 By encoding the full-length protein, the whole MHC diversity among the human population is expected to be covered, while tumor escape mechanisms should be limited. Indeed, during tumor immunoediting, MHC expression or presentation of tumor antigens decreases on cells, leading to the generation of variants resistant to immune effector cells. 29 In some cases, the lack of a consistent correlation between the magnitude of the antigen-specific T-cell response and the control of tumor growth 17,18 indicates that, rather than a large quantity of specific T-cells, it is necessary to generate polyfunctional effector T-cells. 19 Such quality responses depend on the cells' ability to proliferate, migrate, coordinate the immune response and carry out effector functions by directly killing tumor cells through cytotoxic mechanisms or secretion of cytokines. 30 hTERT specific CD8+ T-cells induced by INVAC-1 have the capacity to secrete GrB and kill target cells in vivo, as well as to produce IFNγ and TNF (data not shown). These characteristics highlight the quality of the hTERT specific CD8+ T-cells induced by INVAC-1. Another important consideration is that effector T-cells must efficiently migrate to the tumor microenvironment in order to control malignant progression. 31 Tumor-infiltrating lymphocytes (TILs) secreting perforin and Th1/CTL cytokines with lytic potential have been shown to improve clinical outcomes by inhibiting tumor growth or tumor recurrence in multiple human cancers. 32,33 In the present study, the presence of circulating and tumor-infiltrating hTERT specific CD8+ T-cells with functional characteristics was observed. These results showed that these cells are able to reach primary tumors and probably disseminated metastases. Other investigations, such as phenotypic characterization or cytotoxic strength, need to be further assessed in order to evaluate their functional status. Indeed, Appay et al. studied tumor-infiltrating lymphocytes in s.c. lesions obtained from two vaccinated patients. They found that Melan-A-specific CD8+ T-cells infiltrating the tumor, although activated, displayed suboptimal functional capacities compared with circulating Melan-A-specific CTLs. 34 Therapeutic vaccination with INVAC-1 delayed the growth of Sarc-T2r tumors in HLA-A2/DR1 transgenic mice expressing human MHC class I and class II.
This antitumor effect is likely related to the hTERT polyfunctional T-cells induced by INVAC-1. Indeed, the overall amino acid identity between mTERT and hTERT is 64%. 35 Moreover, it has been demonstrated that TERT peptides restricted by HLA-A2 can be endogenously processed and presented by human and murine tumor cells, which in turn are recognized by specific CTLs. 36,37 Taken together, these observations suggest that hTERT specific T-cells induced by INVAC-1 could recognize mTERT peptides presented by MHC class I HLA-A2 expressed on the Sarc-T2r cell surface. Other studies showed that hTERT DNA vaccines induced similar results against TS/A murine breast cancer 38 or murine lung cancer 39 in syngeneic tumor models. Tumors often generate systemic immune-suppressed or tolerogenic states which reduce the efficacy of immunotherapy. 40 FoxP3+ CD4+ T-cells (Tregs) and myeloid-derived suppressor cells in Sarc-T2r tumors, as well as a downregulation of MHC I expression by tumor cells, were evidenced in INVAC-1-immunized mice (data not shown). These components of tumor escape mechanisms could prevent complete tumor responses in the Sarc-T2r tumor model. In order to circumvent these hurdles, combination strategies need to be considered. Numerous studies have shown synergistic antitumor effects by combining active immunotherapy with other therapies aimed at either reducing the bulk of tumors (such as chemotherapy or radiotherapy) or potentiating the immune response with drugs directed against immune checkpoints (such as anti-CTLA-4 or anti-PD-1 antibodies). For example, it has been shown that a peptide-based vaccine combined with an anti-PD-1 antibody and a low dose of cyclophosphamide induced synergistic antigen-specific immune responses. 41 Likewise, in colon cancer patients, co-administration of a platin-based chemotherapy (oxaliplatin) and carcinoembryonic antigen peptide-pulsed DCs induced a higher level of immune response in patients receiving chemotherapy as compared to patients who did not. 42 In conclusion, the pharmacological results presented in this report showed that treatment with INVAC-1 induced reliable antitumor immunity and a moderate survival advantage in murine models. Additional data from safety pharmacology, toxicology and biodistribution studies showed that treatment with INVAC-1 was safe and well tolerated. Taken together, these data were considered adequate and reliable to support a First-In-Human phase I study by the French competent authorities. The clinical study, evaluating INVAC-1 in patients with advanced cancer, is currently ongoing in two clinical centers (https://clinicaltrials.gov/).

Materials and methods

INVAC-1 carries a nine-bp deletion that removes three crucial amino acid residues from the catalytic site, along with a deletion of the first 47 residues encoding the NoLS, 43 which was replaced by the ubiquitin polypeptide (Ubi; 76 aa) according to the ubiquitin-fusion approach 44 (Fig. 1A). The modified hTERT sequence was de novo synthesized by GeneCust (Luxembourg) and subcloned into the NTC8685-eRNA41H-HindIII-XbaI vector backbone designed by Nature Technology Corporation (Lincoln, Nebraska). 45 The INVAC-1 plasmid was amplified through an antibiotic-free selection procedure in NTC4862 E. coli cells 46 (DH5a attλ::P5/6 6/68-RNA-IN-SacB, catR). GLP and GMP batches of INVAC-1 were manufactured by Eurogentec (Belgium) and used at a final concentration of 2 mg/mL.
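As a quick sanity check of the construct description above, deleting amino acids 867-869 corresponds to exactly nine coding nucleotides. The short Python sketch below shows this arithmetic; the coordinates are relative to the wild-type hTERT open reading frame (codon 1 = nucleotides 1-3) and are purely illustrative, not an annotated INVAC-1 sequence map.

```python
# Illustrative arithmetic only: mapping the VDD deletion (residues 867-869)
# onto coding nucleotides of the wild-type hTERT ORF. Not an INVAC-1 map.
first_res, last_res = 867, 869
nt_start = (first_res - 1) * 3 + 1   # first nucleotide of codon 867
nt_end = last_res * 3                # last nucleotide of codon 869
print(nt_start, nt_end, nt_end - nt_start + 1)  # -> 2599 2607 9 (a 9 bp deletion)
```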
In vitro characterization of INVAC-1 hTERT protein

To assess protein expression, HEK293T cells were transfected either with INVAC-1, with the empty NTC vector as negative control, or with a vector expressing wild-type hTERT (pTRIP-CMV-hTERT) as positive control, using jetPrime transfection reagent (Polyplus-transfection Inc., France). Cells were harvested from 18 to 96 h post-transfection, lysed in a specific RIPA buffer (Sigma-Aldrich, St. Louis, USA), and expression was assessed by Western blotting. hTERT proteins were detected using a primary rabbit monoclonal anti-hTERT antibody (Abcam, Cambridge, UK) followed by a secondary goat anti-rabbit antibody-horseradish peroxidase (HRP) conjugate (Cell Signaling, Danvers, USA). β-actin protein was used as loading control. Peroxidase activity was detected using a chemiluminescence ECL HRP substrate reagent kit (GE Healthcare, Buckinghamshire, UK). For sub-cellular localization, QT6 cells were transfected for 24 h using Fugene HD transfection reagent (Promega, Charbonnières-les-Bains, France). After fixation, permeabilization and blocking steps, cells were incubated with a rabbit monoclonal anti-hTERT antibody for 1.5 h, followed by a goat anti-rabbit antibody-Alexa Fluor 488® conjugate (Life Technologies, Saint-Aubin, France) for 45 min at room temperature. After washes, samples were mounted in DAPI-containing mounting medium (VECTASHIELD®). Slides were analyzed by fluorescence microscopy (Axio Observer Z1 and Axiovision, Carl Zeiss MicroImaging GmbH). In vitro telomerase activity was assessed on total cell protein extracts from CrFK cells transfected for 24 h with DNA plasmids, using the TeloTAGGG Telomerase PCR ELISA PLUS kit according to the manufacturer's instructions (Roche Diagnostics GmbH, Mannheim, Germany). Briefly, protein extracts were used to evaluate the telomerase-mediated elongation of telomeric sequences. Products were amplified by PCR (30 cycles) using biotinylated primers. PCR amplification products were transferred to a streptavidin pre-coated microplate, incubated with an anti-digoxigenin HRP-linked antibody and revealed using TMB substrate. Absorbance was measured against a blank at 450 nm to determine the level of telomerase activity in each sample. The RTA was obtained using the following formula: RTA (%) = [(AS − AS0)/AS,IS] / [(ATS8 − ATS8,0)/ATS8,IS] × 100, where AS: sample absorbance; AS0: heat-treated sample absorbance; AS,IS: internal standard sample absorbance; ATS8: control template absorbance; ATS8,0: lysis buffer (TS8) absorbance; ATS8,IS: TS8 IS absorbance.
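A minimal sketch of the RTA calculation as reconstructed above from the kit's terms; all absorbance values in the example are hypothetical placeholders, not measurements from this study.

```python
# Minimal sketch of the relative telomerase activity (RTA) formula
# reconstructed above. Absorbances are illustrative placeholders.
def relative_telomerase_activity(a_s, a_s0, a_s_is, a_ts8, a_ts8_0, a_ts8_is):
    """RTA (%) = [(AS - AS0)/AS,IS] / [(ATS8 - ATS8,0)/ATS8,IS] * 100."""
    sample = (a_s - a_s0) / a_s_is
    standard = (a_ts8 - a_ts8_0) / a_ts8_is
    return sample / standard * 100

# Wild-type hTERT-transfected extract (hypothetical values): high RTA
print(relative_telomerase_activity(1.20, 0.05, 0.90, 0.80, 0.04, 0.85))
# INVAC-1-transfected extract: near-zero elongation signal -> RTA close to 0
print(relative_telomerase_activity(0.06, 0.05, 0.90, 0.80, 0.04, 0.85))
```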
Immunization procedure

Female 6-week-old C57BL/6JRj and BALB/cJRj mice were purchased from Janvier laboratories (Saint-Berthevin, France). HLA-B7 and HLA-A2/DR1 transgenic (Tg) mice have been previously described 47,48 and were provided by the Institut Pasteur animal breeding facility. All experiments were conducted in strict accordance with the ethical guidelines and good animal practices of the European Committee (Directive 2010/63/EU) and were approved by the registered CETEA committee on ethics in animal experimentation of the Institut Pasteur (no. 89) under the reference 2013-0026. Mice were immunized by intradermal (ID) injection with 100 µg of INVAC-1, or with NTC empty plasmid or PBS as controls, in 50 µL (bilateral injections of 25 µL) at the base of the tail. Immediately after ID injection, EGT was performed using the CLINIPORATOR®2 (IGEA, Carpi, Italy); one high-voltage (HV) pulse (100 µs duration; 1,250 V/cm) followed 1,000 ms later by one low-voltage (LV) pulse (400 ms duration; 180 V/cm) was applied with non-invasive plate electrodes (P-30-8G, IGEA). According to the experiment and vaccine regimen, mice could receive several administrations of DNA or control.

IFNγ and granzyme B ELISpot assays

Murine IFNγ and granzyme B ELISpot kits were purchased from Diaclone (Eurobio, Courtaboeuf, France) and R&D Systems (Bio-Techne, Lille, France), respectively. They were used with Ficoll-purified lymphocytes from peripheral blood, spleen, tumor or tumor draining lymph nodes, following the manufacturers' instructions. Briefly, cells were stimulated in triplicate at 2 × 10^5 cells/well with pools of restricted hTERT peptides at 5 µg/mL. Serum-free medium and phorbol-12-myristate-13-acetate (PMA)-ionomycin were used as negative and positive controls, respectively. Spots were counted using the ImmunoSpot ELISpot counter and software (Cellular Technology Limited, Bonn, Germany).

In vivo cytotoxicity assay

The capacity of CD8+ cytotoxic T-cells to kill peptide-loaded target cells in vivo was assessed as described previously. 51 Briefly, splenocytes from naive C57BL/6 mice were split into three equal parts, and each part was stained with carboxyfluorescein diacetate succinimidyl ester (CFSE) at 5 µM (high concentration), 1 µM (medium) or 0.2 µM (low). Subsequently, CFSE-high-labeled cells were pulsed with the immunodominant hTERT p660 peptide and CFSE-medium-labeled cells were pulsed with the subdominant hTERT p1034 peptide for 1.5 h, whereas CFSE-low-labeled cells were left unpulsed. Cells were mixed in a 1:1:1 ratio and 6 × 10^6 cells were injected i.v. in 50 µL of PBS into control or INVAC-1 immunized mice 10 d after the second immunization. Fifteen hours later, single-cell suspensions from spleens were analyzed on a MACSQuant® flow cytometer (Miltenyi, Germany). The percentage of specific killing was determined as follows: % specific killing = [1 − (mean %CFSE-low/%CFSE-high or medium)control / (%CFSE-low/%CFSE-high or medium)immunized] × 100.

In vivo antitumor assessment

For the therapeutic vaccination experiment, 8- to 18-week-old HLA-A2/DR1 mice were subcutaneously engrafted with 2 × 10^5 Sarc-T2r cells on the right abdominal flank. When tumors were palpable, animals were randomized and immunized with INVAC-1 at D7, D14 and D21 post-engraftment, or left untreated. Twice a week, tumor growth was monitored using a caliper and mouse weight was individually followed. Mice were euthanized when tumors reached 2,000 mm³. The guidelines for the welfare and use of animals in cancer research were followed, especially for the monitoring of clinical signs necessitating immediate intervention. 52 Tumor volume was calculated using the following formula: (L × l²)/2 (L = length; l = width). Results are expressed in mm³.

Statistical analysis

Statistical analyses were performed by a two-tailed Mann-Whitney non-parametric test using GraphPad Prism 6.0 software (GraphPad Software Inc., USA). p values < 0.05 were considered significant. Thirteen pools of INVAC-1 hTERT peptides spanning the whole protein were used to assess the breadth of the immune response; each pool was composed of 10 15-mer peptides overlapping by 11 amino acids.
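For illustration, the two formulas from the methods above (in vivo specific killing from CFSE ratios and the caliper-based tumor volume) can be written as a short Python sketch; all input values are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the two formulas in the methods above. Numbers are
# illustrative placeholders, not data from the study.
def percent_specific_killing(pct_low_ctrl, pct_pulsed_ctrl,
                             pct_low_imm, pct_pulsed_imm):
    """[1 - (%CFSE_low/%CFSE_pulsed)_control / (%CFSE_low/%CFSE_pulsed)_immunized] x 100."""
    ratio_control = pct_low_ctrl / pct_pulsed_ctrl
    ratio_immunized = pct_low_imm / pct_pulsed_imm
    return (1 - ratio_control / ratio_immunized) * 100

def tumor_volume_mm3(length_mm, width_mm):
    """V = (L x l^2) / 2, the ellipsoid approximation used above."""
    return length_mm * width_mm ** 2 / 2

print(percent_specific_killing(33.0, 33.0, 45.0, 22.5))  # -> 50.0 (% killed)
print(tumor_volume_mm3(20.0, 14.0))  # -> 1960.0, near the 2,000 mm3 endpoint
```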
Japanese and Bohemian Knotweeds as Sustainable Sources of Carotenoids

Japanese knotweed (Fallopia japonica Houtt.) and Bohemian knotweed (Fallopia x bohemica) are invasive alien plant species, causing great global ecological and economic damage. Mechanical excavation of plant material represents an effective containment method, but it is not economically and environmentally sustainable, as it produces an excessive amount of waste. Thus, practical uses of these plants are actively being sought. In this study, we explored the carotenoid profiles and carotenoid content of mature (green) and senescing leaves of both knotweeds. Both plants showed similar pigment profiles. By means of high performance thin-layer chromatography with densitometry and high performance liquid chromatography coupled to photodiode array and mass spectrometric detectors, 11 carotenoids (and their derivatives) and 4 chlorophylls were identified in green leaves, whereas 16 distinct carotenoids (free carotenoids and xanthophyll esters) were found in senescing leaves. Total carotenoid content in green leaves of Japanese knotweed and Bohemian knotweed (378 and 260 mg of lutein equivalent (LE)/100 g dry weight (DW), respectively) was comparable to that of spinach (384 mg LE/100 g DW), a well-known rich source of carotenoids. A much lower total carotenoid content was found for senescing leaves of Japanese and Bohemian knotweed (67 and 70 mg LE/100 g DW, respectively). Thus, green leaves of both studied knotweeds represent a rich and sustainable natural source of bioactive carotenoids. Exploitation of these invaders for the production of high value-added products should consequently promote their mechanical control.

Introduction

Japanese (Fallopia japonica Houtt.) and Bohemian (Fallopia x bohemica) knotweeds are large herbaceous plants which belong to the Polygonaceae family [1]. In the 19th century, Japanese knotweed was brought from Asia to Europe, primarily for ornamental purposes. Bohemian knotweed emerged thereafter as a hybrid between Japanese knotweed and Giant knotweed (Fallopia sachalinensis), another member of Fallopia spp. [1,2]. Due to their fast spread and strong resilience to extermination, knotweeds were soon classified as aggressive alien plant invaders [1,2]. Today, they pose a great economic and environmental threat, as they cause loss of native biodiversity, affect agriculture and forestry, and endanger certain animal species [2,3]. Invasive knotweeds thrive alongside water bodies and their banks, and can raise problems with water quality, accessibility, and flow rate [3,4]. These plants are also found in urban areas, on roads and railways, where they cause significant structural damage to pavements, buildings, and traffic infrastructure [4]. Bohemian knotweed is known to be more vigorous and persistent than its parents and is one of the most invasive plants in Europe [4].

Figure 1. Chromatograms of leaf extracts on C18 HPTLC silica gel plates before (A) and after (B) exposure to HCl. Plates were developed with acetone:methanol (1:1 v/v) + 0.1% TBHQ in a saturated twin trough chamber and the images were acquired using white light in transmission mode. Plates were predeveloped with MeOH:dichloromethane 3:1 (v/v).
Tracks: Yellow leaves of Bohemian knotweed (1), green leaves of Bohemian knotweed (2), yellow leaves of Japanese knotweed (3), green leaves of Japanese knotweed (4), spinach leaves (5), green-yellowish leaves of Japanese knotweed (6), and standard solution mix of (all-trans)-lutein (200 ng), (all-trans)-zeaxanthin (200 ng), and (all-trans)-β-carotene (500 ng) (7).

Japanese and Bohemian knotweed green leaf extracts and spinach leaf extract showed similar chromatographic profiles, but unlike the latter, knotweeds contained some additional yellow-colored pigments in the RF region 0.84-1 (Figure 1A).
These were tentatively identified as phenolic compounds by comparison of HPTLC-MS2 and HPTLC-MS3 spectra with literature data [26]. Main yellow pigments of knotweeds were identified by standard co-migration as (all-trans)-β-carotene (RF = 0.16) and (all-trans)-lutein and/or (all-trans)-zeaxanthin (RF = 0.55). As expected, green bands representing chlorophyll a/a' and chlorophyll b/b' (Figure 1A, tracks 2, 4, and 5; RF region 0.30-0.45) were absent in both yellow (senescing) Japanese and Bohemian knotweed leaf extracts. However, additional yellow bands appeared in the non-polar RF region 0.02-0.15 (Figure 1A, tracks 1 and 3). These were assumed to be carotenes or xanthophyll esters. A chemical profile transition between green- and yellow-colored plant leaves was evident from the chromatogram of the green-yellowish leaf extract, which contained compounds of green as well as of yellow leaf extracts (Figure 1A, track 6). To further identify carotenoid constituents of knotweeds, the developed HPTLC plate was exposed to HCl vapor (Figure 1B). A change in band color from yellow or orange to blue or green in the presence of a strong acid is generally indicative of an epoxide functional group within the carotenoid structure. A bathochromic shift in the absorption spectrum is a result of the formation of an oxonium ion at the terminal end of the carotenoid-conjugated double bond chain [27,28]. Thus, diepoxides give a deep blue color and monoepoxides give a green-blue color [29]. In chromatograms of green leaf extracts of both knotweeds, the blue coloration of TLC bands indicated two epoxide carotenoids at RF = 0.65 and 0.73 (Figure 1B, tracks 2, 4, and 5). In senescing leaves, highly hydrophobic carotenoids with RF < 0.15 were identified as epoxides as well. A drop in absorbance for (all-trans)-lutein/zeaxanthin (RF = 0.55) and (all-trans)-β-carotene (RF = 0.16) was observed after HCl exposure, but there were no significant changes in the measured absorption spectra in the range 190-800 nm, despite an appreciable darkening of those bands (Figure 1B, tracks 2, 4, and 5). Under strongly acidic conditions such as those used here, carotenoids are prone to isomerization and degradation, or may even react with co-migrating compounds and plate impurities, which could explain the reduced absorption and changes in band color. HPTLC-MS analysis of bands immediately after plate development was also attempted, but co-migration of compounds and stationary phase additives produced a high MS background, which, for the most part, impeded carotenoid structural elucidation and quantitation. In order to gain extra separation resolution, knotweed leaf extracts were analyzed by HPLC-PDA-MS. A ProntoSIL C30 column was used because it enables differentiation even between all-trans and cis isomers of pigments [30,31]. Analytes were identified by comparing their absorption and MSn spectra with literature data, by co-elution with standards, and by the relative hydrophobicity of eluted compounds. Again, chromatographic profiles of spinach and knotweed green leaf extracts were very similar and less complex in comparison to yellow leaf extracts (Figure 2, Table 1). all-trans-Lutein, all-trans-β-carotene, neoxanthin, and violaxanthin were identified as the major carotenoid constituents of green leaf extracts of Japanese and Bohemian knotweeds, representing roughly 86% and 82%, respectively, of the total carotenoid content (Table 2).
Luteoxanthin, all-trans-zeaxanthin, antheraxanthin, and (9-cis)- and (13-cis)-β-carotene were found at lower levels. Luteoxanthin was differentiated from its structural isomers violaxanthin and neoxanthin by differences in absorption spectra; with a shorter conjugated chain of double bonds in its structure, the absorption maximum of luteoxanthin was blue-shifted by 20-25 nm compared to the other two epoxycarotenoids. Geometric isomers of β-carotene were confirmed by a controlled isomerization of the all-trans form with iodine [32]. Moreover, both (9-cis)- and (13-cis)-β-carotene showed a small but characteristic hypsochromic spectral shift of a few nanometers and the appearance of the cis absorption peak in the 330-338 nm region. The green-yellowish leaf extract of Japanese knotweed showed a transition between the "all-green" and "all-yellow" leaf chromatographic profiles (Figure 2E). The concentration of free carotenoids diminished (most profoundly for the epoxide violaxanthin), while carotenoid esters appeared. In yellow senescing leaves, free carotenoid content was even lower and amounted to less than 67% and 42% of the total pigments for Bohemian and Japanese knotweed, respectively. The remainder of the pigments was in the form of esters (Table 1), which contained an epoxide carotenoid at the structural core (mainly violaxanthin). There was one exception, tentatively identified either as zeinoxanthin palmitate oleate or β-cryptoxanthin palmitate oleate. The presence of epoxide esters was in line with the blue coloration of bands seen in the hydrophobic region of the HPTLC plate (0.02 < RF < 0.15) after HCl exposure (Figure 1B). Esterification of xanthophylls is known to coincide with chloroplast degradation in autumn, and it is a plant's way of adapting to the changes in its metabolic system. Enhancement of the xanthophylls' lipophilicity supposedly increases their stability and solubility in lipid-rich plastoglobules once the thylakoid membrane gets disrupted. Epoxides react more rapidly with fatty acids [27,33], so this might explain why mainly esters of violaxanthin and antheraxanthin were found in senescing leaves of both knotweeds. For Bohemian knotweed, the ester-forming fatty acids were determined to be palmitic, myristic, stearic, oleic, and lauric acids. GC-MS analysis additionally confirmed their identity by co-elution with standard compounds and/or by comparison of the obtained MS data with the NIST library. Given the high correlation of chromatographic profiles of both knotweeds (Figure 2D,F), it is safe to assume that these fatty acids also form esters with carotenoids from Japanese knotweed. Apart from being present in green leaf extracts, (9-cis)-β-carotene was also found in trace amounts in senescing leaf extracts. Carotenoids occur naturally in the all-trans geometric form, but they are prone to isomerization. Upon heating and light exposure, all-trans-β-carotene is particularly unstable and readily converts into one of its cis-isomers. At equilibrium, their relative abundances are in the following order: all-trans > 9-cis > 13-cis > 15-cis [34]. Low amounts of violaxanthin and luteoxanthin cis isomers were also determined (Table 1). α-Carotene and free β-cryptoxanthin were not detected in any of the studied leaf extracts. Finally, chlorophylls, visualized as green bands on the HPTLC plate, were baseline resolved by means of HPLC and identified as chlorophyll a/a' and b/b'.
Interestingly, TLC-MS2 and MS3 analyses of green bands revealed only pheophytin a and b (chlorophyll lacking Mg2+), most probably due to the intrinsic acidity and polarity of the silica-based adsorbent. In this study, green and senescing leaves of Bohemian and Japanese knotweeds were comprehensively explored for their carotenoid content for the first time. A tentative identification and quantitation of carotenoids from green leaves of Japanese knotweed was attempted before by Lachowicz et al. [35], but the number and identity of detected pigments differ significantly between the studies. The C18 stationary phase used by Lachowicz et al. is often incapable of separating certain isomeric pigments, which might have led to their co-elution and detection failure. Moreover, the addition of an acidic mobile phase modifier (formic acid in their case) presumably led to low carotenoid recovery yields and their on-column transformations, e.g., the conversion of neoxanthin into neochrome [36].

Table 1 footnotes: (a) Confirmed by a controlled isomerization of β-carotene with iodine [32]. (b) (all-trans)-β-carotene was confirmed by spiking the studied leaf extracts with (all-trans)-β-carotene standard. Slight retention time shifts of (all-trans)-β-carotene between leaf extracts presumably occurred due to different leaf matrices. Base fragment ions are in boldface.

Quantitation of Carotenoids

Carotenoids are prone to degradation and isomerization when exposed to oxidants, high temperature, light, metals, and acids. Therefore, special attention should be given to experimental conditions during all stages of qualitative and quantitative analysis. Synthetic antioxidants are often employed to enhance the stability of pigments [20].
Interestingly, the addition of 2-tert-butylhydroquinone (TBHQ) (0.1% w/v) to the extraction solvent did not increase the extraction recovery of carotenoids in our case. Triethylammonium acetate (TEAA) has also been used before as an extraction solvent additive and as a mobile phase modifier in HPLC [42]. In our case, the presence of 15% 1 M TEAA slightly increased carotenoid recovery (4 ± 1%; n = 3). Moreover, it was previously reported that, in terms of recovery, ultrasound-assisted solid-liquid extraction of carotenoids is inferior to solid-liquid extraction where the extraction is performed by means of mere stirring [37]. However, we observed that when reduced light and the absence of oxygen were ensured during extraction, both procedures delivered comparable results (<1% difference; RSD = 1.2%, n = 3). This demonstrated that short-term exposure of carotenoids to high temperatures (several thousand °C presumed at local hot spots created during sonication) did not seem to cause appreciable pigment degradation and, on the other hand, that there was no evident increase in extraction recovery resulting from a more efficient disruption of chloroplast membranes by ultrasonic waves. Both contributions could eventually balance one another, resulting in the observed zero net effect on recovery. Further work is needed to pinpoint the major individual factors that govern extraction recovery, but this was outside of the scope of this study. Since solid-liquid extraction by stirring enabled better temperature control and a higher throughput in our case, and since ultrasound-assisted extraction had no clear advantages, the latter was not pursued any further. Good extraction recoveries of (all-trans)-lutein were obtained for all studied leaf extracts (>81%). Recovery of (all-trans)-β-carotene from green leaf extracts was also excellent (>85%); however, for senescing leaf extracts, these values were as low as 54%. These low recoveries are presumably a reflection of the changes in the chemical profile of senescing leaves. We assume that higher levels of endogenous acids, which ironically increase the stability of xanthophylls through natural esterification, should have an adverse effect on the stability of free carotenoids. This is especially true for the more labile (all-trans)-β-carotene. Total concentrations of carotenoids in green leaf extracts of both knotweeds were appreciably higher compared to the senescing leaf extracts (Table 2). The results show that with chloroplast decomposition, degradation of not only the green chlorophylls but also the carotenoids occurs. However, among non-esterified carotenoids, (all-trans)-zeaxanthin is a clear exception, with an increase in its concentration in autumn leaves. These values, relative to its isomer (all-trans)-lutein, were still low, but significant, which indicates an alternative biological role of this pigment in autumn. Qualitatively and quantitatively, there were only minor differences in carotenoid profiles between the two studied knotweed species. Total contents of carotenoids in green leaf extracts of Japanese knotweed (378 mg/100 g DW) and Bohemian knotweed (260 mg/100 g DW) were comparable to the total carotenoid content determined in spinach (384 mg/100 g DW), a well-established rich source of these biologically active secondary metabolites. The average total carotenoid content of green leaves of Japanese knotweed measured here (Table 3) was approximately three times higher compared to a study on Japanese and Giant knotweed reported by Lachowicz et al. [35].
The reason for this discrepancy most probably lies in the different sample preparation and analytical procedures; here, controlled experimental conditions (reduced light, inert atmosphere) were used to ensure the stability of labile analytes [36]. Compared to common food sources rich in (all-trans)-lutein and (all-trans)-β-carotene (Table 2), green leaves of Japanese and Bohemian knotweeds present an excellent potential alternative. Marigold flower (Tagetes erecta L.), which is being massively cultivated on plantations for the production of lutein-containing food supplements, contains comparable amounts of (all-trans)-β-carotene and has only approximately 2-fold higher levels of (all-trans)-lutein.

Materials and Methods

Preparation of Carotenoid Standard Solutions

Carotenoid standards (1 mg) were individually weighed into 50 mL flasks. Dichloromethane (2 mL) was added to facilitate dissolution, and then the flask was filled up to the volume mark with acetone which had previously been deaerated by sparging with nitrogen for 30 min. Working standard solutions were prepared by further dilution of each standard stock solution (0.02 mg/mL) with acetone. Exact concentrations of working standards were determined spectrophotometrically using the following relationship: c = A(λmax) × 10^4 / A(1%,1cm,λ), where c is the concentration in µg/mL, A(λmax) is the measured absorbance, and A(1%,1cm,λ) is the specific absorption coefficient for the selected analyte in acetone, taken from the CaroteNature certificate (2540 for lutein, 2500 for β-carotene, and 2350 for zeaxanthin) [52,53]. All standard solutions prepared in this manner were stored in amber glass vials (National Scientific Company, USA) at -80 °C prior to use.
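A minimal sketch of the concentration formula reconstructed above; the coefficients are the acetone A(1%,1cm) values quoted in the text, while the absorbance reading is a hypothetical placeholder.

```python
# Minimal sketch of c (ug/mL) = A(lambda_max) * 10^4 / A(1%,1cm), using the
# acetone coefficients quoted above. The absorbance is a placeholder value.
A_1PCT_1CM = {"lutein": 2540, "beta-carotene": 2500, "zeaxanthin": 2350}

def concentration_ug_per_ml(absorbance, analyte):
    # A(1%,1cm) refers to a 1 g/100 mL solution in a 1 cm cell, hence the
    # 10^4 factor converts the result to ug/mL.
    return absorbance * 1e4 / A_1PCT_1CM[analyte]

print(round(concentration_ug_per_ml(0.508, "lutein"), 2))  # -> 2.0 ug/mL
```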
Plant Materials and Preparation of Leaf Extract Solutions for the Analysis of Carotenoids and Fatty Acids

Fresh yellow, green, and green-yellowish colored leaves of Japanese and Bohemian knotweeds, harvested in the area of Ljubljana, and fresh green leaves of spinach, purchased at a local market, were frozen in liquid nitrogen and lyophilized with a Micro Modulyo (IMA Edwards, Bologna, Italy) for 24 h at -50 °C. The dry material was afterwards again frozen in liquid nitrogen and pulverized with a Mikro-Dismembrator S (Sartorius, Göttingen, Germany) at 1700 min^-1 for 1 min. The pulverized material was transferred into amber glass vials and stored at -80 °C prior to use. Extraction of carotenoids was carried out by accurately weighing the pulverized plant material (20-30 mg) into 45 mL glass tubes. Then, 10 mL of 90% acetone (aqueous):1 M TEAA (85:15, v/v) was added and the mixture was stirred in a Carousel 12 Plus apparatus (Radleys, Saffron Walden, UK) for 15 min at room temperature under a nitrogen atmosphere and reduced light conditions. Afterwards, leaf extracts were centrifuged at 1800 × g for 5 min. Supernatants were filtered through a 0.45-µm polyvinylidene fluoride (PVDF) membrane (LLG Labware, Meckenheim, Germany) and stored at -80 °C prior to use. All leaf extract solutions were prepared in triplicate. To determine the extraction recovery of carotenoids from pulverized leaves, samples were spiked with known amounts of (all-trans)-β-carotene (60 mg/L) and (all-trans)-lutein (112 mg/L) in acetone prior to sample preparation. Standard additions were made at the 100% level, and the details are summarized in Table 4. Controls were prepared by adding the same volume of acetone instead of the (all-trans)-β-carotene and (all-trans)-lutein standard solutions to the pulverized leaves. The spiked and control leaf solutions were then further processed by the extraction procedure described above and stored at -80 °C prior to analysis. All leaf extract solutions were prepared in duplicate. The recovery was calculated according to the following relationship: R (%) = (Asp − Ac) / ASTD × 100, where Asp, Ac, and ASTD denote the analyte peak areas in the chromatograms of the spiked and control leaf extracts, and in the chromatogram of a standard solution at the concentration level of the spike, respectively.
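A minimal sketch of the recovery calculation reconstructed above; the peak areas are hypothetical placeholders, not chromatographic data from this study.

```python
# Minimal sketch of R (%) = (Asp - Ac) / ASTD * 100, with placeholder areas.
def recovery_percent(a_spiked, a_control, a_standard):
    # a_spiked: analyte peak area in the spiked leaf extract
    # a_control: peak area in the unspiked (control) extract
    # a_standard: peak area of a standard at the spiking concentration
    return (a_spiked - a_control) / a_standard * 100

print(round(recovery_percent(152_000, 68_000, 98_000), 1))  # -> 85.7 (%)
```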
For qualitative GC analysis of fatty acids, pulverized yellow leaves of Bohemian knotweed (900 mg) were weighed into a glass beaker, n-hexane (40 mL) was added, and the suspension was stirred for 1 h. After the extraction, the solution was filtered through a black ribbon filter paper (pore size 12-15 µm, from Sartorius, Germany) and the solvent was removed under reduced pressure. Afterwards, n-hexane (1 mL), BF3 in methanol (1 mL; 1.3 M), and Na2SO4 (10 mg) were added to the solid residue (or to the individual fatty acid standard in the case of standard preparation (5-10 mg)). Leaf extracts and standard solutions were transferred to 4-mL amber vials, capped, vortexed for 1 min, and then heated at 90 °C for 1 h to carry out the transesterification. Afterwards, the upper n-hexane layer was transferred to a GC vial and submitted to GC-MS analysis.

HPTLC with Densitometry Analysis

HPTLC analyses were performed on 20 × 10 cm glass-backed C18 RP HPTLC silica gel plates with 0.20 mm layer thickness (Merck, Art. No. 1.05914.0001). Prior to use, plates were pre-developed with methanol:dichloromethane (1:1, v/v) and then dried in the oven for 20 min at 100 °C [37]. Leaf extracts and standard solutions were applied to the plates as 8 mm bands, 15 mm from the side edge, 10 mm from the bottom edge, and 10 mm apart, by means of a Linomat 5 or an Automatic TLC Sampler 4 (samples: 50 µL for leaf extracts of spinach and green leaves of both knotweeds, 75 µL for leaf extracts of senescing leaves; standards: 200 ng of lutein and zeaxanthin, and 500 ng of β-carotene) from Camag (Muttenz, Switzerland). The plates were developed at ambient temperature in a saturated (30 min) twin trough chamber (20 × 10 cm, Camag) with the developing solvent 0.1% TBHQ in methanol:acetone (1:1, v/v) [37]. A developing distance of 7 cm was achieved in 12 min. Developed plates were dried under a stream of cool air using a hair dryer. The plates were documented with a Camag Digistor 2 documentation system (Camag) in white light transmission mode and, after that, scanned at 450 nm with a Camag TLC Scanner 3 in absorption/reflectance mode. Spectra of bands were also recorded in the range from 190 to 800 nm. The scanning speed was 10 nm/s and the slit dimensions were: length 6 mm, width 0.4 mm. Instruments were controlled by winCATS software (Version 1.4.9.2001). The same plates were afterwards also exposed to HCl vapor for 20 s (epoxide test) [29] in a twin trough chamber and, after that, documented and scanned again as described above.

HPTLC-MSn Analysis

For direct coupling of HPTLC to an ion trap LTQ Velos MS system (Thermo Fisher, San Jose, CA, USA), a TLC-MS interface (Camag) was used. Elution of zones of interest from the plates was performed using methanol:dichloromethane (3:1, v/v) as the elution solvent. The flow rate was maintained at 200 µL/min, and 0.2% acetic acid in methanol was added to the HPTLC effluent in a ratio of 1:40 prior to the introduction of the solution into the MS system. Atmospheric pressure chemical ionization (APCI) in positive ion mode was used. Ion source parameters were set as follows: transfer capillary and vaporizer temperature 300 °C, sheath gas flow rate 30 a.u. (arbitrary units), auxiliary gas 10 a.u., spray voltage 3 kV, and discharge current 3 µA. MSn experiments were carried out by fragmentation of target precursor ions (isolation width 2 amu) using different collision energies (30-45%). MS and MSn spectra were acquired in the 200-1500 m/z range. The MS background was subtracted by sampling the HPTLC plate between tracks at the same RF value as the measured band. Acquired MS spectra were processed using Xcalibur software (Version 2.1).

HPLC-PDA Analysis

For HPLC analyses, an HPLC Surveyor Plus system from Thermo Finnigan was used, equipped with a thermostated autosampler (Autosampler Surveyor Plus), a quaternary pump (Surveyor LC Pump Plus), and a diode-array detector (Surveyor PDA Plus) with a 5 cm LightPipe flow cell. Separations were performed on a ProntoSIL C30 column (250 × 4.6 mm i.d., 5 µm) from Bischoff (Leonberg, Germany), connected to a security guard cartridge (C18, 4 × 3 mm i.d.) from Phenomenex (Torrance, CA, USA). The mobile phase consisted of acetone (A) and 0.1 M TEAA in water (B), and the following gradient was applied: 0-12 min 90% A, 12-25 min 90-100% A, 25-30 min 100% A, 30-35 min 90% A. The flow rate was maintained at 0.8 mL/min, while the column oven and autosampler temperatures were set at 35 °C and 15 °C, respectively. The injection volume was 10 µL. The acquisition wavelength was set to 450 nm. Spectra were also acquired in the range 195-790 nm. ChromQuest 5.0 software was used for data evaluation. Quantitation of (all-trans)-lutein, (all-trans)-zeaxanthin, and (all-trans)-β-carotene was done by a five-point linear external standard calibration method using the appropriate standards. Levels of all other identified (and unknown) carotenoids were estimated by using the (all-trans)-lutein external standard calibration curve. For these analytes, a general relative response factor was calculated (in reference to (all-trans)-lutein) as the ratio between an average carotenoid absorption coefficient (A(1%,1cm,λ) = 2500) [54] and the specific absorption coefficient of (all-trans)-lutein in acetone (A(1%,1cm,λ) = 2540) [55]. In these cases, data were acquired at the absorption maximum (λmax) of the individual quantified analyte. Levels of esters were calculated by assuming the same molar absorption coefficient as their free carotenoid analogues. Total carotenoid content was expressed as milligrams of (all-trans)-lutein equivalent per 100 grams of dry leaf weight (mg LE/100 g DW). All quantitation analyses were carried out in triplicate.
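A minimal sketch of the quantitation scheme described above: a five-point linear external calibration for (all-trans)-lutein and the general relative response factor (2500/2540) applied to other carotenoids. The calibration points and sample peak area are hypothetical placeholders, and dividing the lutein-based estimate by the response factor is one reasonable reading of how the factor corrects for the average absorption coefficient.

```python
# Sketch of external calibration + relative response factor (RRF) quantitation.
# Calibration points and the sample peak area are hypothetical placeholders.
import numpy as np

conc = np.array([1.0, 2.5, 5.0, 10.0, 20.0])            # standard conc., ug/mL
area = np.array([0.9e5, 2.2e5, 4.6e5, 9.1e5, 18.2e5])   # peak areas at 450 nm

slope, intercept = np.polyfit(conc, area, 1)            # linear fit: area = m*c + b

RRF = 2500 / 2540   # average carotenoid coefficient relative to (all-trans)-lutein

def carotenoid_conc(sample_area):
    c_lutein_eq = (sample_area - intercept) / slope     # ug/mL on the lutein curve
    return c_lutein_eq / RRF                            # corrected by the RRF

print(round(carotenoid_conc(6.0e5), 2))                 # ug/mL, hypothetical sample
```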
HPLC-MSn Analysis

The HPLC-MS system comprised an Accela 1250 UHPLC system (Thermo Fisher) coupled to an LTQ Velos MS system (Thermo Fisher) with an APCI ion source operating in positive ion mode. The HPLC system consisted of a thermostated Accela autosampler with a 25-µL loop, a quaternary high-pressure Accela pump, and a diode array Accela PDA detector. Xcalibur software (Version 2.1) was used for the evaluation of chromatograms. All HPLC conditions were the same as stated above for the HPLC-PDA analysis, except that solvent B was replaced with water. APCI ion source conditions were as follows: transfer capillary temperature 275 °C, vaporizer temperature 300 °C, spray voltage 5 kV, discharge current 3 µA, sheath gas flow rate 35 a.u., and auxiliary gas flow rate 5 a.u. MS and MSn spectra were acquired in a 200-1200 m/z range. For fragmentation of selected ions, different collision energies (30-45%) were applied.

GC-MS Analysis

GC-MS analyses were carried out using a GC Ultra system (Thermo Electron Corporation) equipped with a DSQ II MS detector. Compounds were separated on a ZB-5HT column (20 m × 0.18 mm i.d. × 0.18 µm film thickness) from Phenomenex. The inlet temperature was set at 290 °C and He was used as carrier gas at a constant flow rate of 1 mL/min. The ion source temperature was set at 200 °C. The injection volume was 1 µL with a split ratio of 1:30. Temperature gradient: held at 150 °C for 1 min, then 5 °C/min to 180 °C, 1 °C/min to 190 °C, and 10 °C/min to 240 °C. An electron ionization (70 eV) ion source in positive ion mode was used and MS data were acquired in the m/z 50-650 range. The compounds were identified by fatty acid standard co-elution and by comparison of the acquired data with the NIST MS library (version 2.3).

Conclusions

To conclude, carotenoids from mature and senescing leaves of Japanese and Bohemian knotweeds were explored, and we show that these plants contain a variety of carotenoids, mainly (all-trans)-lutein, violaxanthin, and (all-trans)-β-carotene. In autumn, the leaves turn yellow due to chlorophyll degradation; however, the levels of carotenoids were shown to drop significantly as well (by more than 80%), and a major proportion of these pigments was found in the esterified form. Nonetheless, with a high total carotenoid content of 260-380 mg LE/100 g DW for mature (green) leaves, we propose Japanese and Bohemian knotweeds as new sustainable sources of carotenoids. Marigold flower (Tagetes erecta L.), a gold standard in the field of lutein production, contains high levels of carotenoids, but the existence of its vast plantations is now being questioned, because these could alternatively be used for the cultivation of food crops for the ever-growing human population. On the other hand, large-scale harvesting of knotweeds and exploitation of their plant parts (zero waste) are encouraged and should present a powerful and economically driven method of mechanical control of these highly invasive alien plant species.
Multimodal mechanistic signatures for neurodegenerative diseases (NeuroMMSig): a web server for mechanism enrichment

Abstract

Motivation: The concept of a 'mechanism-based taxonomy of human disease' is currently replacing the outdated paradigm of diseases classified by clinical appearance. We have tackled the paradigm of mechanism-based patient subgroup identification in the challenging area of research on neurodegenerative diseases.

Results: We have developed a knowledge base representing essential pathophysiology mechanisms of neurodegenerative diseases. Together with dedicated algorithms, this knowledge base forms the basis for a 'mechanism-enrichment server' that supports the mechanistic interpretation of multiscale, multimodal clinical data.

Availability and implementation: NeuroMMSig is available at http://neurommsig.scai.fraunhofer.de/

Supplementary information: Supplementary data are available at Bioinformatics online.

Introduction

The development of novel high-throughput 'omic' technologies in the last decade has revealed new insights and progress in areas such as cancer and cardiovascular and metabolic disorders. The datasets coming from these technologies have led to the discovery of candidate biomarkers and potential drug targets. However, in other areas such as neurodegenerative diseases, this mechanistic understanding is either rather limited or almost absent. Readouts in translational biomedicine are going beyond the molecular level: they can span from genes and genetic variation information to imaging and organ-level (or even organism-level) data and markers. The definition of a disease as 'dysregulated pathways' may hold true for cancer, but it is inappropriate for neurodegenerative diseases, as pathways typically refer to rapid molecular processes whereas the alterations in neurodegenerative diseases are slow and multifaceted. There is simply no such thing as a 'degeno-gene' (in analogy to the 'onco-gene'). Supporting this, no 'cause-effect' relationships have been described that would explain the different pathological changes observed in patients with these disorders. When the effects of dysregulation can be easily observed, as in monogenic diseases, it is generally not so difficult to link the phenotype with the event that led to it. This is likely attributable to a short and direct chain of causality (Hofmann-Apitius et al., 2015a). Hence, because the complexity of neurodegenerative diseases is enormous, it is crucial to integrate a wider spectrum of causal assertions into models that represent and organize the available mechanistic knowledge. MSigDB (Subramanian et al., 2015) is the prototypic implementation of a system that allows for the identification of perturbed pathways. However, the output of ranking algorithms like GSEA is usually a list of associated canonical pathways that contain neither disease-specific information nor multimodal data. In addition, canonical pathways are also biased towards cancer biology (Hofmann-Apitius et al., 2015b).
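For orientation, the canonical-pathway enrichment performed by MSigDB/GSEA-style tools can be reduced to a simple over-representation test against a gene-set knowledge base. The sketch below uses made-up gene sets and should be read only as a baseline illustration; NeuroMMSig's actual ranking algorithm, described next, is more elaborate and multimodal.

```python
from scipy.stats import hypergeom

# Toy knowledge base of mechanism subgraphs -> member genes (hypothetical)
subgraphs = {
    "Dopaminergic subgraph": {"SNCA", "DRD2", "TH", "SLC6A3", "COMT"},
    "Amyloid subgraph": {"APP", "PSEN1", "PSEN2", "BACE1"},
}
universe = set.union(*subgraphs.values()) | {"GBA", "LRRK2", "MAPT"}

def enrich(query):
    """Rank subgraphs by hypergeometric over-representation P-value."""
    N, n = len(universe), len(query & universe)
    scored = []
    for name, genes in subgraphs.items():
        K, k = len(genes), len(query & genes)
        # P(X >= k) when drawing n genes from a universe of N with K "successes"
        p = hypergeom.sf(k - 1, N, K, n)
        scored.append((p, name, k))
    return sorted(scored)

print(enrich({"SNCA", "DRD2", "TH"}))  # lowest P-value ranks first
```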
Adopting the fundamental principle of 'running patterns in data against a knowledge base of established patterns' ('pathways'; 'signatures'), we have developed a mechanism enrichment server and extended it towards multiscale and multimodal data. This is where the two 'M's in NeuroMMSig come from: Multimodal and Mechanistic. It is noteworthy that the difference between NeuroMMSig and other, conventional methods for pathway enrichment or functional gene annotation lies in the specificity of the disease context. Pathway enrichment is based upon canonical pathways, which are not disease specific. The multimodal mechanisms behind NeuroMMSig, however, are manually curated and contain detailed representations of multimodal pathophysiology in a well-defined disease context. Here, we present NeuroMMSig, a web server for mechanism enrichment that allows submission of multiscale data, from the molecular to the clinical level, and returns the mechanisms that best fit the data. We have focused on neurodegenerative diseases, as we try to establish a 'mechanism-based taxonomy of Alzheimer's Disease (AD) and Parkinson's Disease (PD)'. This is the core of the AETIONOMY project (www.aetionomy.eu), and in fact NeuroMMSig (DB and server) forms the backbone of attempts at stratifying patient subgroups based on disease mechanisms.

Categorization of NDD pathways from mechanism-based models

Disease knowledge assembly models were built using the Biological Expression Language (BEL), which integrates literature-derived 'cause and effect' relationships in the form of triples (Kodamullil et al., 2015). We have captured a representative subsample of the scientific knowledge on existing canonical pathways in AD and PD (Iyappan et al., 2016), which has been grouped into subgraphs.

Multimodal data integration, data sources and software

NeuroMMSig's subgraphs have been enriched with multimodal data (e.g. imaging features, variant information and drugs). The methodology describing how the linking across different data scales was performed is provided in the Supplementary text. Moreover, we have developed an enrichment algorithm to rank the subgraphs based on the input.

NeuroMMSig server

NeuroMMSig is available at http://neurommsig.scai.fraunhofer.de/. A user interface offers a simple yet comprehensive menu (Fig. 1A). Input fields allow users to submit multimodal data (e.g. genes, SNPs, imaging features). Users can also set the enrichment algorithm parameters and define the operators of the query. After data submission, a ranked list of subgraphs is displayed to the user (Fig. 1B). Here, information associated with the submitted data is shown as icons in a user-friendly table: drug-gene interactions, known regulating miRNAs and co-expressed networks. Moreover, when the user selects one or multiple subgraphs and clicks on 'Visualize Network', NeuroMMSig displays the graph representing the selection, where the user can investigate how the disruption of the network occurs (Fig. 1C). For that reason, NeuroMMSig offers multiple functionalities enabling graph mining and reasoning over the graphs (e.g. graph algorithms, search and exporting options, knowledge provenance and Sankey diagram representations for pathway analysis).
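The graph querying just described (removing hub nodes, then tracing candidate paths to a process of interest) can be pictured with a plain directed graph. The triples below are illustrative stand-ins, not curated BEL statements from the actual NeuroMMSig knowledge base, and the hub removal mirrors the application scenario that follows.

```python
import networkx as nx

# Illustrative cause-and-effect triples (subject, relation, object);
# placeholders only, not statements from the NeuroMMSig knowledge base.
triples = [
    ("SNCA", "increases", "alpha synuclein toxicity"),
    ("DRD2", "decreases", "SNCA"),
    ("SLC6A3", "increases", "dopamine uptake"),
    ("dopamine uptake", "increases", "alpha synuclein toxicity"),
]

G = nx.DiGraph()
for subj, rel, obj in triples:
    G.add_edge(subj, obj, relation=rel)

# Remove a dominating hub so candidate paths are not all routed through it,
# mirroring the removal of SNCA described in the application scenario below.
G.remove_node("SNCA")

# Candidate mechanisms: remaining directed paths into the process of interest
for src in ("DRD2", "SLC6A3"):
    for path in nx.all_simple_paths(G, src, "alpha synuclein toxicity"):
        print(path)
```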
Application scenario

The five most relevant genes associated with the 'Dopamine signaling pathway' in PD according to SCAIView [http://academia.scaiview.com/academia/] (Supplementary text) were used as input (Fig. 1A). Two subgraphs were retrieved from NeuroMMSig ('Dopaminergic subgraph' and 'Synuclein subgraph') and were selected for further analysis (Fig. 1B). Using the query tools, the two main hub nodes, SNCA and Parkinson's disease, were removed from the network in order to avoid most of the paths going through them, which would bias the retrieval of the best candidate mechanisms. By choosing a process of interest such as 'alpha synuclein toxicity', the server proposes candidate mechanisms in which the data-mapped nodes may perturb normal physiology (Fig. 1C and D).

Discussion

Harmonization of heterogeneous and multiscale datasets remains a tremendous challenge in the field of neurodegeneration. The gap between molecular and clinical data is too wide to establish stable and meaningful assertions between imaging features and genes, for instance. Thus, integration of different data scales is a necessary step to shed some light on the mechanisms underlying neurodegenerative diseases. The modeling approach chosen in NeuroMMSig is capable of explaining causal and correlative relationships among different entities, namely genes, proteins or biological processes, in the context of neurological disorders (Kodamullil et al., 2015). These relationships reveal the upstream and downstream regulators of each node in the network and how they activate or inhibit their neighboring nodes. Thus, by navigating through the network, it is possible to identify the root or primary cause of a dysfunctional gene or protein which eventually contributes to the disorder. The inventory of mechanisms specific for neurodegenerative diseases, which forms the basis of NeuroMMSig, is composed of small cause-and-effect models encoded in OpenBEL. Evidence for the BEL-encoded mechanisms comes from the scientific literature, from experimental data analysis and from clinical readouts such as imaging biomarkers. Furthermore, both the AD and PD models incorporate genetic and epigenetic information, which might, for instance, indicate and partially explain the effect of a particular SNP in a mechanism (Khanam et al., 2015; Naz et al., 2016). The presented work also serves as a comparison tool between different diseases, allowing shared mechanisms between them to be identified systematically. Taken together, the BEL-encoded mechanisms contain pathophysiology information at the highest resolution, with highly curated evidence spanning from the genetics and epigenetics layer via cell-type-specific information to clinical phenotypes and biomarkers. Hence, NeuroMMSig overcomes some of the challenges that pathway analysis methods currently face, as indicated by Khatri et al. (2012).
Left Ventricular Mass in Patients with Heart Failure

Objective
To assess left ventricular mass in patients with heart failure and its correlations with other clinical variables and prognosis.

Methods
The study comprised 587 patients aged from 13.8 to 68.9 years, 461 (78.5%) males and 126 (21.5%) females. Left ventricular mass was estimated by using M-mode echocardiography and was indexed by height.

Results
The left ventricular mass index ranged from 35.3 g/m to 333.5 g/m and increased with age. The left ventricular mass index was greater in males (mean, 175.7 g/m) than in females (mean, 165.7 g/m). The left ventricular mass index was greater in patients with hypertensive cardiomyopathy (mean, 188.1 g/m), idiopathic dilated cardiomyopathy (mean, 177.7 g/m) and cardiomyopathies of other etiologies (mean, 175.1 g/m) than in patients with chagasic (mean, 164.3 g/m) or ischemic (mean, 162 g/m) cardiomyopathy. The left ventricular mass index in patients with heart failure showed a correlation with age, sex, etiology, and left atrial diameter. The correlation with left ventricular ejection fraction was negative: an increase in the left ventricular mass index was associated with a reduction in ejection fraction. The relative risk of death was 1.22 for each 50-g/m increase in the left ventricular mass index.

Left ventricular hypertrophy was identified as a risk factor of cardiovascular morbidity and mortality in the Framingham Study [1-3]. Patients with heart failure due to ventricular dysfunction undergo cardiac anatomical changes, which are included under the concept of cardiac remodeling [4]. Remodeling occurs in different clinical circumstances and in different patients in heterogeneous ways.

From the clinical point of view, great differences are observed in the left ventricular mass estimated on physical examination, on electrocardiography, or by use of the dimension of the cardiac image on chest radiography. Among patients with heart failure of the same etiology, some with a similar functional condition were observed to have different magnitudes of the cardiac image on chest radiography.

From the echocardiographic point of view, greater mortality and hospitalization rates due to cardiovascular diseases were observed in patients with left ventricular dysfunction and left ventricular hypertrophy. The echocardiographic estimate of left ventricular mass added prognostic information to other cardiovascular risk factors. However, assessment of left ventricular hypertrophy, as compared with other clinical variables, suggested independence between left ventricular hypertrophy and left ventricular ejection fraction [5], a finding different from that of our initial study. The correlation tests of left ventricular mass with age, duration of symptoms, left ventricular end-diastolic pressure, and pulmonary artery systolic and occlusion pressures did not show statistically significant values. However, the correlation with the left ventricular ejection fraction calculated on echocardiography was significant. Stepwise regression analysis showed a negative correlation between left ventricular mass and left ventricular ejection fraction [6].

In an autopsy study, patients with hypertensive, ischemic, and idiopathic cardiomyopathies had similar heart weights, which were greater than those of patients with cardiomyopathy due to Chagas' disease [7].

Therefore, clinical observation, echocardiographic data, and autopsy studies allow the hypothesis that left ventricular mass may be a relevant variable for the prognosis of patients with heart failure. In this context, we formulated the hypothesis that left ventricular mass may also provide prognostic information about patients with symptomatic heart failure.

This study aimed at assessing, in a large case series of patients with heart failure followed up for more than 10 years, the distribution of left ventricular mass on echocardiography, its correlations with other clinical variables, and its potential prognostic influence.

Methods
A cross-sectional study [8] was carried out to assess left ventricular mass in a cohort of patients with heart failure being followed up on an outpatient basis.

The diagnosis of heart failure was established based on the Framingham criteria [9], and the diagnosis of the etiology of heart failure was based on previously established criteria [10] and on the International Classification of Diseases, 10th revision (ICD-10), 1993.

Patients aged under 75 years diagnosed with symptomatic heart failure due to systolic ventricular dysfunction were included in the study. Patients with the following characteristics were excluded: heart failure due to cardiomyopathies indicated for surgical treatment (myocardial revascularization, aneurysmectomy, valvuloplasty, or valvular replacement); hypertrophic cardiomyopathy; chronic obstructive pulmonary disease; recent acute myocardial infarction; and unstable angina. Patients with the following characteristics were also excluded: creatinine clearance lower than 30 mL/kg/min; liver failure; peripheral arterial disease; cerebrovascular disease; type I diabetes mellitus; recent infection; neoplasias; or active peptic ulcer disease.

This study comprised 587 patients with heart failure, who were followed up from April 1991 to December 2000. Their ages ranged from 13.8 to 68.9 years (mean, 45.6 years; standard deviation, 10.3 years); 461 (78.5%) were males and 126 (21.5%) were females. The time elapsed between symptom onset and entrance into the study ranged from 6.9 days to 243.5 months (mean, 50.1 months; standard deviation, 49 months). Echocardiographic, radioisotopic, electrocardiographic, and functional characteristics of the population studied are shown in table I.

The patients were followed up on an outpatient care basis. Clinical treatment included dietary guidance, instruction about the general principles of treatment, and prescription of medications adjusted to the patients' needs and tolerance. The medications included angiotensin-converting-enzyme inhibitors, diuretics, and digitalis. Beta-blockers were gradually introduced in the treatment from 1997 onwards.

Data on evolution were supplemented by a review of hospital records, telephone contact by the researchers, and research in ProAim (Programa de Aprimoramento de Informações de Mortalidade do Município da Cidade de São Paulo - Program on improvement of the information on mortality in the city of São Paulo).

Echocardiographic measurements were taken according to the criteria recommended by the American Society of Echocardiography [11].
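The paper does not spell out the mass estimate itself, so the sketch below should be read as an assumption: it uses the Devereux cube formula commonly associated with the ASE M-mode conventions cited above, with hypothetical measurements, together with the height indexation used throughout this study.

```python
def lv_mass_g(ivs_cm, lvid_cm, pw_cm):
    """Devereux cube-formula estimate of left ventricular mass in grams
    (an assumed formula; the paper only cites the ASE criteria).

    ivs_cm: interventricular septal thickness at end-diastole
    lvid_cm: LV internal diameter at end-diastole
    pw_cm: posterior wall thickness at end-diastole
    """
    return 0.8 * (1.04 * ((ivs_cm + lvid_cm + pw_cm) ** 3 - lvid_cm ** 3)) + 0.6

def lv_mass_index(mass_g, height_m):
    """Height-indexed LV mass (g/m), the index used in this study."""
    return mass_g / height_m

# Hypothetical measurements: 1.0 cm septum, 6.0 cm cavity, 1.0 cm wall, 1.70 m
m = lv_mass_g(1.0, 6.0, 1.0)
print(round(m, 1), round(lv_mass_index(m, 1.70), 1))  # ~246.9 g, ~145.2 g/m
```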
The left ventricular mass was indexed by the patient's height (g/m) [12], and the term "left ventricular mass index" was adopted. For comparisons with the left ventricular mass index, this indexation was also performed for the other echocardiographic variables.

The left ventricular mass index was studied in regard to age, sex, duration of symptoms, left and right ventricular ejection fractions on radioisotopic ventriculography, maximum O2 consumption on cardiopulmonary exercise testing, maximum and minimum heart rates, and the presence of nonsustained ventricular tachycardia on 24-hour dynamic electrocardiography.

The demographic and functional variables of the population studied and the left ventricular mass index were initially examined by using exploratory descriptive analysis, with identification of the minimum, maximum, and mean values, median, and standard deviation of the variables studied. Then the left ventricular mass index was studied in regard to the probability of survival by using the Kaplan-Meier method [13]. Death was considered an event; surgical interventions, including transplantation, were considered censored data. The comparisons were performed by means of the log-rank and Breslow tests [13].

To assess the relations of the left ventricular mass index with the demographic and functional variables, multivariate linear regression was used.

The relative risk of death was estimated by using the Cox proportional hazards regression model [14]. An analysis of residuals was performed to assess whether the assumptions of the Cox model were satisfied. The results are shown as relative risk, P value, and respective 95% confidence intervals.

The statistical significance level adopted was P < 0.05. The calculations were performed by using SPSS software, version 10.0, and SAS software, version 8.2.

The protocol was approved by the Institutional Committee of Research in Human Beings.
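As a rough sketch of the survival analysis just described (a Kaplan-Meier estimate with death as the event and surgery or transplantation censored, plus a Cox model for the mass index), here is how it might look with the Python lifelines package. The dataframe columns and values are hypothetical placeholders; the original analyses were performed with SPSS and SAS.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical follow-up data; columns mimic the variables described above.
df = pd.DataFrame({
    "months": [12.0, 48.5, 30.2, 75.0, 9.8],   # follow-up duration
    "death": [1, 0, 1, 0, 1],                  # 1 = death, 0 = censored
    "lvmi_g_per_m": [210.0, 150.3, 240.8, 130.0, 300.5],
})

# Kaplan-Meier estimate of the survival function
km = KaplanMeierFitter().fit(df["months"], event_observed=df["death"])
print(km.survival_function_.tail(1))

# Cox proportional hazards: hazard ratio per 1-g/m increase in mass index
cph = CoxPHFitter().fit(df, duration_col="months", event_col="death")
cph.print_summary()
```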
Results

The left ventricular mass index ranged from 35.3 g/m to 333.5 g/m (mean, 173.5 g/m; standard deviation, 44 g/m) and increased according to age categorized in quartiles (fig. 1). The left ventricular mass index ranges according to the age groups were as follows: from 79.

The left ventricular mass index categorized into quartiles showed no significant difference in regard to the probability of survival (fig. 4).

The left ventricular mass index increased 0.39 g/m for each one-year increase in the patient's age, with the other variables (sex, etiology, left ventricular ejection fraction on radioisotopic ventriculography, left atrial diameter) held constant (tab. II). The left ventricular mass index in male patients was 11.2 g/m greater than that in female patients.

Left ventricular mass index was lower in patients with chagasic and ischemic cardiomyopathies than in those with hypertensive cardiomyopathy, idiopathic dilated cardiomyopathy, and cardiomyopathies of other etiologies (fig. 3). The left ventricular mass index ranges according to the etiologies of cardiomyopathy were as follows: from 96.2 g/m to 309.3 g/m (mean, 188.1 g/m; standard deviation, 44.6 g/m) in patients with hypertensive cardiomyopathy; from 35.3 g/m to 332.6 g/m (mean, 177.7 g/m; standard deviation, 45.9 g/m) in patients with idiopathic dilated cardiomyopathy.

The left ventricular mass index in patients with hypertensive cardiomyopathy and cardiomyopathies of other etiologies was 16.7 g/m and 12.9 g/m greater, respectively, than in patients with ischemic cardiomyopathy, and that of patients with chagasic cardiomyopathy showed no statistically significant difference when compared with that in patients with ischemic cardiomyopathy. The left ventricular mass index increased 6.96 g/m for each 5-mm/m increase in the left atrial diameter.

The left ventricular ejection fraction on radioisotopic ventriculography had a negative relation with the left ventricular mass index: the left ventricular mass index decreased 1.2 g/m for each 1-unit increase in ejection fraction.

For each 1-g/m increase in the left ventricular mass index, the relative risk of death increased 0.4% (P = 0.0418; 95% CI: 0 to 1%). Because the left ventricular mass index in our case series ranged from 35.35 to 333.52 g/m, the relative risk of death was 1.22 (95% CI: 1 to 1.49) for each 50-g/m increase.

Discussion

This study comprised a large cohort of patients with symptomatic heart failure of different etiologies, including Chagas' heart disease, who were followed up on an outpatient care basis at a single institution for 10 years. Patients with cardiomyopathy of unknown etiology (idiopathic, 37.7%) were the most frequent, followed by patients with ischemic (19.4%), chagasic (16.5%), and hypertensive (14%) cardiomyopathy. This etiologic distribution differs from that of other case series, in which ischemic cardiomyopathy (34% to 60% of cases) [15-17], idiopathic cardiomyopathy (18.2% to 59% of cases) [16,17], and hypertensive cardiomyopathy (3.8% to 23.6% of cases) [16] predominated. Therefore, our results were assessed according to these characteristics, including the etiologic distribution.

M-mode echocardiography was used to assess left ventricular mass. Alterations in ventricular dimension and geometry may induce inaccuracies in the estimate of the left ventricular mass index. The left ventricular mass was indexed by height because patients with heart failure may vary in weight due to fluid retention or loss. This indexation was validated in the literature in a methodologically rigorous study that assessed 864 individuals and found an association between left ventricular mass and height (r = 0.39, P < 0.001 in males; r = 0.23, P < 0.001 in females) [12]. In addition, height is strongly associated with lean body mass, which is an excellent predictor of left ventricular mass [18]. Despite these restrictions, M-mode echocardiography has been used in large population studies, including those showing the important relation between left ventricular mass and cardiovascular morbidity and mortality [12,19].
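Before turning to the individual covariates, note that the jump from the per-unit Cox estimate to the per-50-g/m relative risk reported in the Results above is just the multiplicative scaling of a hazard ratio, as this short check reproduces.

```python
import math

hr_per_g_per_m = 1.004  # 0.4% increase in risk per 1-g/m rise in mass index
# Under the Cox model, hazard ratios scale multiplicatively with the covariate:
hr_per_50 = math.exp(50 * math.log(hr_per_g_per_m))  # equivalent to 1.004 ** 50
print(round(hr_per_50, 2))  # 1.22, matching the reported relative risk
```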
Age influenced the left ventricular mass index independently. A 1-year increase in age was associated with a 0.39-g/m increase in left ventricular mass index. This observation differs from previous population studies of individuals without cardiomyopathy, in which age had no influence on left ventricular mass [20,21]. On the other hand, the Framingham study showed a relation between age and left ventricular mass in patients with cardiomyopathy, but this relation was not shown in patients without cardiomyopathy [22]. Therefore, the appearance of heart failure may represent a modulatory factor of the relations between left ventricular mass and age.

Sex influenced the height-adjusted left ventricular mass index independently; the left ventricular mass index was 11.2 g/m greater in males than in females. This finding confirms data from previous epidemiologic studies including hypertensive patients [12,21,23,24]. Therefore, in regard to sex and with adjustment for the other comparison variables, patients with heart failure maintain the difference in left ventricular mass.

The left ventricular mass index was highest in patients with hypertensive cardiomyopathy, followed by those with idiopathic dilated cardiomyopathy. The left ventricular mass indices in patients with ischemic cardiomyopathy and chagasic cardiomyopathy did not show a statistically significant difference. A study of patients with aortic stenosis showed that the increase in left ventricular mass resulted from a combination of hypertrophy and hyperplasia of myocytes [25]. Therefore, the mechanisms acting on the increase in ventricular mass may act differently according to the etiology of the cardiomyopathy that causes heart failure.

The relation between the left atrial diameter on echocardiography and the left ventricular mass index was assessed. The left ventricular mass index increased 6.96 g/m for each 5-mm/m increase in the left atrial diameter. In this study, the numerical estimate of this relation is noteworthy. One hypothesis is that the same variables that influence ventricular mass may also influence left atrial diameter [26,27]. On the other hand, the increase in left ventricular mass could contribute to an increase in atrial dimensions, due to both hemodynamic and biochemical factors. Although the hypothesis of left atrial enlargement secondary to left ventricular diastolic dysfunction may be attractive, a study in patients with arterial hypertension using Doppler diastolic indices did not show this occurrence [28]. On the other hand, this same study showed that the left atrial size in hypertensive patients with left ventricular hypertrophy on electrocardiography did not depend on left ventricular mass [28]. Therefore, a relation between left atrial diameter and left ventricular mass exists in patients with heart failure, but not in the myocardial hypertrophy of patients with arterial hypertension.

The left ventricular ejection fraction on radioisotopic ventriculography showed a negative relation with the left ventricular mass index: a 1-unit increase in ejection fraction was associated with a 1.2-g/m decrease in the left ventricular mass index. The association of left ventricular mass index and left ventricular ejection fraction on radioisotopic ventriculography may contribute to explaining the lower survival resulting from an increase in left ventricular mass index. The influence of a decrease in ejection fraction leading to an increase in mortality was observed in other studies [15,29].
Although the comparison of the probabilities of survival of patients whose left ventricular mass indices were categorized in quartiles showed no statistically significant difference, the Cox proportional hazards regression model revealed that for each 1-g/m increase in left ventricular mass index, the relative risk of death increased 0.4%. Because the left ventricular mass index in our case series ranged from 35.35 g/m to 333.52 g/m, the relative risk of death was 1.22 for each 50-g/m increase in the left ventricular mass index. In the Framingham study of individuals without cardiomyopathy, for each 50-g/m increase in left ventricular mass, a relative risk of death due to heart disease of 1.73 was observed in males and of 2.12 in females [30]. In a case series of elderly patients without cardiomyopathy, aged 59 to 90 years, the incidence of coronary events for each 50-g/m increase in left ventricular mass was 1.67 in males and 1.60 in females [31]. Therefore, an increase in left ventricular mass index is an unfavorable prognostic factor.

It is worth noting that the case series studied comprises symptomatic patients in an advanced phase of the disease. These observations, therefore, may not be applicable to the general population or to patients with heart failure in another phase of clinical evolution.

In conclusion, the influence of left ventricular mass was not very strong, but it may contribute to the prognostic assessment of patients with heart failure. Therefore, the relations with other variables, including the negative correlation with left ventricular ejection fraction on radioisotopic ventriculography, require further study.

Fig. 1 - Left ventricular mass index in regard to age.

Table I - Laboratory characteristics of the population studied. *Sum of the measures of posterior wall thickness and interventricular septum thickness divided by the left ventricular diastolic diameter.
Regulation of the antigen presentation machinery in cancer and its implication for immune surveillance

Evading immune destruction is one of the hallmarks of cancer. A key mechanism of immune evasion deployed by tumour cells is to reduce neoantigen presentation through down-regulation of the antigen presentation machinery. MHC-I and MHC-II proteins are key components of the antigen presentation machinery responsible for neoantigen presentation to CD8+ and CD4+ T lymphocytes, respectively. Their expression in tumour cells is modulated by a complex interplay of genomic, transcriptomic and post-translational factors involving multiple intracellular antigen processing pathways. Ongoing research investigates mechanisms invoked by cancer cells to abrogate MHC-I expression and attenuate the anti-tumour CD8+ cytotoxic T cell response. The expression of MHC-II on tumour cells has been less well characterised. However, its discovery has triggered further interest in utilising tumour-specific MHC-II to harness sustained anti-tumour immunity through the activation of CD4+ T helper cells. Tumour-specific expression of MHC-I and MHC-II has been associated with improved patient survival in most clinical studies. Thus, their reactivation represents an attractive way to unleash anti-tumour immunity. This review provides a comprehensive overview of physiologically conserved or novel mechanisms utilised by tumour cells to reduce MHC-I or MHC-II expression. It outlines current approaches employed at the preclinical and clinical trial interface towards reversing these processes in order to improve response to immunotherapy and survival outcomes for patients with cancer.

Introduction

The advent of immunotherapeutics has revolutionised treatment in cancer. These agents harness our immune system to promote anti-tumour responses and herald the potential for long-term survival in patients with otherwise incurable disease [1]. Specifically, immune checkpoint inhibitors (ICIs) are now standard of care in many solid organ cancers. They block inhibitory signals expressed by either tumour or immune cells, unleashing the brakes on our adaptive immune system to fight cancer cells. Yet only a minority of patients respond [2]. Ongoing research focuses on tumour resistance mechanisms against ICIs. One method of 'immune escape' invoked by tumour cells is alteration of their antigen presentation machinery (APM) [3], making them invisible to the adaptive immune system. The major proteins in the APM are the Major Histocompatibility Complex class I and II molecules (MHC-I and MHC-II) and associated subunits (such as beta-2 microglobulin [B2M]) [4,5]. Tumour recognition by immune cells requires presentation of non-self peptides (neoantigens) by tumour cells through MHC class I or II complexes. Loss or reduced expression of MHC molecules or their subunits abrogates T cell-mediated anti-tumour immunity. Defects in MHC expression have been observed in most common cancers, at frequencies varying from 0% to 93% [4]. Deciphering mechanisms to reactivate MHC expression by tumour cells may therefore lead to the identification of alternative approaches to increase anti-tumour immunity. This review describes known mechanisms controlling MHC-I and MHC-II expression in cancer and highlights how these mechanisms could be tackled to improve treatment response and patient survival.

MHC-I

MHC-I function and antigen processing pathway

MHC-I molecules, encoded by human leukocyte antigen (HLA) genes [6], are present on the cell surface of all nucleated cells [7].
They play an evolutionarily conserved role in immunosurveillance by presenting intracellular peptides to cytotoxic CD8+ T cells. Immunogenic foreign peptides, such as neoantigens, are recognised by T cell receptors (TCRs) on CD8+ T cells, resulting in cell killing. The processing of neoantigens is mediated by the ubiquitin-proteasome system [8]. Proteasomes break down endogenous proteins tagged by ubiquitin into oligopeptides (8-13 amino acids in length) to enable effective presentation by MHC-I. Tumour cells that are exposed to oxidative stress or inflammatory stimuli up-regulate immunoproteasomes [9]. These immunoproteasomes have distinct catalytic activity that specifically generates diverse non-self peptides. The cleaved peptides are then transported to the endoplasmic reticulum (ER) by the specialised TAP (transporter associated with antigen processing) protein [10], where they bind to newly synthesised MHC-I molecules. The neoantigen-MHC-I complex is released from the ER and then exocytosed to the plasma membrane for presentation to CD8+ T cells.

Immune evasion through down-regulation of MHC-I in cancer

Immune evasion is a hallmark of cancer that reduces the visibility of tumour cells to the immune system [11]. Tumour immune surveillance relies not only on the expression of neoantigens by tumour cells but also on proficient neoantigen presentation to T cells through the MHC complexes. Defects in MHC-I synthesis, transport and loading of appropriate peptides result in low or absent cell surface expression of MHC-I and thus immune evasion. Altered MHC expression can be mediated through HLA or B2M loss of heterozygosity (LOH) [12]. However, recent studies have also shown that loss of MHC protein expression can occur in HLA or B2M wild-type tumour cells, highlighting the role of non-genetic mechanisms in regulating MHC-I expression [4,13-15] (Figure 1). MHC-I protein loss on immunohistochemistry has been described in almost all types of solid organ tumours, with some studies describing this occurrence in >90% of their cohorts [4].

Loss of heterozygosity

The genes encoding MHC-I comprise the highly polymorphic class Ia 'classical' human leukocyte antigen genes (HLA-A, -B and -C) on chromosome 6 [16]. Loss of both HLA alleles results in total elimination of MHC-I expression [17]. Deletion of one allele (HLA LOH) reduces MHC-I expression by half. Tumours leverage the genomic instability associated with LOH, whereby a further mutation in the other allele results in complete MHC-I loss, to evade immune recognition [3,18]. This phenomenon has also been described for the gene encoding the B2M light chain and appears to be more prevalent in metastatic than in primary lesions [19]. A pan-cancer analysis of 83,644 patient samples representing 59 different solid organ tumour types revealed a prevalence of HLA LOH of 17% [20]. In non-small cell lung cancer (NSCLC), HLA LOH has been shown to be both a negative prognostic and a negative predictive biomarker for ICI therapy [21].

Somatic mutations

Somatic mutations in HLA alleles may have functional implications similar to those of deletions, precluding effective neoantigen presentation [22]. Whole-exome sequencing (WES) analysis of a TCGA cohort involving 7,930 paired tumour and healthy samples revealed the presence of 298 non-silent HLA mutations in 3.3% of patients [23]. HLA mutations were more prevalent in head and neck, lung squamous and stomach cancers [23].
These findings support earlier work demonstrating the presence of mutations in HLA and other components of the MHC-I APM pathway, including TAP1, in small cell lung cancer (SCLC) and melanoma cell lines, and in B2M in human melanoma tumours [24-26].

DNA hypermethylation

Hypermethylation of gene promoters and enhancers of HLA alleles, B2M and other APM regulatory genes has been described in solid tumours [27-29]. In breast cancer cell lines, DNA methyltransferase inhibitors increased MHC-I expression and antigen presentation, leading to increased T cell infiltration in mouse models of breast cancer [27]. This increase in MHC-I expression is thought to be due to reduced methylation of HLA genes [27], but also to demethylation of endogenous retrovirus (ERV) genes that trigger cytosolic sensing of double-stranded RNA (dsRNA) [28]. ERVs are a relic of ancient infections that comprise 8% of the human genome [30]. These genes are silenced through hypermethylation, but treatment with DNA methyltransferase inhibitors induced their expression, activating the dsRNA sensing pathway [27,28]. This pathway stimulated a type I interferon cellular response and NFκB (nuclear factor κB)-mediated activation of MHC-I expression.

Histone regulation

Histone modifications by trimethylation of lysine residues on histone 3 (H3K27me3) or by deacetylation result in gene silencing and have been shown to be mechanisms invoked by tumour cells to silence the APM [31,32]. An in vitro whole-genome CRISPR/Cas9 screen in leukemia cell lines revealed EZH2 (enhancer of zeste homolog 2) as a negative regulator of MHC-I, of the master regulator of MHC-I transcription NLRC5 (nucleotide-binding domain and leucine-rich repeat caspase recruitment domain-containing 5), and of TAP expression [33]. EZH2 catalyses the trimethylation of H3K27, leading to inhibition of transcription. Reversal of H3K27me3 with an EZH2 inhibitor up-regulated MHC-I in leukemia as well as in neuroblastoma and SCLC cells. In diffuse large B cell lymphoma (DLBCL) with the EZH2 Y641 mutation, treatment with an EZH2 inhibitor reduced H3K27me3 in the promoter region of NLRC5, resulting in increased NLRC5 and MHC-I expression [34]. EZH2 inhibition also increased antigen presentation in head and neck squamous cell carcinoma cell lines, restoring sensitivity to anti-PD1 therapy in an in vivo mouse model of head and neck cancer [35]. Histone deacetylase (HDAC) inhibitors have also demonstrated in vitro and in vivo efficacy in restoring MHC-I expression and immune control in various solid organ cancers [32,36], either as monotherapy or in combination with ICIs. Several human clinical trials examining their efficacy are in progress [37].

IFNγ-dependent

Transcriptomic regulation of MHC-I is tightly controlled to elicit an appropriate immune response. The transcriptional transactivator NLRC5 is a critical regulator of MHC-I expression. It forms a scaffold with the DNA-binding proteins RFX (regulatory factor X), CREB (cAMP responsive element binding protein 1), ATF1 (activating transcription factor 1) and NF-Y (nuclear factor Y) to form the CITA (class I transactivator) complex on the proximal promoter of HLA genes [38]. These regulatory elements and the transcription factor/transactivator complex are also present on the promoters of other APM genes, including TAP1 and B2M. In vivo deletion of NLRC5 resulted in loss of MHC-I expression in mice without altering MHC-II expression, demonstrating the critical role of NLRC5 in specifically controlling MHC-I expression.
IFNγ is a key regulator of MHC-I expression through JAK1/2 (Janus kinase 1/2)-STAT1 (signal transducer and activator of transcription 1) signalling and activation of NLRC5 expression. This transduction pathway also activates expression of IRF1/2 (interferon regulatory factor 1/2), which binds the ISRE (interferon-stimulated response element) present in the proximal promoter of HLA genes. Alterations of these key transcription factors through genetic deletion or epigenetic modification result in loss of MHC-I expression. NLRC5 loss has been described in several solid organ cancers [39,40]. Its absence abrogates MHC-I expression and CD8+ T cell-mediated cytotoxic responses, thus conferring inferior patient survival [39]. Similar findings have been ascribed to the loss of IRF1/2, particularly in melanoma patients with ICI resistance [41]. Defects in IFNγ signalling through loss of JAK1 or JAK2 have also been associated with reduced MHC expression and resistance to ICIs [42,43]. DUX4 (double homeobox 4), a preimplantation embryonic transcription factor normally silenced in somatic tissues, was found to be reactivated in many cancers. Its expression reduced MHC-I expression, likely through DUX4-mediated inhibition of JAK1 and STAT1 expression [44]. DUX4 overexpression was associated with resistance to immune checkpoint blockade in melanoma [44].

IFNγ-independent

Defects in IFNγ signalling are an important mechanism of primary and acquired resistance to ICIs [45-48]. Targeting the IFNγ pathway to increase MHC-I expression may therefore not represent an appropriate strategy to increase neoantigen presentation. Recent efforts have aimed to uncouple MHC-I expression from interferon signalling in order to increase MHC-I expression in tumours with defective IFNγ signalling [49]. NFκB is a known regulator of MHC-I expression through direct binding of its p50/p65 subunits to NFκB response elements present in the enhancer regions of HLA-A and -B [50]. NFκB is activated by double-stranded RNA (dsRNA) sensors such as TLR3 (toll-like receptor 3) and PKR (protein kinase R, a serine/threonine kinase) [51,52]. MHC-I, TAP1 and B2M expression were found to be up-regulated after treatment of melanoma cells with BO-112, an activator of dsRNA sensing and NFκB signalling, restoring the cytotoxic activity of tumour-specific T cells [49]. Multiple agents can induce dsRNA sensors and could potentially be combined with ICIs to re-establish anti-tumour immunity.

Transcriptional regulation associated with oncogenic drivers

Aberrant activation of cell signalling pathways through oncogenic drivers can down-regulate MHC-I. An in vitro shRNA screen targeting 526 kinases identified the MAP kinase (MAPK) pathway, including the downstream kinases MEK and ERK, as a negative regulator of HLA-A expression [53]. Tumour cells with oncogenic activating mutations in the EGFR (epidermal growth factor receptor), ALK (anaplastic lymphoma kinase) or RET (rearranged during transfection) kinases were found to have reduced MHC-I expression. Pharmacological inhibition of these kinases increased MHC-I cell surface expression, potentially increasing immune recognition [53,54]. The MYC family of proteins regulates transcription of approximately 15% of the human genome [55]. Over-expression or dysregulation of the N-MYC and C-MYC oncoproteins is observed in up to 70% of human tumours and is associated with reduced immunosurveillance [56-58].
Mechanisms responsible for immune evasion by MYC-expressing tumours are starting to emerge, with the observation that MYC prevented loading of dsRNA onto TLR3 in pancreatic cancer cells, reducing NFκB signalling and MHC-I expression [58].

MicroRNAs (miRNAs)

miRNAs are a class of non-coding RNAs characterised by their short length (approximately 21-25 nt). They can bind to the 3′ untranslated region (UTR) of mRNAs and inhibit their translation through mRNA degradation or translational repression. Binding sites for miR-148a-3p and miR-125a-5p were found in the 3′ UTRs of HLA-A, -B and -C mRNAs and of TAP2 mRNA, respectively. Overexpression of miR-148a-3p reduced cell surface expression of MHC-I in colorectal and oesophageal cancer [59,60], while inhibition of miR-148a-3p restored MHC-I expression and increased T cell-mediated killing in vitro and in vivo [60]. miRNAs may be therapeutically targeted using complementary antisense RNAs (anti-miRs) packaged in lipid nanoparticles for optimal delivery of the oligonucleotides [61].

Long non-coding RNAs (lncRNAs)

Like miRNAs, lncRNAs are not translated into proteins. lncRNAs are >200 nt long and predominantly reside in nuclei. They are responsible for diverse processes that result in transcriptional and post-transcriptional regulation of gene expression. The oncogenic lncRNA LINK-A was recently found to be a negative regulator of MHC-I and B2M cell surface expression in triple-negative breast cancer (TNBC) cells, and a negative predictive biomarker in patients treated with ICIs [62]. LINK-A was shown to abrogate phosphorylation of the E3 ubiquitin ligase TRIM71, resulting in increased degradation of the MHC-I peptide loading complex. Other lncRNAs have been associated with positive regulation of MHC-I expression, such as LINC02195, whose expression positively correlated with MHC-I-related protein expression in head and neck squamous cell carcinoma cell lines and patient samples [63].

Post-translational regulation of MHC-I

Cancer cells may also evade immune recognition through post-translational modification of MHC-I. ER-associated protein degradation (ERAD) constitutes a quality control system to eliminate misfolded or unassembled proteins from the ER [64]. Tumour cells exploit this pathway to induce degradation of the nascent MHC-I chain and hinder antigen presentation. Staphylococcal nuclease and tudor domain containing 1 (SND1), an oncoprotein overexpressed in solid tumours, guides the heavy chain of MHC-I to ERAD, resulting in its dislodgement into the cytoplasm and subsequent degradation. Loss of SND1 increased MHC-I expression and cytotoxic T cell infiltration in in vivo models of melanoma and colorectal cancer, resulting in decreased tumour burden [65]. Increased turnover of antigen-loaded MHC-I is another mechanism by which tumour cells evade immune surveillance. Expression of the transmembrane protein MAL2 (myelin and lymphocyte protein 2) is associated with worse prognosis in TNBC [66]. Molecular analysis demonstrated that MAL2 promoted intracellular endocytosis of peptide-bound MHC-I complexes through direct interactions with endosome-associated proteins [66]. Knockout of MAL2 in patient-derived tumour organoid models resulted in enhanced CD8+ T cell-mediated cytotoxicity, making MAL2 a potential therapeutic target. Tumours may also modify their cell membranes to sterically inhibit MHC-I interactions with CD8+ T cells.
Specifically, high cell surface expression of glycosphingolipids by tumour cells impedes the interaction between MHC-I and CD8+ T cells [66]. Membrane expression of glycosphingolipids is modulated by the protease SPPL3 (signal peptide peptidase like 3). SPPL3 loss has been shown to be a negative prognostic biomarker in gliomas [67]. Reduced SPPL3 activity increased cell surface expression of glycosphingolipids, forming a shield that prevents presentation of MHC-I-loaded peptides to CD8+ T cells. There is considerable interest in inhibiting glycosphingolipid synthesis using clinically approved inhibitors, which have demonstrated in vitro efficacy in glioma cell lines [66].

Autophagy has been proposed as another mechanism utilised by tumour cells to reduce cell surface expression of MHC-I and avoid immune recognition [68,69]. Immunofluorescence analysis of human pancreatic ductal adenocarcinoma (PDAC) tumours and NSCLC cell lines demonstrated a preponderance of intracellular sequestration of MHC-I proteins in autophagosomes and lysosomes [69]. Genetic inhibition of autophagy or pharmacological lysosomal inhibition resulted in increased total and cell surface expression of MHC-I, indicating a specific role for autophagy in the trafficking of MHC-I to the lysosome [69]. ATG4B (autophagy related 4B cysteine peptidase) is a cysteine protease with an essential role in autophagosome formation. Inhibition of autophagy in genetically engineered murine PDAC cells expressing a dominant-negative form of ATG4B increased cell surface expression of MHC-I, tumour cell killing in an in vitro co-culture assay with cytotoxic T cells, and CD8+ T cell infiltration in vivo [68,69]. These tumours also responded more efficiently to ICI therapy than their wild-type counterparts [69]. These findings are particularly notable given the lack of efficacy of ICIs in clinical trials for patients with PDAC [70].

MHC-II function and processing pathway

Immuno-oncology research has thus far predominantly focussed on augmenting cytotoxic CD8+ T cell responses. However, there is increasing interest in harnessing CD4+ T helper cells to potentiate sustained anti-tumour immunity [71,72]. CD4+ T cells are activated by MHC-II-bound peptides. MHC-II molecules present exogenously derived peptides and have traditionally been associated with professional antigen presenting cells (APCs) such as dendritic cells, macrophages and B cells [73]. While tumour cells do not constitutively express MHC-II, IFNγ present in the tumour microenvironment can induce MHC-II expression in tumour cells (tsMHC-II). Indeed, accumulating evidence now highlights a critical role for tsMHC-II in the activation of CD4+ T cells [5]. CD4+ T helper cell differentiation is induced by the binding of a naïve CD4+ TCR to an MHC-II-peptide complex combined with a second co-stimulatory signal in which CD28 on CD4+ T cells binds CD80/86 on professional APCs. These T helper cells promote CD8+ T cell-mediated responses and immunological memory [71,74]. Tumour cells do not express the classical co-stimulatory ligands CD80/86 [75]. However, they may utilise other cell-surface proteins to interact with CD28 on CD4+ T cells. Examples of these co-stimulatory molecules include OX40 and CD70, both found in solid cancers [76,77]. The presence of tsMHC-II is associated with increased CD4+/CD8+ tumour infiltrating lymphocytes, improved survival and responsiveness to ICIs [78-80].
Analysis of a cohort of melanoma patients treated with ICIs also revealed that losses of tsMHC-II and MHC-I were not interdependent, suggesting that they may be independently regulated in cancer [81]. In a study of 5,942 tumours, neoantigens that bound poorly to MHC-II were positively selected during cancer evolution. The degree of positive selection was even stronger than the association observed between MHC-I and its neoantigens [82]. These findings suggest that CD4+ T cell-mediated immunosurveillance may be a dominant mechanism of immune control of tumours. Both MHC-I and MHC-II genes are highly polymorphic. However, MHC-II can bind a greater diversity of neoantigenic proteins: its binding pocket can accommodate peptides of a longer length (>13 amino acids) as well as peptide side chains. The regulation of antigen processing and presentation by MHC-II in professional APCs has been reviewed elsewhere [73]. Here we focus on findings pertaining to the regulation of MHC-II in non-professional antigen presenting tumour cells (Figure 2).

Immune evasion through down-regulation of MHC-II in cancer

Little is known about the mechanisms driving the regulation of tsMHC-II. However, studies are starting to emerge elucidating immune-evasion mechanisms associated with down-regulation of the MHC-II complex [83]. Expression of MHC-II is controlled by the transcriptional master regulator class II transactivator (CIITA) [84]. The CIITA complex is a scaffold of proteins that recruits activators, including RFX5, at the transcriptional start sites of MHC-II-related genes. It is a key component of MHC-II induction, though it never binds DNA directly. The expression of CIITA is controlled by four promoters: promoter I (pI), II (pII), III (pIII) and IV (pIV) [85]. Constitutive expression of CIITA in APCs is predominantly regulated by pI and pIII. The strongest inducer of CIITA in response to IFNγ stimulation is pIV [86]. Modulation of MHC-II expression in cancer has been associated with perturbed regulation of CIITA expression through genomic, epigenetic, transcriptional or post-translational mechanisms.

Genetic mechanisms of MHC-II down-regulation

Genomic alterations in the CIITA gene, including point mutations and gene fusions, have been observed in different types of lymphoid tumours [87,88]. Point mutations in CIITA or its promoter complex have also been observed in melanoma and microsatellite-unstable (MSI-H) colorectal cancer (CRC) [89,90]. Frameshift mutations in the RFX5 gene are also a common event in MSI-H CRC, occurring in approximately a quarter of cases [91]. These alterations resulted in reduced tsMHC-II expression and immunogenicity of tumour cells. DNA hypermethylation has been described at CIITA promoter sites or directly affecting MHC-II genes. Hypermethylation of CIITA-pIV has been demonstrated in gastric cancer [92]. Hypermethylation of HLA-DR and HLA-DQ genes and absence of tsMHC-II expression have been associated with inferior survival in patients with oesophageal squamous cell carcinoma [93]. DNA methyltransferases, such as DNMT1 and DNMT3B, mediate these methylation effects. Their inhibition through genetic inactivation or pharmacological agents has been shown to induce MHC-II expression in colorectal and breast cancer cell lines [27,92].

Histone regulation

Histone acetylation promotes transcription of MHC-II-related genes. B cell lymphoma cells with MHC-II expression were characterised by H3 and H4 acetylation at the HLA-DRA promoter compared with cell lines lacking MHC-II [94].
This process was shown to be induced by IFNγ [94]. In B cell lymphomas, inactivating mutations in the histone acetyltransferase CREBBP resulted in reduced MHC-II expression, further showing the importance of histone acetylation for MHC-II expression [95,96]. Histone deacetylation at the CIITA or HLA-DRA promoters has been observed in vitro in several solid organ and haematological malignancies, abrogating MHC-II expression [94,97,98]. Preclinical data support a role for HDAC inhibitors in up-regulating tsMHC-II expression through a CIITA-dependent mechanism [98,99]. Histone methylation driven by EZH2 has also been shown to regulate MHC-II expression in DLBCL, where tumours with the EZH2 Y641 mutation had low expression of MHC-I and MHC-II [34]. Treatment of these cell lines with an EZH2 inhibitor increased MHC-II expression by reducing H3K27me3 at the CIITA promoter. Importantly, this work not only provides a rationale for targeting EZH2 in combination with ICIs, but also identifies EZH2 mutation as a biomarker to stratify patients who may respond to this combination therapy.

(Figure 2 colour key: green text, expression/over-expression negatively regulates antigen presentation; red text, reduced/loss of expression negatively regulates antigen presentation.)

Transcriptional modulation of MHC-II

Loss of interferon signalling can reduce MHC-II transcription. IRF2 has been shown to be a transcriptional activator of the CIITA-pIV promoter [100]. IRF2 loss has been described in several cancers and is associated with attenuation of MHC-I and MHC-II expression [100,101]. Activation of the MAPK pathway also appears to be associated with reduced expression of MHC-II in NSCLC cell lines [102]. This effect was reversed using MEK inhibitors, indicating that inhibition of the MAPK pathway may increase tsMHC-II expression. CIITA can also be inhibited by factors that competitively bind E-box elements in the CIITA-pIV region, thus preventing transcription. The oncogenes L-MYC and N-MYC have been shown to bind this region in SCLC cell lines, resulting in loss of CIITA transcription [103]. Over-expression of the C-MYC oncogene in Burkitt's lymphoma was also found to impair MHC-II antigen presentation through several mechanisms, including reduced expression of the chaperone protein HLA-DM that regulates neoantigen binding to the MHC-II groove [104]. CIITA expression is also affected by loss of the STAT1 and retinoblastoma tumour suppressor genes, as observed in SCLC, breast and thyroid carcinoma cell lines [105,106]. Conversely, BLIMP-1 (B lymphocyte-induced maturation protein 1) acts as a developmentally conserved repressor of CIITA transcription and is associated with plasma cell differentiation in myeloma [107,108].

Regulation of MHC-II antigen binding

The MHC-II complex is a heterodimer assembled in the ER with the chaperone protein CD74, also known as the invariant chain, which prevents loading of endogenous peptides. The MHC-II/CD74 complex is transported from the ER and fuses with acidic endosomes, where exogenous peptide loading occurs. Cleavage of CD74 leaves the short fragment CLIP (class II-associated invariant chain peptide) blocking the peptide binding groove of MHC-II [109]. The chaperone protein HLA-DM releases CLIP for degradation and catalyses the binding of exogenous peptides to the MHC-II binding groove. Given that CLIP prevents peptide binding onto MHC-II complexes until HLA-DM releases it, CLIP expression is generally inversely proportional to that of HLA-DM [110].
Higher levels of CLIP have been associated with worse prognosis in acute myeloid leukaemia [111]. In contrast, high expression of HLA-DM appears to portend improved survival in ovarian cancer [112]. These findings may relate to the impact of unhindered peptide/MHC-II binding on establishing a robust anti-tumour response. Conclusions Regulation of the APM in cancer is a critical mechanism that governs the anti-tumour immune response, ultimately determining survival outcomes for patients with cancer. Our review highlights the mechanisms of MHC-I/MHC-II regulation in tumour cells. Considerable work has been undertaken to elucidate resistance mechanisms contributing to reduced immune visibility, particularly during the current era of immunotherapeutics. Yet more research is required to understand the mechanisms of APM down-regulation that underpin resistance to current ICIs and to discover novel regulators that may unleash anti-tumour immunity. Perspectives
• Despite the promise of long-term survival using immunotherapeutics in patients with otherwise incurable cancer, many do not respond to treatment due to immune evasion by tumour cells. This review outlines the mechanisms that tumour cells utilise to down-regulate neoantigen presentation to avoid immune recognition and highlights current strategies that may reactivate these pathways.
• Regulation of antigen presentation machinery in tumour cells may occur through genomic, transcriptomic and post-translational modifications. Whilst most evidence to date focuses on elucidating mechanisms of MHC-I down-regulation, emerging research highlights the ability of tumour cells to express MHC-II and impact adaptive anti-tumour immunity.
• Ongoing research aims to identify novel mechanisms of neoantigen presentation regulation. Targeting these pathways with novel or repurposed drugs may enable immunotherapy to work for patients with otherwise limited treatment options.
Competing Interests The authors declare that there are no competing interests associated with the manuscript.
2022-03-29T06:23:00.057Z
2022-03-28T00:00:00.000
{ "year": 2022, "sha1": "565d2f85345c17ac7235119490605811e2534bb2", "oa_license": "CCBY", "oa_url": "https://portlandpress.com/biochemsoctrans/article-pdf/50/2/825/932224/bst-2021-0961c.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "8ac8e5e8bf333ab7fe5c782be5dbf75f658c24c4", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
218632502
pes2o/s2orc
v3-fos-license
Endogenous oxidized DNA bases and APE1 regulate the formation of G-quadruplex structures in the genome
Significance G-quadruplex (G4) structures in functionally important genomic regions regulate multiple biological processes in cells. This study demonstrates a genome-wide correlation between the occurrence of endogenous oxidative base damage, activation of BER, and formation of G4 structures. Unbiased mapping of AP sites, APE1 binding, and G4 structures across the genome reveals a distinct distribution of AP sites and APE1 binding, predominantly in G4 sequences. Furthermore, APE1 plays an essential role in regulating the formation of G4 structures and G4-mediated gene expression. Our findings unravel a paradigm-shifting concept: endogenous oxidized DNA base damage and binding of APE1 in key regulatory regions of the genome have acquired a novel function in regulating the formation of G4 structures that control multiple biological processes.
Formation of G-quadruplex (G4) DNA structures in key regulatory regions in the genome has emerged as a secondary structure-based epigenetic mechanism for regulating multiple biological processes, including transcription, replication, and telomere maintenance. G4 formation (folding), stabilization, and unfolding must be regulated to coordinate G4-mediated biological functions; however, how cells regulate the spatiotemporal formation of G4 structures in the genome is largely unknown. Here, we demonstrate that endogenous oxidized guanine bases in G4 sequences and the subsequent activation of the base excision repair (BER) pathway drive the spatiotemporal formation of G4 structures in the genome. Genome-wide mapping of the occurrence of apurinic/apyrimidinic (AP) site damage, binding of BER proteins, and G4 structures revealed that oxidized base-derived AP site damage and binding of OGG1 and APE1 are predominant in G4 sequences. Loss of APE1 abrogated G4 structure formation in cells, which suggests an essential role of APE1 in regulating the formation of G4 structures in the genome. Binding of APE1 to G4 sequences promotes G4 folding, and acetylation of APE1, which enhances its residence time, stabilizes G4 structures in cells. APE1 subsequently facilitates transcription factor loading to the promoter, providing mechanistic insight into the role of APE1 in G4-mediated gene expression. Our study unravels a role of endogenous oxidized DNA bases and APE1 in controlling the formation of higher-order DNA secondary structures to regulate transcription, beyond their well-established role in safeguarding genomic integrity.
endogenous damage | 8-oxoguanine | G-quadruplex structures | APE1 | base excision repair
G-quadruplexes (G4) are noncanonical tetrahelical nucleic acid structures that arise from the self-stacking of two or more guanine quartets, each a planar array of four guanine residues coordinated through Hoogsteen hydrogen bonding (1,2). Numerous in vitro biochemical and structural analyses have established that both DNA and RNA sequences with a G4 consensus motif (G≥3 N1–7 G≥3 N1–7 G≥3 N1–7 G≥3) can form G4 structures (3). The formation of G4 DNA structures in the genome has emerged as an epigenetic mechanism for regulating transcription, replication, translation, and telomere maintenance (4). Dysregulation of G4 formation (folding) or unfolding has been implicated in transcriptional dysregulation, telomere defects, replication stress, genomic instability, and many human diseases, including cancer and neurodegeneration (5,6). 
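The consensus motif above is simple enough to scan for computationally. As a minimal illustration (this is not the authors' pipeline; they use the QGRS mapper web server, described below), a regular-expression scan for canonical PQS hits on one strand might look like this; the demo sequence is an invented G-rich fragment, not a real locus:

```python
import re

# Canonical PQS: four runs of >=3 guanines separated by loops of 1-7 nucleotides.
PQS_PATTERN = re.compile(r"G{3,}[ACGT]{1,7}G{3,}[ACGT]{1,7}G{3,}[ACGT]{1,7}G{3,}")

def find_pqs(sequence):
    """Return (start, hit) for each non-overlapping PQS match on one strand."""
    seq = sequence.upper()
    return [(m.start(), m.group()) for m in PQS_PATTERN.finditer(seq)]

# Hypothetical G-rich promoter fragment (illustrative only).
demo = "TTAGGGGAGGGTGGGGAGGGTGGGGAAGGTT"
print(find_pqs(demo))  # one PQS hit spanning four G tracks
```

A complete scan would also check the reverse complement, since PQSs on the C-rich strand read as C runs on the strand shown.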
The recent genome-wide mapping of G4s in human cells using high-throughput chromatin immunoprecipitation sequencing (ChIP-Seq) was a breakthrough in establishing the regulatory role of G4s in vivo (4,7). Mapping revealed that G4 structures are overrepresented in key regulatory regions such as gene promoters, 5′ and 3′ untranslated regions, and telomeric regions, which indicates a positive selective pressure for retention of these motifs at specific sites in the genome to regulate multiple biological processes. In vitro, many G4 DNA structures are thermodynamically more stable than double-stranded DNA (8). However, G4 formation (folding), stabilization, and unfolding must be regulated to coordinate biological processes. Several proteins (DNA or RNA helicases) that resolve G4 structures have been characterized (9); however, the mechanisms underlying the spatiotemporal formation and stabilization of G4 structures in the genome are largely unknown. Guanine (G) bases in potential G-quadruplex-forming sequences (PQS) have the lowest redox potential and are susceptible to the formation of 8-oxoguanine (8-oxoG), a prevalent endogenous oxidized DNA base lesion in the genome (10,11). The 8-oxoG DNA glycosylase (OGG1) initiates the repair of 8-oxoG via the evolutionarily conserved DNA base excision repair (BER) pathway. OGG1 removes the oxidized base to generate an apurinic/apyrimidinic (AP/abasic) site (12), and human AP endonuclease 1 (APE1) is subsequently recruited to the AP site for repair through the BER pathway (13). APE1, a key enzyme in the BER pathway, is a multifaceted protein involved in telomere maintenance, transcription regulation, and antibody genesis, which highlights the role of BER beyond its genome maintenance function (14)(15)(16). However, the molecular and functional connection of endogenous DNA damage and APE1 with G4 structures remains largely unclear. Here, we conduct an unbiased genome-wide mapping of G4 structures, AP sites, and binding of APE1 and OGG1 proteins, and provide direct evidence that the occurrence of endogenous oxidized DNA base-derived AP site damage is nonrandom and is predominant in PQS sequences. High-resolution microscopy of G4 dynamics in cells revealed that oxidized DNA base damage and the associated repair complexes play a critical role in the spatiotemporal regulation of G4 structures. Loss of either stable AP site binding/coordination or acetylation of APE1 results in the abrogation of G4 structures and dysregulation of G4-mediated gene expression. 
Using in vitro biophysical and cell biological assays, we provide evidence that AP site damage and binding of APE1 to a PQS promote G4 formation and facilitate transcription factor (TF) loading to regulate gene expression. Overall, our study comprehensively elucidates the role of endogenous oxidized DNA base damage and APE1 in controlling the formation of G4 structures in the genome to regulate transcription and other biological processes. Results Genome-wide Mapping of Endogenous AP Site Damage and Binding of Repair Proteins. AP sites are the most prevalent type of endogenous DNA damage in cells and are generated spontaneously or after cleavage of modified bases, including oxidized G in the BER pathway (17). To determine the genome-wide occurrence of AP site damage, we developed a technique to map AP sites (AP-seq) in the genome, which employs a biotin-labeled aldehyde-reactive probe that reacts specifically with an AP site in DNA (18,19). By pull-down of biotin-tagged AP site DNA with streptavidin followed by sequencing, we could map AP sites in the genome at ∼300-base pair (bp) resolution (see SI Appendix, Fig. S1 A, Left and Materials and Methods for details). We found a statistically significant (P < 0.001) occurrence of AP site damage in specific regions in the genome of lung adenocarcinoma A549 and colon cancer HCT116 cells (Fig. 1A and SI Appendix, Fig. S1B). We also mapped genome-wide occupancy of APE1, the primary enzyme responsible for repairing AP sites, and acetylated APE1 (AcAPE1), which is acetylated at AP site damage in chromatin (20), in A549 and HCT116 cells by ChIP-Seq analysis using α-APE1 and α-AcAPE1 antibodies (Abs) (repair-seq; Fig. 1A and SI Appendix, Fig. S1 A, Right). The disappearance of AcAPE1 peaks in HCT116 cells expressing APE1-specific short hairpin RNA (shRNA) compared to isogenic wild-type (WT) HCT116 cells confirms the specificity of APE1 binding (SI Appendix, Fig. S1B). To analyze the genome-wide correlation between AP sites (AP-seq), APE1, and AcAPE1, we used the StereoGene method to estimate Kernel correlation (KC), which provides correlation as a function of genomic position (21). Genome-wide correlation analysis between AP sites and APE1 binding revealed a statistically significant, positive KC (KC = 0.2, P = 10^−10) and Spearman's correlation (r = 0.84) (Fig. 1B and SI Appendix, Fig. S1C), which suggests that generation of AP sites and binding of APE1 and AcAPE1 are nonrandom and predominantly occur in specific regions in the genome. Analysis of endogenous AP site damage and AcAPE1 binding distribution relative to annotated genomic features revealed a predominant occurrence (∼60%) in gene bodies (exons and introns) and gene promoter regions (2,000 bp upstream and downstream, i.e., ±2 kilobases [kb], of the transcription start site [TSS]) (Fig. 1C and SI Appendix, Fig. S1D). Interestingly, we observed a significant fraction (∼10%) of AP site damage and APE1 and AcAPE1 binding in promoter regions, although promoters represent a very small portion of the human genome. Analysis of distribution across ±2 kb of the TSS revealed that AP site damage and binding of APE1 and AcAPE1 are significantly enriched upstream and downstream of the TSS (Fig. 1D). Furthermore, genome-wide karyogram mapping revealed clusters of APE1 and AcAPE1 in gene-rich regions (SI Appendix, Fig. S1 E and F). The AP-seq and AcAPE1 ChIP-Seq data were validated by real-time ChIP-PCR analysis of the MYC and P21 gene promoters with or without induction of oxidative DNA damage (Fig. 1E). 
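For readers who want the flavor of this correlation analysis: the genome-wide agreement between AP-seq and APE1 ChIP-Seq signal is, at its core, a correlation between two binned coverage tracks. The sketch below uses synthetic per-bin counts (StereoGene's kernel correlation is more sophisticated; bin count, noise levels, and the shared-signal model here are invented for illustration):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy genome of 5,000 bins: shared "damage-prone" regions drive both tracks.
shared = rng.gamma(shape=2.0, scale=1.0, size=5000)
ap_seq = shared + rng.normal(0, 0.3, 5000)   # AP-site signal per bin
ape1   = shared + rng.normal(0, 0.3, 5000)   # APE1 ChIP signal per bin

rho, p = spearmanr(ap_seq, ape1)
print(f"Spearman r = {rho:.2f} (P = {p:.1e})")  # high r mimics the reported r = 0.84
```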
Consistent with our ChIP-Seq data, superresolution (110 nm) structured illumination microscopy (SIM) revealed that AcAPE1 localizes to specific regions in the genome that bear the active enhancer marker H3K27ac and the active promoter marker H3K4me3 (Fig. 1F). Collectively, these results demonstrate that endogenous AP site damage is not randomly distributed in the genome but is predominant at defined gene transcription regulatory regions. Genome-wide Mapping of Oxidative Base Damage and G4 Structure Formation. Analysis of AcAPE1-bound regions showed enrichment of G-rich sequences, which significantly overlapped with PQS in the genome. We used QGRS mapper (a web-based server for G-quadruplex prediction) (22) and found that 72% of AcAPE1-bound sequences had a PQS score greater than 20 (a PQS score ≥20 is considered significant). Upon overlap with a genome-wide PQS map (23) (with a cutoff of ≥20), AcAPE1-bound sequences were found to be predominantly enriched in promoter and gene regulatory regions (Fig. 1G). Since G bases in PQSs are more susceptible to forming 8-oxoG, and repair of 8-oxoG is initiated by OGG1 (10), we mapped the genome-wide binding of OGG1 using our previously generated acetylated OGG1 (AcOGG1) antibody (24) (Fig. 2A). The AcOGG1 ChIP-Seq showed a significant positive Spearman's correlation (r > 0.8) with APE1 and AcAPE1 binding (SI Appendix, Fig. S2A). To examine the association between AP site damage, binding of repair proteins, and G4 formation, we mapped the genome-wide occurrence of G4s using a G4-specific antibody (BG4) (25). As previously observed, G4 structures were enriched in promoters, 5′ untranslated regions (UTR), and gene bodies (Fig. 2B) (4). We observed genome-wide statistically significant, positive KCs and Spearman's correlations between G4 and APE1, AcAPE1, or AcOGG1 (Fig. 2C and SI Appendix, Fig. S2A), which indicates a genome-wide relationship between oxidized base damage, binding of OGG1 and APE1, and G4 structures (Fig. 2 B and C). Many have reasoned that gene promoters experience negative superhelicity during transcription, which converts the duplex DNA to a G-quadruplex on the purine-rich strand (26). We inhibited either transcription initiation or elongation with Triptolide and Actinomycin D, respectively (27), and performed APE1, AcAPE1, AcOGG1, and G4 ChIP-Seq. Interestingly, we found no significant alteration of the APE1 or G4 ChIP-Seq profiles after inhibition of transcription (SI Appendix, Fig. S2C). Furthermore, MYC and P21 promoter-directed real-time ChIP-PCR after Actinomycin D or Triptolide treatment showed no change in APE1 binding or G4 formation (SI Appendix, Fig. S2 D-G). These observations indicate that the occurrence of AP sites or G4 formation is not an indirect consequence of transcription. However, we found a significant reduction in enrichment of AcAPE1 and AcOGG1, which indicates that acetylation of these proteins is dependent on active transcription (SI Appendix, Fig. S2 C-G). Together, our data reveal a genome-wide correlation between AP site damage, binding of AcOGG1 and APE1, and formation of G4 structures, which suggests a link between endogenous damage, activation of BER, and G4 formation in vivo. Stable AP Site Binding and Acetylation of APE1 Play a Crucial Role in the Formation of G4 Structures in Cells. G4 structures were visualized in human cells by confocal and SIM microscopy using the G4 DNA-specific antibody 1H6. 
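Operationally, the 72% figure quoted above is the fraction of AcAPE1 peak intervals that intersect at least one PQS interval with score ≥20. A minimal single-chromosome overlap counter is sketched below (real analyses would use bedtools or pybedtools; the coordinates are made up for the demo):

```python
def fraction_overlapping(peaks, annotations):
    """Fraction of peaks overlapping at least one annotation interval.
    Naive O(n*m) scan over half-open (start, end) intervals; fine for a demo."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    hits = sum(any(overlaps(p, a) for a in annotations) for p in peaks)
    return hits / len(peaks) if peaks else 0.0

acape1_peaks = [(100, 200), (500, 600), (900, 950)]  # hypothetical AcAPE1 peaks
pqs_hits     = [(150, 180), (610, 700)]              # hypothetical PQS (score >= 20)
print(fraction_overlapping(acape1_peaks, pqs_hits))  # 1 of 3 peaks overlaps -> 0.33
```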
We found G4 foci in the nuclei of several cell lines, including primary lung fibroblast IMR90, A549, HCT116, and mouse embryonic fibroblasts (MEFs) (Fig. 3 A and B and SI Appendix, Fig. S3A). G4 foci detected by 1H6 were sensitive to DNase treatment, but not RNase A treatment (SI Appendix, Fig. S3B), which confirmed the specificity of 1H6 for DNA G4 structures. We observed a high colocalization frequency (r = 0.78) of G4 structures and APE1 or AcAPE1 staining (Fig. 3A). Down-regulation of APE1 levels by transient expression of APE1 small interfering RNA (siRNA) or stable expression of APE1 shRNA in HCT116 cells abolished the formation of G4 foci compared to isogenic control HCT116 cells, which indicates the critical importance of APE1 in regulating the formation or stabilization of G4 structures in the genome (Fig. 3B and SI Appendix, Fig. S3C). We further confirmed this finding by expressing two independent APE1 shRNAs under a Doxycycline (Dox)-inducible promoter (Fig. 3C) and observed a significant reduction in G4 foci formation upon Dox treatment (Fig. 3C and SI Appendix, Fig. S3 D and E). As a negative control, we stained for histone H3K27Ac and c-Jun in APE1 down-regulated cells and observed no change in staining, which further supports that APE1 down-regulation specifically reduced G4 staining (Fig. 3B and SI Appendix, Fig. S4A). To determine whether binding of APE1 to AP site damage is essential for the formation of G4 structures, we treated HCT116 cells with methoxyamine (MX), a small molecule that binds to AP sites and competitively inhibits binding of APE1 to AP sites both in vitro and in cells. We found that pretreatment of cells with MX significantly inhibited the formation of G4 structures (Fig. 3B). Furthermore, we assessed whether OGG1, which initiates the repair of 8-oxoG by generating an AP site, plays a role in the formation of G4 by comparing G4 staining between WT (OGG1+/+) MEF and OGG1−/− MEF cells (28). Formation of G4 foci was significantly reduced in OGG1−/− MEF (Fig. 3D). We also tested whether deletion of the ECD gene, which encodes a non-BER protein, in adeno-Cre-expressing ECDfl/fl MEFs (29) affected G4 staining, and found that the deletion did not alter G4 foci formation (SI Appendix, Fig. S4 B-D). Altogether, these data demonstrate that the absence of either OGG1 or APE1 abolished the formation of most genomic G4 in cells, which suggests their critical role in G4 formation. Interestingly, although down-regulation of APE1 nearly abolished G4 staining with the 1H6 antibody, ChIP-Seq data revealed APE1 bound to only 50% of G4-enriched sequences (Fig. 2D). To resolve the apparent disagreement between the two results, we conducted G4 ChIP-Seq in APE1 down-regulated cells to determine the differential G4 peak formation in the absence of APE1. Our analysis revealed a significant reduction of G4-enriched peaks upon APE1 knockdown in A549 cells (Fig. 4A). 
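Colocalization frequencies like the r = 0.78 quoted above are typically pixel-wise Pearson correlations between the two fluorescence channels. A schematic version follows (synthetic images; a real analysis would first mask to the nuclear region and correct background):

```python
import numpy as np

def pearson_colocalization(ch1, ch2):
    """Pixel-wise Pearson coefficient between two intensity images."""
    a = ch1.ravel().astype(float)
    b = ch2.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(1)
foci = rng.random((64, 64)) > 0.95              # shared foci positions
g4   = foci * 200 + rng.normal(20, 5, (64, 64))  # "G4 channel"
ape1 = foci * 150 + rng.normal(20, 5, (64, 64))  # "APE1 channel"
print(f"r = {pearson_colocalization(g4, ape1):.2f}")
```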
Reduced BG4 ChIP-Seq peaks between control and APE1 knockdown cells were determined using the THOR peak caller, based on a log2 fold change of ≤0.5 and an adjusted P value of ≤0.05 (30). Distribution of the reduced G4 peaks relative to annotated genomic features revealed a predominant occurrence in promoter and gene body regions of the genome (SI Appendix, Fig. S5A). Interestingly, some G4 peaks instead increased in some regions of the genome in the absence of APE1, which suggests that APE1 regulates and stabilizes the formation of most, but not all, G4 structures in the genome. APE1 is a multifunctional protein with a DNA repair function (AP endonuclease activity) and a transcription-regulatory Ref-1 function (31). Furthermore, several of our previous studies had shown that APE1 is acetylated at multiple Lys residues (Lys6, 7, 27, 31, and 32) after binding to AP site damage in chromatin, and that AcAPE1 modulates the expression of genes by functioning as a transcriptional coactivator or corepressor (20,32). Ectopic expression of WT APE1 restored G4 foci in APE1-knockdown cells, whereas repair-defective H309A and acetylation-defective K5R APE1 mutants failed to restore the formation of G4 foci in cells (Fig. 3E). The H309 residue in the active-site pocket of APE1 is critical for forming a hydrogen bond with the phosphate oxygen of the AP site and for coordinating catalysis (34)(35)(36). Thus, this result suggests that disruption of AP site binding and of the formation of a stable APE1 preincision complex abrogates G4 formation. Altogether, both AP site binding and acetylation of APE1 play a crucial role in the formation of genomic G4 structures in cells. APE1 Modulates G4-Mediated Expression of Genes. To identify genes that are regulated by APE1 and G4 structures, we performed RNA-Seq analysis with control and Dox-inducible APE1 knockdown (APE1 KD) A549 cells and compared differentially expressed genes (≥twofold change, adjusted P ≤ 0.05) with our A549 AcAPE1, AcOGG1, and G4 ChIP-Seq and AP-seq data. We found that 33% of differentially expressed genes have overlapping peaks of AP site damage, AcAPE1, AcOGG1, and G4 structures (Fig. 4B and Dataset S2). The role of promoter G4 structures in regulating the expression of protooncogenes, such as MYC, KRAS, and BCL-2, has been well characterized in multiple studies (37)(38)(39). We found that these oncogene G4 promoters have overlapping endogenous AP site damage and AcAPE1 occupancy in A549 and HCT116 cells (Fig. 4C). Thus, to understand the role of APE1 in G4-mediated gene transcription, we used the oncogenes KRAS and MYC as models in our study. Promoter-directed ChIP-qPCR showed significant enrichment of APE1, AcAPE1, and G4 in the previously reported KRAS G4 promoter region (37), but not in a control non-G4 sequence region (Fig. 5A). To establish whether APE1 is involved in folding of the KRAS promoter G4, we performed ChIP-qPCR in WT and APE1 KD cells. The qPCR amplification revealed a significant reduction of G4 enrichment on the KRAS promoter compared to the negative control region in APE1 KD cells (Fig. 5 B and C). To investigate the importance of APE1 acetylation for G4 structure formation, we analyzed the correlation of the genome-wide occupancy of p300 [the acetyltransferase responsible for APE1 acetylation] with that of AcAPE1 and G4 structures (SI Appendix, Fig. S5B). Moreover, inhibition of APE1 acetylation by overexpression of the adenovirus E1A 12S protein (E1A inhibits the HAT function of p300) abrogated G4 enrichment in the KRAS promoter (SI Appendix, Fig. S5C), which indicates that AcAPE1 is important for G4 folding. 
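Mechanically, the "reduced peak" call described above is a filter on per-peak fold change and adjusted P value. A toy version of that filter follows; the cutoffs mirror those quoted above, interpreted here as a knockdown/control fold change of ≤0.5, and THOR's normalization and P-value model are deliberately not reproduced (peak names and counts are invented):

```python
import math

def reduced_g4_peaks(peaks, fc_cutoff=0.5, padj_cutoff=0.05):
    """Keep peaks whose KD/control signal ratio and adjusted P pass the cutoffs.
    peaks: dicts with normalized counts 'ctrl', 'kd' and an adjusted P 'padj'."""
    kept = []
    for p in peaks:
        fc = (p["kd"] + 1.0) / (p["ctrl"] + 1.0)   # pseudocount avoids /0
        if fc <= fc_cutoff and p["padj"] <= padj_cutoff:
            kept.append({**p, "log2fc": math.log2(fc)})
    return kept

demo = [
    {"name": "KRAS_promoter_G4", "ctrl": 120, "kd": 25, "padj": 0.001},
    {"name": "unchanged_region", "ctrl": 40, "kd": 38, "padj": 0.80},
]
print([p["name"] for p in reduced_g4_peaks(demo)])  # ['KRAS_promoter_G4']
```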
A recent study has shown that specific G oxidation and binding of the MAZ TF to the KRAS G4 promoter sequence regulate KRAS expression (37). ChIP-qPCR analysis in control, APE1 KD, or E1A-overexpressing cells showed decreased MAZ occupancy on the KRAS G4 promoter in the absence of APE1 or AcAPE1 (Fig. 5 B and C and SI Appendix, Fig. S5C). Furthermore, induction of oxidative DNA damage by glucose oxidase (GO) increased enrichment of G4, APE1, and MAZ on the KRAS G4 promoter in WT cells but not in APE1 KD HCT116 cells (Fig. 5D). Induction of oxidative damage by GO treatment increased KRAS gene expression in WT cells but not in APE1 KD cells (Fig. 5E). Importantly, APE1 KD attenuates both basal and oxidative stress-induced KRAS expression (Fig. 5E). Ectopic expression of WT APE1 in APE1 KD cells restored KRAS gene expression; however, acetylation-defective K5R APE1 or active-site mutant H309A APE1 could not restore KRAS gene expression (Fig. 5F), which shows that both stable AP site binding and acetylation of APE1 are necessary for modulating KRAS gene expression. Our ChIP-Seq data showed enrichment of APE1, AcAPE1, and G4 structures in the previously reported MYC G4 promoter, which was also validated by ChIP-qPCR (41) (Figs. 2B and 1E, and SI Appendix, Fig. S5D). To examine the role of APE1 in regulating G4-mediated gene expression, we utilized a promoter luciferase reporter with the WT c-MYC (MYC-WT) G4 sequence in the upstream promoter region of a firefly luciferase coding gene. The expression of c-MYC firefly luciferase was normalized to the relative expression of the renilla luciferase gene with a non-G4 promoter sequence (pRL-TK). We found that induction of oxidative damage by hydrogen peroxide activated MYC-WT luciferase expression in WT cells but not in APE1 KD cells (Fig. 6A). Importantly, APE1 KD attenuated both basal and oxidative stress-induced MYC-WT luciferase activity (Fig. 6A). The c-MYC promoter PQS sequence has five G tracks and was shown to form two alternative G4 structures in vitro, each utilizing four G tracks. The c-MYC promoter G4 can form using the second, third, fourth, and fifth G tracks (MYC-2345 G4; the predominant form) or the first, second, fourth, and fifth G tracks (MYC-1245 G4) (Fig. 6B) (41). To confirm whether the effect of APE1 on gene expression is mediated through the G4 sequence, we introduced two separate mutations in the c-MYC G4 sequence of the promoter luciferase reporter: 1) MYC-G12A (G to A mutation of the 12th position of the c-MYC G4), which can only form MYC-1245 G4, and 2) MYC-G18A (G to A mutation of the 18th position of the c-MYC G4), which cannot form a G4 structure. Our results demonstrate that MYC-G12A has increased luciferase activity relative to MYC-WT (Fig. 6C), which suggests that the MYC-1245 conformation is involved in the induction of promoter activity compared to MYC-2345 G4, which was previously shown to negatively regulate MYC gene expression (42). MYC-G18A, which cannot form a G4, had significantly decreased luciferase activity relative to MYC-WT (Fig. 6C). Down-regulation of APE1 in cells reduced MYC-WT (∼fourfold) and MYC-G12A (∼twofold) luciferase expression, but did not significantly affect MYC-G18A luciferase expression (Fig. 6C). Similarly, activation of BCL-2 G4 promoter luciferase expression upon oxidative damage was found to be dependent on APE1 (SI Appendix, Fig. S5E) (38). 
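The reporter arithmetic here is worth making explicit: the firefly signal (G4 promoter construct) is divided by the renilla signal (pRL-TK co-transfection control), and conditions are then compared as ratios of these normalized values. A minimal sketch with invented luminescence numbers:

```python
def normalized_activity(firefly, renilla):
    """Promoter activity normalized to the co-transfected renilla control."""
    return firefly / renilla

# Hypothetical raw luminescence readings (arbitrary units).
basal   = normalized_activity(firefly=5.0e4, renilla=2.0e4)
induced = normalized_activity(firefly=1.5e5, renilla=2.1e4)
print(f"fold activation after oxidative damage: {induced / basal:.1f}x")
```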
Overall, our results suggest that APE1 modulates G4-mediated gene expression, such as that of MYC or KRAS, by promoting the formation of G4 structures and facilitating TF loading. Binding of APE1 to AP Site Damage in a PQS Promotes G4 Folding, and Acetylation of APE1 Enhances Residence Time. Finally, we investigated the mechanistic connection of AP site damage and binding of APE1 in a PQS with the stable formation of G4 structures, using a 28-mer c-MYC promoter G4 (WT MYC) oligo. To test the effects of AP site damage in G4 sequences, we used MYC promoter G4 oligos containing an AP site analog (tetrahydrofuran) at G12 (MYC G12AP) or two AP site analogs at G12 and G18 (MYC G12AP/G18AP) (Fig. 7A). Circular dichroism (CD) spectroscopy was performed to determine the secondary structure of the DNA (43). The presence of a strong positive peak at 265 nm with a weak negative signal at 240 nm is indicative of a parallel G4 structure. We found that WT MYC and MYC G12AP G4 formation increased in the presence of KCl (Fig. 7A). The MYC G12AP/G18AP oligo, with AP sites in both the third and fourth G tracks, served as a negative control, as it was unable to form a G4 structure in the presence of KCl. To test whether APE1 can induce the formation of stable G4s, we incubated MYC G12AP and G12AP/G18AP oligos with recombinant APE1 (rAPE1). The addition of APE1 stimulated the folding of the MYC G12AP oligo into a G4 structure even in the absence of KCl, which suggests that APE1 promotes G4 folding in vitro (Fig. 7B). In contrast, addition of APE1 to the MYC G12AP/G18AP oligo, which cannot form a G4, showed no effect on CD signals. We also observed that, in vitro, AcAPE1 also enhanced the formation of G4 (Fig. 7C). To examine whether stable coordination/binding with the AP site or the catalytic activity of APE1 is required for promoting G4 folding in vitro, we incubated WT APE1 or the H309A or N212A APE1 mutants with the MYC G12AP oligo. X-ray crystallography and molecular modeling studies of APE1 bound to substrate AP site DNA, together with in vitro activity assays, suggest that H309 stabilizes the preincision complex by forming a hydrogen bond with a nonbridging phosphate oxygen atom of the AP site to orient and polarize the phosphate backbone for the nucleophilic attack mediated by N212 (33,35,36,44,45). Therefore, H309 in the active pocket primarily plays a role in stabilization of the APE1-AP site preincision complex, while N212 is crucial for AP site cleavage or catalysis. Our CD results demonstrate that, while both WT APE1 and the catalytically inactive N212A APE1 mutant can promote MYC G12AP folding, the H309A mutant was unable to stimulate G4 folding (Fig. 7D and SI Appendix, Fig. S6A), suggesting that formation of a stable APE1-AP site preincision complex, rather than AP site cleavage or the catalytic activity of APE1, is required for promoting G4 formation in vitro. 
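The spectral criterion used throughout — a positive band near 265 nm and a negative band near 240 nm for a parallel G4 — can be expressed as a simple heuristic over a wavelength/ellipticity trace. A toy check on synthetic data follows (the thresholds and the synthetic spectrum are illustrative assumptions, not part of the study):

```python
import numpy as np

def looks_parallel_g4(wavelength_nm, ellipticity, pos_thresh=2.0, neg_thresh=-1.0):
    """Heuristic: positive CD peak ~265 nm plus negative trough ~240 nm."""
    wl = np.asarray(wavelength_nm)
    cd = np.asarray(ellipticity)
    near265 = cd[(wl >= 260) & (wl <= 270)]
    near240 = cd[(wl >= 235) & (wl <= 245)]
    return near265.max() >= pos_thresh and near240.min() <= neg_thresh

wl = np.arange(220, 310)
cd = 5 * np.exp(-((wl - 265) / 8.0) ** 2) - 2 * np.exp(-((wl - 240) / 6.0) ** 2)
print(looks_parallel_g4(wl, cd))  # True for this synthetic parallel-G4-like trace
```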
Interestingly, we found that, while APE1 alone has very little effect on WT MYC G4 folding, APE1 can stimulate its folding in the presence of KCl (SI Appendix, Fig. S6B), which indicates that APE1 can stabilize a preformed G4 structure in vitro. We compared the binding affinity of APE1 for AP site-containing MYC duplex (DS) and preformed G4 MYC promoter oligos by electrophoretic mobility-shift assay (EMSA). We found that rAPE1 has an equal binding affinity for an AP site whether it is present in the MYC duplex or the G4 (Fig. 8A); however, APE1 could not stably bind to a MYC G4 oligo without an AP site (SI Appendix, Fig. S6C). Interestingly, our EMSA data show that the binding ability of the H309A and N212A mutant APE1 proteins to an AP site-containing DNA oligo is comparable to that of the WT APE1 protein (SI Appendix, Fig. S6D). A previous study observed that APE1 could bind to AP sites within a G4 structure, but the cleavage rate is attenuated (46). Consistent with this, we also found that rAPE1 cleaves an AP site less efficiently when it is present in a G4 (Fig. 8B). We recently showed that APE1 is acetylated after binding to an AP site in chromatin (20). Our current study suggests that both stable interaction or coordination with the AP site and the acetylation of APE1 are essential for G4 formation in cells (Fig. 3E); therefore, we examined the effect of acetylation on the endonuclease activity of APE1 on a G4 substrate. Recombinant AcAPE1 (rAcAPE1) had similar activity compared to unmodified rAPE1 when the AP site was present in a folded G4 structure (Fig. 8B and SI Appendix, Fig. S6E). It was previously shown that APE1 remains bound to the cleaved AP site and coordinates recruitment of the downstream BER enzyme polymerase β (47). To test whether acetylation of APE1 increases its residence time at AP site damage in chromatin, we measured the mobility of WT APE1-GFP and Lys6/Lys7 acetylation-defective RR-APE1-GFP at damage sites by performing fluorescence recovery after photobleaching (FRAP) assays with or without induction of AP site damage in cells. We found that the WT APE1-GFP protein has a higher residence time at damage sites and a smaller mobile fraction compared to RR-APE1-GFP in control or damage-treated cells (Fig. 8C). The FRAP results suggest that acetylation may impede the complete repair of an AP site and increase the residence time of APE1 on a G4 structure to coordinate transcriptional activation or repression via regulating the loading of TFs to promoters. Discussion Oxidative DNA damage is conventionally viewed as detrimental to cellular processes. However, mounting evidence supports an interplay between oxidative stress signaling, formation of 8-oxoG and AP sites, binding of the cognate repair proteins OGG1 and APE1 in promoters, and transcriptional activation or repression of mammalian genes (48)(49)(50). Studies have shown that oxidized DNA base damage has a strong positive correlation with elevated oncogene and proinflammatory gene expression (51,52). Of note, many genes with a G4 promoter are known to be regulated by oxidative stress (53). Recent studies have demonstrated that 8-oxoG or AP sites in the G4-forming promoter sequences of VEGF, BCL-2, and KRAS induced up-regulation of the respective genes (37,38,54). 
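Quantities like "residence time" and "mobile fraction" in the FRAP experiment above come from fitting the post-bleach recovery curve; a common minimal model is single-exponential recovery, I(t) = F_mobile · (1 − e^(−t/τ)). A sketch with synthetic data (real FRAP analysis also corrects for acquisition photobleaching and background; all numbers below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, mobile_fraction, tau):
    """Single-exponential FRAP recovery, normalized to pre-bleach intensity = 1."""
    return mobile_fraction * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 60, 120)  # seconds post-bleach
rng = np.random.default_rng(2)
signal = recovery(t, 0.6, 8.0) + rng.normal(0, 0.02, t.size)

(mf, tau), _ = curve_fit(recovery, t, signal, p0=(0.5, 5.0))
print(f"mobile fraction = {mf:.2f}, tau = {tau:.1f} s")
# A smaller mobile fraction / larger tau for WT APE1-GFP versus the K6R/K7R
# mutant would correspond to the longer residence time reported above.
```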
In this study, using genome-wide ChIP-Seq analyses, cell-based assays, and in vitro biochemical analyses, we have provided a mechanistic framework linking oxidized DNA base-derived AP sites and binding of APE1 to G4 formation/stability and the control of gene expression. We demonstrate that the occurrence of endogenous oxidized base-derived AP site damage and APE1 binding is not random and is predominant in G4 sequences. Furthermore, our study reveals an essential role of APE1 in the formation of G4s in the genome to regulate gene expression. The observations that loss of OGG1, APE1, or AcAPE1 abrogates G4 formation raise questions about the biological and mechanistic roles of these proteins and their acetylation in the formation of G4s. We propose that cellular oxidants oxidize guanine bases in PQSs, which recruits OGG1 to initiate the BER pathway (Fig. 9). OGG1 cleaves an 8-oxoG to generate an AP site and remains bound to the AP site (55). OGG1 is then acetylated by the histone acetyltransferase p300, a protein commonly found in promoters/enhancers and gene bodies due to its association with TFs and RNA Pol II. We previously demonstrated that the acetylation of OGG1 enhances its catalytic turnover by reducing its affinity for AP sites in DNA (24). The generation of an AP site significantly impacts the thermal stability of duplex DNA, unlike 8-oxoG paired with C (56); therefore, we propose that the generation of an AP site in a PQS after excision of an 8-oxoG by OGG1 destabilizes and opens up the duplex (57,58). Subsequent binding and stable coordination of APE1 with the AP site in the PQS promotes the formation and stabilization of a G-quadruplex structure with the AP site-containing G track looped out. APE1 bound to an AP site is then acetylated by p300, which reduces its dissociation from AP sites (59). AcAPE1 bound to a G4 structure is then primed to stimulate the loading of activator or repressor TFs. Several lines of evidence support this model. We have shown a genome-wide overlap of AP site damage, APE1/AcAPE1, AcOGG1, and G4 (Figs. 1 A and B and 2 A-C). Furthermore, we observed a positive correlation between the occupancy of p300, AcAPE1, and G4 (SI Appendix, Fig. S5B), and loss of APE1 or inhibition of its acetylation reduced MAZ TF loading on the KRAS G4 promoter (Fig. 5 B-D and SI Appendix, Fig. S5C). Our finding that APE1 is required for MAZ TF loading to the KRAS G4 promoter is further supported by several previous studies which have reported the interaction and cooccupancy of APE1 with TFs, such as STAT3, HIF-1α, AP-1, nuclear factor (NF)-κB, and HDAC1, at promoter regions (40,(59)(60)(61). The higher mobility, or lower residence time, of acetylation-defective APE1 (APE1 K6R/K7R) in chromatin compared to WT APE1 indicates that acetylation of APE1 delays its dissociation from an AP site (Fig. 8C). Furthermore, AcAPE1-bound regions significantly overlap with G4-forming regions that bear active enhancer and promoter histone marks (Fig. 1F). Finally, our in vitro CD experiments show that rAPE1 stimulates the formation/folding of MYC G4 (Fig. 7B). Our in vitro CD data demonstrate that stabilization of the APE1-AP site preincision complex, but not cleavage of the AP site in the MYC PQS oligo, is required for promoting the formation of G4 structures in vitro (Fig. 7 B and D and SI Appendix, Fig. S6A); however, we do not know the exact molecular mechanism by which APE1 mediates the folding and stabilization of G4 structures. 
Structural characterization of APE1 bound to substrate AP site DNA suggests that initial binding of APE1 to AP site DNA induces DNA bending, which causes AP site eversion into the enzyme's active-site pocket (34). Upon AP site eversion, H309 forms a hydrogen bond with the AP site and stabilizes the APE1-AP site preincision complex (33)(34)(35)(36); therefore, it is likely that H309 is a critical residue in promoting G4 structure formation mediated by APE1-induced DNA conformational changes. Supporting this idea, we demonstrated that the H309A mutant, which cannot bind/stabilize AP sites in the enzyme's active-site pocket, and MX, which blocks APE1 binding to AP sites, both abrogate the formation of G4 structures in cells (Fig. 3 B and E). Although less efficiently, APE1 can cleave an AP site in a G4 but remains bound to the cleaved AP site; APE1 is then acetylated by p300, which further reduces the dissociation of APE1 from G4 structures and likely prevents loading of G4-resolving helicases. This is congruent with previous studies which have shown that APE1 can interact with and inhibit the activity of WRN, a member of the RecQ family of human helicases reported to resolve G4 DNA in human telomeres (62). Finally, deacetylation of APE1 by SIRT1 promotes the dissociation of APE1 from G4 structures, and, subsequently, 3′ to 5′ DNA helicases such as WRN or BLM are recruited to resolve the G4 structures. After resolution of G4 structures, downstream BER proteins can then repair the APE1-generated DNA nick and restore the G4 sequence. Earlier studies from our laboratory and others have shown that APE1 acetylation/deacetylation cycles by p300 and SIRT1 regulate endogenous damage repair via the BER pathway (63,64). Nonetheless, it is imperative that, in cells, both AP site binding and AP site cleavage activity are equally important for regulating the spatiotemporal formation and stability of G4 and for preventing mutations in G4 DNA sequences. Further studies are necessary to address this hypothesis.
Fig. 9. Schematic representation linking oxidized G base damage (8-oxoG), AP site formation, and binding of APE1 in a PQS sequence in the genome to promote G4 structure formation. Oxidation of G in G4 motif sequences (GnLGnLGnLGnLGn; where n ≥ 3 and L is a loop region containing any nucleotide) initiates the BER pathway by recruiting OGG1. OGG1 cleaves 8-oxoG to generate an AP site, which destabilizes and opens up the duplex. Subsequent binding of APE1 to the AP site in the PQS promotes the formation and stabilization of a G-quadruplex structure with the AP site-containing G track looped out. APE1 is then acetylated by p300, which enhances its residence time and delays its dissociation from AP sites. AcAPE1 bound to a G4 structure is then primed to stimulate the loading of TFs to regulate gene expression.
Down-regulation of APE1 abolished the formation of G4 structures in cells, as evident from the loss of G4-specific 1H6 antibody staining in immunofluorescence (Fig. 3 B and C); however, G4 ChIP-Seq in APE1 down-regulated cells revealed that a fraction of the G4 structures can still form in the genome. The apparent disagreement between the two results can be explained by differences in the experimental approach. Immunofluorescence imaging with the G4-specific 1H6 antibody likely does not have sufficient resolution or sensitivity to detect all G4 structures in the genome. 
Since PQSs have diverse sequence motifs, and G4 structures can exist in multiple conformations (e.g., parallel or antiparallel, with varying loop lengths) with variable degrees of stability, APE1 may stabilize specific G4 structure conformations. Additional experiments and detailed PQS motif analyses are necessary to explore this area. The impact of oxidative stress on multiple biological processes is well documented (52,54,65). During the onset of reactive oxygen species-associated inflammation, OGG1 and APE1 were shown to augment proinflammatory gene expression by facilitating TF binding to promoters (52). We propose that G4 sequences act as sensors of oxidative stress. Due to their low oxidation potential, G bases in G4 sequences are more susceptible to oxidation and the formation of 8-oxoG damage. Initiation of BER in G4 sequences could serve as an intermediate step in a signal transduction cascade that regulates G4-mediated gene expression and impacts multiple biological processes, including cell proliferation and the innate immune response (65). The idea that G4s can function as sensors of oxidative stress is supported by human genome sequence analyses, which have shown that promoter G4s are highly conserved in comparison to other genomic regions (66). Although this study shows that G4 structure formation is coupled with endogenous oxidative DNA damage and subsequent activation of BER, the source of site-specific G oxidation in PQS promoter sequences remains a question. Random oxidation of G bases is too erratic to constitute the mechanism. We have observed a reproducible enrichment of AP sites and binding of BER proteins in specific gene promoters and gene bodies in multiple independent experiments in different cell lines. This discovery raises the intriguing question of whether endogenous oxidative or AP site damage occurs in a targeted or site-specific manner. Although a few recent studies have shown a region-specific distribution of DNA damage (67,68), additional high-resolution (single-base) mapping of base damage in the genome is warranted. Previous studies have indicated the presence of a target-specific DNA base damage mechanism in cells. For example, eliciting targeted DNA base damage appears to be a common first step in hormone-induced activation of many genes. Perillo et al. (69) have shown that estrogen-induced activation of BCL-2 is controlled by LSD1 flavin-dependent demethylation of H3K9me2 in the BCL-2 promoter. Interestingly, the flavin-dependent mechanism by which LSD1 demethylates H3K9me2 generates H2O2. Local oxidation arising from LSD1 demethylation was shown to produce oxidized G bases in the BCL-2 promoter, which contains a PQS (38). Perillo et al. demonstrated that LSD1-mediated oxidation and OGG1 recruitment in the BCL-2 promoter were essential for BCL-2 gene activation. Pan et al. (49) have also shown that oxidation of G in NF-κB binding sites promotes NF-κB binding and stimulates transcription. Further studies are necessary to address how an indiscriminate oxidant like H2O2 liberated by LSD1 generates specific G oxidation in PQS sequences to induce G4 formation. G4 structures are often formed in the 3′ overhang regions of telomere sequences (70) and can serve regulatory roles by protecting telomere cap structures (4). Madlener et al. (71) showed that the absence of either APE1 endonuclease function or acetylation of APE1 results in telomere shortening, telomere fusion, and the formation of micronuclei. 
Telomere defects in the absence of APE1 acetylation or endonuclease activity could be a result of the destabilization of telomeric G4 structures. G4-induced replication stress, DNA damage, and genomic instability are linked with many cancers (72). Studies have found that G4-forming sequences are enriched at translocation breakpoints (73). We propose that activation of BER upon endogenous oxidative DNA damage not only repairs damaged bases but also regulates the formation and stability of G4 structures in the genome to coordinate multiple biological processes. Our study introduces a new perspective on region-specific endogenous oxidative damage and activation of BER in the regulation of G4 to coordinate multiple cellular processes. This function of endogenous damage and the BER machinery defines a role beyond their well-characterized function as safeguards of genomic integrity. Materials and Methods Detailed materials and methods are described in SI Appendix, Materials and Methods, including cell lines, plasmids, reagents, immunofluorescence analysis, Western blot analysis, the ChIP method, ChIP-Seq and statistical analysis, RNA-Seq and qRT-PCR techniques, the luciferase assay, CD, EMSA, and AP endonuclease activity assay protocols. Data Availability. The sequence data reported in this paper have been deposited in the Gene Expression Omnibus (GEO) database (GSE142284). All materials generated (such as cell lines) are available to readers upon request.
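Readers who want to pull the deposited data can retrieve the GEO series programmatically; a minimal sketch assuming the third-party GEOparse package (the cache directory is an arbitrary choice, and downstream processing of the series object is left to the reader):

```python
import GEOparse  # pip install GEOparse

# Fetch the SOFT file and metadata for the deposited series.
gse = GEOparse.get_GEO(geo="GSE142284", destdir="./geo_cache")

print(gse.metadata.get("title"))
for gsm_name, gsm in gse.gsms.items():
    print(gsm_name, gsm.metadata.get("title"))
```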
2020-05-15T13:05:21.277Z
2020-05-13T00:00:00.000
{ "year": 2020, "sha1": "ee44f3648353e80ad40db6d77b9d8553ee8dc2e7", "oa_license": "CCBYNCND", "oa_url": "https://www.pnas.org/content/pnas/117/21/11409.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "be1b1453e1a338045df23d09a2ea08bf64e42f34", "s2fieldsofstudy": [ "Biology", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }