Using Stable Isotope Compositions of Animal Tissues to Infer Trophic Interactions in Gulf of Mexico Lower Slope Seep Communities

We analyzed the tissue carbon, nitrogen, and sulfur stable isotope contents of macrofaunal communities associated with vestimentiferan tubeworms and bathymodiolin mussels from the Gulf of Mexico lower continental slope (970-2800 m). Shrimp in the genus Alvinocaris associated with vestimentiferans from shallow (530 m) and deep (1400-2800 m) sites were used to test the hypothesis that seep animals derive a greater proportion of their nutrition from seeps (i.e. a lower proportion from the surface) at greater depths. To account for spatial variability in the inorganic source pool, we used the differences between the mean tissue δ13C and δ15N of the shrimp in each collection and the mean δ13C and δ15N values of the vestimentiferans from the same collection, since vestimentiferans are functionally autotrophic and serve as a baseline for environmental isotopic variation. There was a significant negative relationship between this difference and depth for both δ13C and δ15N (p=0.02 and 0.007, respectively), which supports the hypothesis of higher dependence on seep nutrition with depth. The small polychaete worm Protomystides sp. was hypothesized to be a blood parasite of the vestimentiferan Escarpia laminata. There was a highly significant linear relationship between the δ13C values of Protomystides sp. and the E. laminata individuals to which they were attached across all collections (p < 0.001) and within a single collection (p = 0.01), although this relationship was not significant for δ15N and δ34S. We made several other qualitative inferences with respect to the feeding biology of the taxa occurring in these lower slope seeps, some of which have not been described prior to this study.

Introduction
The expansion of oil and natural gas exploration and extraction into ever-deeper waters of the Gulf of Mexico increases the potential threat to the unique animal communities that thrive in natural oil and gas seep environments on the deep continental slope. We still lack some of the most basic knowledge about deep-sea animals, their life histories and interspecific interactions, their role in the wider Gulf of Mexico food web, and how anthropogenic disturbance will affect their long-term survival. Trophic interactions are one of the most basic components of an animal's or community's ecology, but owing to the inherent difficulties in studying deep-sea ecosystems, chemosynthesis-based seep food webs are not very well-constrained. This is especially true for seeps occurring in deeper water, which are historically less well-studied than shallower seeps. During two expeditions in 2006 and 2007, a multidisciplinary team of researchers worked to discover, explore, and sample seep communities on the deep slope, covering a large geographic and bathymetric range (700 km east to west and from 970 to 2800 m depth) (Figure 1). This effort has greatly increased our understanding of the geochemistry, community composition, species distributions, nutritional sources, geology, and population genetics of deep seep communities [1]. In this study, we used stable isotopes as a tool to elucidate the trophic structure of animal communities at these deep seep sites.
The community composition of seeps in the deeper Gulf of Mexico differs significantly from shallower communities at high taxonomic levels [2], and there is little published work about the feeding biology of most of the taxa occurring in deeper water. Stable isotope analysis is a valuable tool in the deep sea where direct observation and gut content analysis are difficult (many animals are very small and some thoroughly grind their food) and has led to some of the most significant discoveries about the nutrition of cold seep and related hydrothermal vent animals [3,4,5,6]. We made quantitative collections of the communities associated with two dominant seep taxa: vestimentiferan tubeworms and bathymodiolin mussels. These taxa dominate biomass at seeps, contain symbiotic bacteria that provide their nutrition, and provide habitat for a community of smaller animals. Although there is some overlap in the species associated with vestimentiferans and mussels, there are significant differences in species composition of the associated communities and, due to differences in their symbionts and life histories, they inhabit very different geochemical habitats [2]. Mussels have methanotrophic symbionts in their gills and therefore live in active seep sites where methane is present in the bottom water above the sediment surface, while vestimentiferans have chemoautotrophic sulfide-oxidizing symbionts and can live in less active locations because of their ability to mine sulfide from the sediment through their "roots" [7]. In total, we made 11 vestimentiferan and 20 mussel community collections from 970 to 2800 m depth (see also [8]). Additionally, data were included from vestimentiferan communities collected at 530 m depth for comparison with deeper sites [9]. The extensive sampling allowed us to discern qualitative patterns and make inferences about the feeding biology of seep animals. We also tested the long-standing hypothesis that generalist seep animals (those that obtain nutrition from both surface and seep primary production) derive a greater proportion of their nutrition from seeps as depth increases. This hypothesis is based on the fact that organic material produced by photosynthesis at the surface is consumed and degraded as it sinks toward the bottom, and thus, less of it is available for consumption at greater depths. This is supported by a general trend of decreasing benthic biomass with increasing depth in the "normal" deep sea [10], but no data exist that support this trend for vents and seeps, where primary production occurs locally via chemosynthesis. We tested this hypothesis by comparing the stable isotope compositions of alvinocarid shrimp from shallow and deep vestimentiferan communities to determine whether there is isotopic evidence of greater usage of seep-derived nutrition at greater depths. Organic matter from the photic zone has δ13C values between -23 and -19‰ [11], and carbon fixed by chemoautotrophy ranges from -75 to -28‰ [12,13], with the more negative values in this range indicating carbon derived from methane, which itself can vary depending on whether it is of biogenic (δ13C ≈ -80 to -60‰) or thermogenic (δ13C ≈ -55 to -30‰) origin [14]. δ15N values in surface-derived organic material are generally greater than 6‰ [15], while seep animals can have δ15N values that are much lower and even negative [13]. Thus, a trend of decreasing δ13C and δ15N values with depth could indicate increased usage of seep-derived nutrition.

(Figure 1 caption) Site names are Bureau of Ocean Energy Management (BOEM) lease block designations and consist of a two-letter abbreviation, which stands for the region (AC=Alaminos Canyon, for example), followed by a three-digit number. The yellow text gives the name for the region, while the white points and text signify the specific study sites. The notations inside the brackets indicate how many of each community type was collected at a site: C=vesicomyid clams (Calyptogena ponderosa and an undescribed vesicomyid), M=mussels (Bathymodiolus spp.), and V=vestimentiferans (Escarpia laminata and Lamellibrachia spp.). The bathymetric depth is given in meters below the site name.

Two hypotheses of specific trophic relationships were also tested. While collecting vestimentiferan communities, clusters of the polychaete "cap worm" Protomystides sp. were observed living within a matrix of tubes affixed to the tops of the vestimentiferan Escarpia laminata (Figure 2). The guts of these worms were filled with a red substance resembling tubeworm blood. It was therefore hypothesized that Protomystides sp. is a blood parasite of E. laminata, and we tested for isotopic evidence of this trophic relationship. A polynoid, Branchipolynoe seepensis, lives in the gills of the seep mussel Bathymodiolus heckerae, and a nautilliniellid polychaete also lives in the gills of B. heckerae and vesicomyid clams (Calyptogena ponderosa and an undescribed vesicomyid). We looked for stable isotope evidence of a trophic relationship between the commensals and their bivalve hosts, as was previously found for Branchipolynoe symmytilida and the hydrothermal vent mussel Bathymodiolus thermophilus [16].

Study sites
The study sites are named according to the Bureau of Ocean Energy Management lease block designations. Each name includes a two-letter abbreviation for the region (e.g. GC for Green Canyon) followed by a three-digit number. The 13 sites in this study are located along the lower continental slope of the Gulf of Mexico from 225 km south of Texas near the Texas-Louisiana border to the south of Alabama (Table 1, Figure 1). Sites ranged in depth from 970 m to 2800 m. Descriptions of the study sites are given in [17] and [18]. Data were also included from a site at 530 m for comparison between shallow and deep sites (Figure 1).

Habitat-forming fauna
Vestimentiferans and bathymodiolin mussels are the most common and abundant symbiotic taxa in cold seeps on the Northern Gulf of Mexico lower slope. Here, there are three vestimentiferan species: Escarpia laminata and Lamellibrachia sp. 1 are common and frequently co-occur in the same aggregations, and Lamellibrachia sp. 2 is rare and occurs with the other two [19]. The δ13C and δ34S values of co-occurring species were not found to be significantly different, but δ15N values in E. laminata were consistently more enriched than Lamellibrachia sp. 1 by 2.6 ± 0.7‰ [20]. All known vestimentiferans contain sulfide-oxidizing chemoautotrophic endosymbionts. The associated community inhabits the chitinous tubes and interstices of vestimentiferan aggregations, and older worms can extend up to a meter above the sediment surface, providing habitat with very little exposure to seeping fluids [7,21].
In very old vestimentiferan aggregations on the upper slope, seepage may be undetectable even at the sediment surface [21], and the vestimentiferans survive by mining sulfide from deep in the sediment with their buried "roots" [7]. The three bathymodiolin mussel species that occur on the lower slope are Bathymodiolus childressi, which is also common in shallower seeps (overall depth range 525 to 2284 m and collected between 1005 and 2284 m in this study), B. brooksi (collected between 1080 and 2745 m), and B. heckerae (collected between 2180 and 2745 m) [2]. Where the depth ranges of these species overlap, B. brooksi often co-occurs in the same aggregations with either B. childressi or B. heckerae, but the latter two species have never been found to co-occur. B. childressi has only methanotrophic symbionts [22], B. brooksi forms a dual symbiosis with both chemoautotrophic and methanotrophic bacteria [23], and B. heckerae contains four different symbionts: a methanotroph, two chemoautotrophs, and one methylotroph-related phylotype [24]. The associated community inhabits the shells of mussels and the interstices between them. Mussel beds can be many layers thick, and because mussels lack binding proteins to transport molecules to their symbionts, they require sufficient concentrations of seep fluid and oxygen in the epibenthic water to support autotrophy. A third, less commonly observed symbiotic taxon at Gulf of Mexico seeps is the vesicomyid clams, which contain sulfide-oxidizing chemoautotrophic endosymbionts [25]. The taxa we collected on the lower slope were Calyptogena ponderosa and an undescribed vesicomyid. In the Gulf of Mexico, vesicomyids are typically found burrowing through the sediment, leaving distinctive trails. It has been hypothesized that since they acquire sulfide through their foot, the clams must move around when there is not sufficient flux to replenish the sulfide in one location [25]. In this study, we analyzed the stable isotope contents of the clams and their commensal nautilliniellid polychaetes.

(Table 1 notes) Sample types are Bushmaster (bm), mussel pot (mp), or mussel scoop (ms). Dive numbers are from the DSV Alvin (41##) or ROV Jason II (J2-###). * The majority of these samples were analyzed for community composition in [2] and were already given ID numbers consisting of the site name followed by "t" or "m" for a vestimentiferan (tubeworm) or mussel collection, respectively, and a sequential number. Collections with "t-" or "m-" after the site name were not included in [2], because they were not quantitative samples.

Community collections
Collections were made in 2006 using the deep submergence vehicle Alvin and in 2007 using the remotely operated vehicle Jason II. Quantitative collections of vestimentiferan and mussel communities were obtained using the Bushmaster Jr. and mussel pot or mussel scoop collection devices, respectively [2]. The Bushmaster Jr. is a hydraulically actuated collection device with an open diameter of 0.7 m and lined with a 63-µm mesh [26,27]. The device is placed over an aggregation of vestimentiferans and then closed to capture the vestimentiferans and all animals associated with the vestimentiferan tubes and interstices. This same collection technique was used to collect the vestimentiferan communities from shallower seeps at the GC234 site. Some of the data from these collections that we included in the analysis of alvinocarid shrimp stable isotope contents vs. depth have been published previously [9], and some are from unpublished data.
The mussel pot device as modified from the design of Van Dover (2002) was used to collect mussels [2,28]. This cylindrical aluminum pot is 25 cm in diameter, 30.5 cm in height, and has an internal volume of 0.015 m 3 . The submersible or ROV's manipulator places the device over a clump of mussels and turns a handle one full rotation to close a Vectran TM skirt. Because some of the mussel communities contained very large mussels that were not effectively captured with the mussel pot, a "mussel scoop" was used to collect some of the community samples [2]. The mussel scoop is a coarse mesh net with an opening of approximately 700 cm 2 lined with a pillowcase. The mussel communities collected with the two different sampling devices were not significantly different in composition [2]. The manipulator of the submersible dragged the scoop through the mussel bed then placed the entire scoop into an insulated biobox and closed the lid. Vesicomyid clams were sampled using the mussel scoop (AC601 collection) or the submersible's manipulator (GC600 collection). Once onboard the ship, Bushmaster Jr. collections were processed for community composition as in [2]. Associated fauna were rinsed from tubeworm tubes, passed through a 1mm sieve, sorted, and identified to the lowest possible taxonomic level. Up to three individuals of each taxon from each collection were sampled for stable isotope analysis. Tissue samples were obtained from associated fauna by dissecting a piece of muscle tissue from large animals or using whole individuals for smaller animals. The samples were rinsed with deionized water to remove any residual seawater and frozen at -70°C. For the vestimentiferans, vestimentum (muscle) tissue was sampled from up to six individuals of each species from each collection. Mussel pot and mussel and clam scoop collections were handled similarly. All mussels and clams in these collections were opened to check for commensal polynoids (Branchipolynoe seepensis) and nautilliniellid polychaetes, and mantle tissue was sampled from up to six individuals of each species from each collection. We were notified by NOAA through a Letter of Acknowledgement (LOA) that no permissions were needed for our limited collections of invertebrates from the deep Gulf of Mexico for research purposes. This research did not involve any endangered species. Stable isotope analysis All samples were dried at 60°C, homogenized, and acidified to remove inorganic carbonate. Samples were redried and subsamples were analyzed for stable carbon and nitrogen isotopes at the Stable Isotope Facility at the University of California, Davis, using an Integra elemental analyzer coupled with a PDZ Europa 20-20 isotope ratio mass spectrometer (Sercon Ltd., Cheshire, United Kingdom) or by RWL (School of Biological Sciences, Washington State University) using continuous-flow isotope ratio mass spectrometry with a Costech elemental analyzer coupled to a Micromass Isoprime isotope ratio mass spectrometer (EA/IRMS). Data from each of the laboratories are calibrated to NIST (National Institute of Standards and Technology) reference materials. All stable sulfur isotope analysis was performed by SAM at the University of Virginia Stable Isotope Laboratory using continuous-flow isotope ratio mass spectrometry with a Carlo Erba elemental analyzer coupled to a Micromass Optima isotope ratio mass spectrometer (EA/IRMS). 
Values are expressed using δ (delta) notation and reported in units of permil (‰), where δX = (R_sample/R_standard − 1) × 1000, X = 13C, 15N, or 34S, and R = 13C/12C, 15N/14N, or 34S/32S, respectively.

Statistical Analysis
To test whether alvinocarid shrimp collected with vestimentiferans show isotopic evidence for incorporating more seep-derived nutrition at greater depths, we conducted a regression analysis in which each data point represents a single collection and the y-component is the difference δ13C_shrimp − δ13C_vest or δ15N_shrimp − δ15N_vest, where δ13C_shrimp and δ15N_shrimp are the mean δ13C and δ15N values of all shrimp in a single collection, and δ13C_vest and δ15N_vest are the mean δ13C and δ15N values of the vestimentiferans from the same collection, and the x-component is the depth at which the vestimentiferan community was collected. Alvinocarid shrimp were used for this analysis because they are only known from seep and vent sites, are the most common and numerically abundant taxon in both shallow and deep sites (the shallow species is Alvinocaris stactophila and the deep species is A. muricola), and show isotopic evidence of a generalist feeding strategy that incorporates seep- and surface-derived nutrition. Regression analysis was used to test for a linear relationship between the tissue isotope values of the "cap worm" Protomystides sp. and the vestimentiferan Escarpia laminata. If Protomystides sp. is a parasite of E. laminata, it was expected that the tissue δ13C values of the cap worms would be ~1‰ higher than the tissue δ13C value of the individual E. laminata they were attached to and the tissue δ15N would be enriched by approximately 3.4‰, following the average enrichment per trophic level [29,30]. A similar analysis was conducted for commensal polychaetes and their chemosymbiotic bivalve hosts, although this was more qualitative due to small sample sizes.

Results and Discussion
Evidence for increased reliance on seep primary production with depth
There was a significant negative relationship between δ13C_shrimp − δ13C_vest and depth (p = 0.02; Figure 3A) as well as between δ15N_shrimp − δ15N_vest and depth (p = 0.007; Figure 3B). By examining the patterns of tissue stable isotope values of Alvinocaris spp. and vestimentiferans separately, one can see that δ13C for both shrimp and vestimentiferans shows a U-shaped pattern with depth, though the shrimp generally had higher δ13C values than vestimentiferans at shallower depths (some of which were well into the range of photosynthetic production; δ13C = -22 to -19‰) and lower δ13C values than the vestimentiferans at greater depths, even though the overall δ13C values increased from the 2200-2300-m sites to the 2800-m one (Figure 3C). The δ15N values of vestimentiferans remained relatively constant with depth, while the shrimp, albeit variable, showed a trend of decreasing δ15N values with depth (Figure 3D). The difference between the shrimp tissue stable isotope values and the values of vestimentiferans from the same local environment was calculated because there is substantial spatial variation in the δ13C values of vestimentiferan tubeworms, which can be considered primary producers in this system since they derive the bulk of their nutrition from autotrophic symbionts [20]. This variation is reflected in the associated fauna in general [8] and the alvinocarid shrimps in particular (Figure 4A), which suggests that there are marked differences in the stable isotope composition of the local dissolved inorganic carbon (DIC) pool.
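To make the collection-level regression described in the Statistical Analysis section concrete, the sketch below regresses the per-collection shrimp-minus-vestimentiferan difference on depth. The numbers are invented for illustration (one entry per collection, not data from this study), and the availability of scipy is an assumption.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical per-collection mean d13C values (permil) and collection depths (m);
# each entry stands for one vestimentiferan community collection, NOT study data.
depth_m     = np.array([530.0, 1400.0, 2200.0, 2300.0, 2750.0, 2800.0])
d13c_shrimp = np.array([-27.0, -38.0, -45.0, -47.0, -44.0, -42.0])
d13c_vest   = np.array([-30.0, -37.0, -42.0, -43.0, -41.0, -38.0])

# y = mean shrimp value minus mean vestimentiferan value, x = collection depth
diff = d13c_shrimp - d13c_vest
fit = linregress(depth_m, diff)
print(f"slope = {fit.slope:.4f} permil/m, p = {fit.pvalue:.3f}")
# A significant negative slope indicates that shrimp become increasingly depleted
# relative to the local vestimentiferan baseline as depth increases.
```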
On the other hand, δ 15 N values vary between collections, but there does not seem to be a relationship between the δ 15 N values of vestimentiferans and alvinocarid shrimp, suggesting that their respective variabilities are driven by different processes (Figure 4B). These data provide evidence that the shrimp, and potentially other generalist animals, derive more of their nutrition from seeps at greater depths owing to the increasing scarcity of photosynthetically produced material. It is, however, important to acknowledge that there may be other biological or geochemical processes that could lead to this pattern. The shrimp species present at 530 m depth was Alvinocaris stactophila, while A. muricola was the species sampled from the deeper sites between 1400 and 2800 m, so it is possible that the observed trend is due to differences in feeding habits between species. Indeed, if the A. stactophila data from the shallowest site are removed, the δ 13 C shrimp -δ 13 C vest and δ 15 N shrimp -δ 15 N vest vs. depth relationship becomes statistically nonsignificant. Also, since sampling was not conducted at different depths within regions (e.g. all 2300 to 2800-m sites are in the AC region, 2200-m in the AT region, and 1000 to 1400-m in the GB and GC regions), we cannot rule out region-specific or depth-related changes in geochemistry as potential causes for the observed pattern. Also, the nitrogen source used by seep vestimentiferans (e.g. ammonia or nitrate) is not known (could be different from that of the shrimp) nor is the δ 15 N contents of the inorganic nitrogen sources at seeps. Nonetheless, the data presented here show a compelling trend that warrants future investigation, though we remain cautious in interpreting the cause of the observed pattern. In future work, it would be interesting to test this hypothesis using mussel communities from the shallow and deep seeps, since Alvinocaris are also abundant in these habitats and, unlike vestimentiferan communities, the tissue δ 15 N contents of mussels and associated communities are apparently affected by the same inorganic nitrogen pool [8], and to sample from sites in the depth range between 530 and 1000 m, since these were not captured in our study. Additionally, stable isotope analysis of specific amino acids can be more reliable in preserving the isotope values of the source [31]. General inferences about feeding biology Very little variability among tissue stable isotope values of individuals in a collection could suggest a feeding strategy in which either animals specialize on an isotopically consistent food source, or they consistently integrate an isotopically heterogeneous food source across their feeding range. Conversely, large variability among individuals in a collection could indicate that individuals specialize on different food sources or feed in isotopically distinct microhabitats in a heterogeneous "landscape" (for most species, the "landscape" would be a single vestimentiferan or mussel aggregation) [32,33]. Since we made many collections, we often had the advantage of observing whether a pattern or relationship in tissue stable isotope values is consistent in more than one collection. Taxa that have low variability in their stable isotope values within a collection are the brittle star Ophioctenella acies (Figures 5B,C and 6A,B,C,E,L), the sipunculid Phascolosoma turnerae (Figure 5E,H, Figure 6B,K and Figure 7A,J), the anemones (Actinaria; Figures 6F,G,H,I,K,L and 7B,C), and the polynoid polychaete Harmothoe sp. 
(Figure 5A,D,E,F, Figure 6A,J and Figure 7C,J), although the two Harmothoe sp. individuals in one vestimentiferan collection had very different δ13C and δ15N values (Figure 7F.1). There were many other examples like this for species only found in one collection. In fact, most species tended to group together within collections on plots of δ15N vs. δ13C and δ34S vs. δ13C (Figures 5, 6, and 7).

Alvinocaris muricola
The shrimp Alvinocaris muricola was one of the most common and numerically abundant animals in both mussel and vestimentiferan habitats [2]. Alvinocaris muricola's tissue stable isotope values were sometimes variable and sometimes similar to one another within collections (Figures 5, 6, and 7). In vestimentiferan collections, A. muricola were often among the most depleted in δ15N relative to other animals in the same collections (Figure 7), sometimes even more depleted than the vestimentiferans (Figure 7B,C,G). In mussel collections, A. muricola were sometimes enriched and sometimes depleted in δ15N relative to other animals, but were always enriched relative to the mussels (Figures 5 and 6). Some A. muricola individuals had δ15N compositions greater than 6‰, which is similar to the δ15N composition of surface-derived particulate organic matter (POM) [15], although these same individuals still had relatively depleted δ13C and δ34S compositions. The overall ranges for δ13C (-63.7 to -20.8‰) and δ34S (-18.5 to +19.1‰) in A. muricola could reflect a combination of seep- and surface-derived nutrition. The variability in A. muricola's tissue stable isotope values suggests that individuals specialize on different food items or on the same food item in isotopically distinct microhabitats. The lowest δ15N values may reflect the shrimp grazing upon free-living bacteria that are fixing local inorganic nitrogen, and the enriched values may reflect some feeding upon animals at higher trophic levels, such as small predatory meiofauna, or consumption of surface-derived material. It is also plausible that individual shrimp specialize in isotopically distinct microhabitats, as they are frequently observed near the tops of vestimentiferan tubes but can also tolerate the chemical conditions near the sediment surface in vestimentiferan and mussel habitats.

Hesiocaeca methanicola
Previous work has shown that the methane ice worm Hesiocaeca methanicola is a bacterivore, and therefore its tissue isotope values reflect the isotope compositions of the free-living microbial community [34]. In the previous study, the worms occupied depressions in a sulfide-containing methane hydrate and had tissue δ13C values of -24.7 to -23.8‰, δ15N of 5.3 to 6.3‰, and δ34S of 1.9 to 3.6‰. These values strongly indicated that the primary food source utilized by H. methanicola was not methanotrophic bacteria as originally hypothesized, but rather chemoautotrophic sulfur bacteria on the hydrates [34]. In mussel habitats in our study, H. methanicola δ13C values ranged from -62.9 to -29.0‰, δ15N ranged from -6.1 to +5.3‰, and δ34S from 0.7 to 20.8‰ (Figure 8), which indicates large variability in the isotope compositions of the microbes on which they feed and suggests that they do not specialize on a single type of bacteria.
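Interpretations like the one above, in which intermediate δ13C values are read as a mixture of seep- and surface-derived carbon, can be made concrete with a simple two-end-member mixing calculation. The sketch below is illustrative only: the end-member values are assumptions picked from the ranges cited in the Introduction (surface organic matter roughly -23 to -19‰; seep-derived carbon considerably more negative), they are not values estimated in this study, and no trophic fractionation or δ15N/δ34S constraints are included.

```python
def seep_fraction(d13c_consumer, d13c_surface=-21.0, d13c_seep=-50.0):
    """Two-end-member mixing: rough fraction of tissue carbon from seep production.

    The default end members are illustrative assumptions (see lead-in), not study
    results; a real analysis would propagate end-member uncertainty and fractionation.
    """
    f = (d13c_consumer - d13c_surface) / (d13c_seep - d13c_surface)
    return min(max(f, 0.0), 1.0)  # clamp to the physically meaningful 0-1 range

# Example: a shrimp with tissue d13C of -35 permil under these assumed end members
print(round(seep_fraction(-35.0), 2))  # -> 0.48
```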
Sediment-dwelling sipunculids and holothurians
The sediment-dwelling sipunculid Phascolosoma turnerae collected with vestimentiferans and mussels had enriched δ15N values relative to other animals in the same collections, but had δ13C and δ34S values that were quite depleted (δ13C = -58 to -30‰ and δ34S = -18.2 to +14.3‰) relative to surface POM (δ13C = -22 to -15‰ [11] and δ34S = 10 to 20‰ [5,35]) (Figures 5E,F,H, 6B,C,H,K and 7A,J). In most of the collections, P. turnerae had δ15N values that were approximately 6-8‰ higher than the mussels or vestimentiferans with which they were collected. Previous work on the related species Phascolosoma vulgare showed that these sediment feeders demonstrate some specificity in the grain size of sediment they ingest and that fecal pellets account for 92% of this species' caloric intake [36]. In a study of marine copepods, a major meiofaunal taxon in marine sediments, feces were enriched in δ15N by about 8‰ and δ13C by about 1‰ relative to their food source [37]. Thus, a diet primarily of fecal pellets is consistent with tissue stable isotope values that are enriched in δ15N while remaining relatively depleted in δ13C and δ34S. The sediment-feeding holothurian Chiridota heheva was occasionally more enriched in δ15N than other animals in the same collections, including the predators, but not as consistently as P. turnerae (Figures 6C-H,J-L and 7B,C,D). C. heheva's diet may consist of a more variable mixture of fecal pellets, meiofauna, and sediment microbes than that of P. turnerae.

(Figure captions) Ridge study sites (1200-1900 m depth) and Atwater Valley study sites (2200-2800 m depth): for each collection X, panels X.1 and X.2 show δ15N vs. δ13C and δ34S vs. δ13C, respectively (e.g. A.1 and A.2 for one collection, B.1 and B.2 for another, and so on). Different animal taxa are represented by different symbols. The site information, collection coordinates, and reference to the quantitative community study [2] are shown in Table 1. Protomystides sp. is split into three categories with different symbols representing the habitat in which the worms were found: "Protomystides sp. (on E. lam)" represents individuals attached to the obturaculum of Escarpia laminata, "Protomystides sp. (dead tube)" refers to individuals collected inside empty vestimentiferan tubes, and "Protomystides sp. (outside tube)" refers to individuals attached to the outside of vestimentiferan tubes. In all cases, the worms had inhabited a thin flexible tube attached to a surface.

Bristle worms
The most depleted δ13C value of all species in all collections was found in the bristle worms Notomastus sp. (-80.3‰) and Eurythoe sp. (several individuals ranged from -77.3 to -75.4‰ and max. -59.5‰) from mussel collections. These values were often more depleted than the δ13C values of the mussels with which they were collected (Figures 5G and 6K,L).
Notomastus species in other ecosystems are sub-surface deposit feeders [38]. Since both methane and isotopically depleted DIC are more abundant below the sediment surface than above, the overall δ 13 C value of the food available to Notomastus sp. would be more depleted than the food available to surfacefeeding animals. Other Eurythoe species are omnivorous and scavengers and can feed by extending their pharynx to feed under the sediment or to ingest larger food items such as dead animal tissue [39]. We cannot make a true inference about the food source of this Eurythoe species associated with bathymodiolin mussels, but its low δ 13 C values and high δ 15 N values in the two collections from AT340 stand apart from the almost linearly arranged δ 15 N vs. δ 13 C data points for the other fauna in these collections (Figure 6K,L). This could be consistent with a scavenger or predatory lifestyle, possibly feeding on isotopically depleted organisms below the sediment surface. Suspension feeders The suspension-feeding stoloniferans, hydroids, and zoanthids that colonize the tops of vestimentiferan tubes up to a meter above the sediment surface had δ 13 C values around -50‰, similar to most sediment-dwellers in the same collections (Figures 2 and 7). Stoloniferans, hydroids, and zoanthids are all sessile cnidarians and feed on POM and small swimming animals that come in contact with their nematocysts. Because seep fluids escape from the sediment slowly and can be rapidly diluted by seawater, dissolved reduced chemicals from seep fluids are likely not present in sufficient concentrations at the plume level of vestimentiferan aggregations to fuel chemoautotrophy or influence the δ 13 C of DIC [21]. Therefore, it is more likely that organic material is produced via chemoautotrophy or methanotrophy in the sediment and subsequently transported to the tube tops. We did not sample meiofauna (animals between 63 µm and 1 mm), but they are a possible link between microbial primary production and heterotrophic macrofauna. Resuspension of isotopically depleted POM, including bacteria, from shallow sediments could also contribute to the depleted tissue isotope signal in the cnidarians. Protomystides "cap worms" on Escarpia laminata hosts The phyllodocid polychaete Protomystides sp. was frequently found on top of the vestimentiferan Escarpia laminata (Figure 2). The Protomystides sp. build a matrix of tubes, forming a casing (or "cap") that affixes them to the surface of the obturaculum (the anterior-most part of the vestimentiferan). This casing can house more than 20 individual Protomystides sp. on a single E. laminata individual, many of which are tiny (personal observation). A blood-sucking (hematophagous) lifestyle was previously suggested for the related phyllodocid Galyptomystides aristata found in the Galapagos Rift and East Pacific Rise [40]. The guts of the Protomystides sp. in our collections contained a red substance, which, given the location of the polychaetes and the lifestyles of close relatives, was hypothesized to be vestimentiferan blood. If there is a parasitic relationship between Protomystides sp. and E. laminata, we would expect to see a correlation between the tissue stable isotope values of Protomystides sp. and the E. laminata individual upon which they were living. There was a strong linear relationship between the tissue δ 13 C of Protomystides sp. and their paired E. laminata (p<0.001; R 2 =0.77; Figure 9A). 
This relationship was not significant for tissue δ 15 N (p=0.73; R 2 =0.01; Figure 9B) or δ 34 S (p=0.62; R 2 =0.02; Figure 9C), but 14 out of 18 Protomystides sp. δ 15 N values were 2-5‰ higher than their paired E. laminata, consistent with trophic enrichment in tissue δ 15 N [29,41]. Since δ 13 C values tended to vary by collection, it was possible the apparent correlation simply reflected the use of the same carbon pool by both Protomystides sp. and E. laminata. To examine whether there was in fact an individual-level correlation, we examined a single collection that contained 6 paired Protomystides sp. and E. laminata samples. The total range in E. laminata δ 13 C values in this collection was -52.6 to -42.7‰, and the regression revealed a strong and significant correlation between paired individuals (R 2 =0.83, p=0.01; Figure 9A). Whether or not Protomystides sp. feeds on vestimentiferan blood, these data suggest fidelity to the nutrition of a single E. laminata individual. Overall, the δ 13 C data suggest that Protomystides sp. obtains the bulk of its carbon requirements from E. laminata, but the δ 15 N and δ 34 S suggest that bulk nitrogen and sulfur demands may not be met by this nutritionally limited food source. It is also possible that the vestimentiferan soft tissue we sampled and the blood, which we did not sample, have similar δ 13 C but different δ 15 N and δ 34 S contents. Some Protomystides sp. individuals also inhabited the outsides of vestimentiferan tubes or insides of empty tubes. The isotope compositions of these individuals were within the same range as those found on top of the live vestimentiferans ( Figure 7F,G), and therefore no apparent difference in isotopic niche from their counterparts atop E. laminata obturacula. The individuals found on the outsides of tubes were attached via the same casing we found on the tops of E. laminata obturacula, but these casings contained only 1-3 larger individuals and no tiny ones. The detailed morphology of the digestive system in the related vent species G. aristata showed the presence of a "septum" that could facilitate the long-term storage of a blood meal [40]. If these seep congeners share this adaptation, they may feed on vestimentiferan blood for the early stages of their life and then disperse to other locations at later stages, either feeding on stored blood or ingesting other materials. Another possibility is that blood is most important for females during reproduction and, like female mosquitoes, they need a continuous supply of blood before laying eggs. This could account for the large number of tiny worms present only in the obturacula casings. Commensal polychaetes inside mussel and clam hosts. Another relationship that was of a priori interest was that of polychaetes living commensally within the gills of mussel and clam hosts. In a study of the commensal polynoid Branchipolynoe symmytilida living inside the body cavity of the hydrothermal vent mussel Bathymodiolus thermophilus, there was a strong correlation between the tissue isotope compositions of individual polynoids and their hosts for both δ 13 C and δ 15 N [16]. Furthermore, the average enrichment in B. symmytilida tissue δ 15 N relative to the host mussel tissue was 3.2‰, which is what is expected for single trophic level enrichment [29]. In the present study, the polynoids Branchipolynoe seepensis were collected from inside the mantle cavity of B. 
heckerae and a species of nautilliniellid from inside the mantle cavity of Bathymodiolus heckerae, Calyptogena ponderosa and an undescribed vesicomyid. In general, δ13C and δ34S values were similar between the polychaetes and their bivalve hosts (Figure 10A,C). However, the similarity in isotopes could simply be due to shared location, as with other associated fauna. The linear relationship between paired B. seepensis and B. heckerae δ13C was the strongest evidence for a trophic relationship, because the 4 samples from 2 collections had enough variability to show a correlation (R2=0.99, p=0.003). As is typical for seep animals, the overall variability in δ15N was much less than that of δ13C and δ34S, making correlations more difficult to discern. Overall, there seemed to be a positive relationship between the δ15N of the commensal polychaetes and their paired bivalve hosts, and differences between most of the polychaetes and hosts were between 0‰ and 3.4‰ (shown as most of the points lying between the 1:1 line and the line that signifies a 3.4‰ trophic enrichment; Figure 10B). The δ15N values do not rule out a trophic relationship, but the small data set makes it difficult to discriminate trophic interaction from variation between local nitrogen pools. The nature of a trophic relationship between the polychaetes and bivalves could be partial predation of the bivalve tissue or consumption of a product such as mucus, gametes, or pseudofeces.

Conclusion
The aim of this study was to elucidate trophic interactions in seep communities as part of a greater effort to understand how seeps function and how they tie into the functioning of the greater Gulf of Mexico ecosystem. Analysis of bulk tissue stable isotope content is a logical first step in this system given the efficiency and cost effectiveness of this method compared with others. This method alone, however, cannot reliably be used to infer trophic niches or food web structure until we attain a much greater understanding of the isotopic compositions of the inorganic sources and the abiotic and biotic processes that affect isotopic fractionation in seep environments. Even in more well-studied ecosystems, researchers have cautioned against applying data from different systems without experimental and environmental data from the system of interest [42], and some have shown using empirical data and modeling that isotopic and trophic niches do not always align (e.g. high isotopic variability among individuals does not necessarily imply individual specialization when dealing with an isotopically heterogeneous landscape [32,33]). (Figure note: There are fewer data for δ34S because some samples did not have enough remaining material after δ13C and δ15N analysis.) The data presented here and in related studies show very high spatial variability in tissue stable isotope contents between locations [8,17,20], and there is some indication from the current study that there is also micro-scale spatial heterogeneity in the isoscape of single aggregations, especially in vestimentiferan habitats, since their tall tubes create a structurally and chemically heterogeneous habitat. In moving forward toward understanding stable isotopes and food webs in seep environments, a better understanding of the sources and processes affecting nitrogen isotopes at seeps is urgently needed, since spatial and taxonomic variability in tissue δ15N values preclude reliable assignment of trophic level based on bulk tissue δ15N content.
Analyzing the nitrogen stable isotope content of specific amino acids could be very fruitful, since phenylalanine shows no change with trophic level, and therefore indicates the isotopic composition of nitrogen sources at the base of the food chain, while glutamic acid shows an increase of ~7‰ with each trophic level, allowing for definition of trophic level [31,43]. This method is considerably more expensive, but some researchers are already beginning to apply it to deep-sea food webs (http://www.jamstec.go.jp/biogeos/j/elhrp/isotope/index_e.html). Such an analysis could help us to better calibrate the bulk tissue isotope values presented in this study. Additionally, the missing links in the seep food web, namely bacteria and meiofauna, should be sampled from the local environments along with the macrofaunal community. Together with our data, these proposed analyses would help to build a more complete picture of hydrocarbon seep food webs in the Gulf of Mexico.

Data availability
The original stable isotope data presented in this manuscript are available on the U.S. Geological Survey Ocean Biogeographic Information System found at http://www.usgs.gov/obis-usa/
Comparison of Er:YAG Laser and Ultrasonic Scaler in the Treatment of Moderate Chronic Periodontitis: A Randomized Clinical Trial

Introduction: Periodontitis is an inflammatory periodontal disease that leads to tooth loss. Recently laser has been introduced as an alternative treatment for periodontitis. The aim of the present study was to compare the effect of Erbium-doped Yttrium Aluminum Garnet (Er:YAG) laser with ultrasonic scaler in patients with moderate chronic periodontitis. Methods: In this randomized single-blind clinical trial, 27 patients with moderate chronic periodontitis were selected. One quadrant of each patient was treated by Er:YAG laser and the other one by ultrasonic scaler. Clinical parameters, including periodontal pocket depth (PPD), papillary bleeding index (PBI) and clinical attachment level (CAL), were measured before, as well as 6 and 12 weeks after, treatment. Data were analyzed by SPSS 20 software using the Friedman test, paired t test, independent t test and Mann-Whitney test. The significance level was set at 0.05. Results: The means of clinical parameters in both groups were significantly improved at the first and second follow-ups (P < 0.001). Although the means of PPD, PBI and CAL were slightly higher in the laser group than in the ultrasonic group, the differences were not statistically significant between these two groups (P > 0.05). Conclusion: Although both ultrasonic scaler and Er:YAG laser could effectively improve clinical periodontal parameters, the results did not reveal the superiority of Er:YAG laser over ultrasonic scaler or vice versa.

Introduction
Periodontitis is an inflammatory bacterial disease that leads to the destruction of supporting tissues and tooth loss. 1 Periodontal diseases are treated by surgical and non-surgical procedures. The non-surgical procedures include scaling and root planing (SRP) and dental plaque control by the patient. 2 Recently, laser has been introduced as an adjunct approach for periodontal treatment. The benefits of laser therapy include antimicrobial properties, removal of calculus and endotoxins from the root surface, smear layer elimination, wound healing and bleeding control. 3-7 Dental lasers are classified based on differences in wavelength, lasing medium and clinical applications. A wide range of lasers are available for clinical use. 5,8 It seems that Erbium-doped Yttrium Aluminum Garnet (Er:YAG) laser affects both soft and hard tissues with no serious thermal damage, and can remove the bacterial biofilm and calculus from the root surface, which makes it an appropriate technique for periodontal treatment. 9-13 Er:YAG laser has a wavelength of 2940 nm with bactericidal effects. Also, due to its high ability to absorb water, this laser has less thermal risk for mineralized tooth surfaces during the removal of bacterial endotoxin and calculus. 9 17,19,23-26 The results of a clinical trial which evaluated clinical and microbiological parameters indicated that Er:YAG laser was as effective as conventional scaling or ultrasonic tools for the treatment of chronic subgingival periodontitis. 1 In a 6-month study, Schwarz et al 18 reported no increase of CAL or reduction of pocket depth in the Er:YAG laser group in comparison with conventional SRP. In a review article, Schwarz et al 16 reported that Er:YAG laser therapy was able to yield clinical results similar to those of conventional treatments in both the short and long term.
Poor study design, lack of a proper control group and high variation of laser parameters in different studies are some of the reasons for the uncertain findings for laser compared to the conventional scaling technique. 18,24,27 Hence, the current study was an attempt to compare the therapeutic effect of Er:YAG laser with ultrasonic scaler among patients with moderate chronic periodontitis, via a split-mouth method.

Patient Recruitment
A total of 27 patients with moderate chronic periodontitis were included in this study. The patients were selected from the Department of Periodontics, Isfahan University of Medical Sciences. The exclusion criteria were: presence of systemic disease, pregnancy, periodontal treatment within the last 12 months, use of antibiotics over the past 6 months and smoking.

Study Design
In this randomized split-mouth clinical trial, a total of 54 quadrants and 648 sites were selected from 27 patients, and were equally divided into left and right sides. While the teeth of one side were treated by ultrasonic scaler, the teeth of the contralateral side underwent laser therapy. Moreover, all patients received oral hygiene instructions.

Data Collection
The following clinical parameters were measured and recorded at baseline and 6 and 12 weeks after the intervention, by the calibrated and blinded researcher. The parameters included periodontal pocket depth (PPD, by calibrated Michigan 'O' probe with standard pressure), CAL and papillary bleeding index (PBI, by Saxer and Mühlemann). The parameters were evaluated at four sites for each tooth: mesiobuccal, midbuccal, distobuccal and midlingual. Further, a visual analogue scale (VAS) score for each patient was recorded between 0 and 10 immediately after the interventions.

Interventions
In the control group, a UDS-K ultrasonic scaler (Guilin Woodpecker Medical Instrument Co. Ltd, Guilin, China) with an output half-excursion force of 2 N, output tip vibration frequency of 28 ± 3 kHz and water pressure of 0.1-5 bar was used. For SRP, the G1×2 and G2 tips were used with a to-and-fro motion on the tooth under constant water irrigation. In the case group, an Er:YAG laser (Fotona, Fidelis plus, Ljubljana, Slovenia) with a wavelength of 2940 nm, energy level of 160 mJ/pulse and pulse frequency rate of 10 Hz was used. The laser beam was directed into the pocket with an R14-C handpiece. The optical fiber tip (chisel shape, product code: 72561) was used in apicocoronal movements at a 15-20° angle with respect to the root surface, until it reached the end of the pocket. While the tip was in contact with the tooth, sites were irrigated with water spray. Both interventions were continued until the operator felt a smooth surface on the root. Both interventions were carried out by one operator.

Examiner Reliability
Ten patients, each with two teeth presenting >4 mm probing depth in two different quadrants, were used for calibration of the examiner. The examiner examined the patients twice with an interval of seven days. If the data from the first and seventh days agreed in more than 90% of cases, the calibration was accepted.

Statistical Analysis
The results were analyzed by SPSS 20 software (IBM Corp, Armonk, NY). CAL, PPD and PBI before treatment and 6 and 12 weeks after the interventions were compared in both groups. Paired t test and Friedman test were used for intra-group comparison of parameters at different intervals. Independent t test was applied to perform inter-group comparisons of parameters. Furthermore, Mann-Whitney test was used to compare the mean VAS immediately after interventions.
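To illustrate the structure of these comparisons, the sketch below applies the named tests to invented per-patient values (not data from this trial): a paired t test for the intra-group change over time, an independent t test for the inter-group comparison, and a Mann-Whitney U test for the VAS scores. The use of scipy and the specific numbers are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import ttest_rel, ttest_ind, mannwhitneyu

# Invented per-patient mean PPD values in mm (rows = patients), NOT trial data.
ppd_laser_baseline = np.array([4.8, 5.1, 4.6, 5.0, 4.9])
ppd_laser_12wk     = np.array([3.2, 3.6, 3.1, 3.4, 3.5])
ppd_ultra_12wk     = np.array([3.3, 3.8, 3.2, 3.5, 3.6])
vas_laser = np.array([4, 3, 4, 3, 4])
vas_ultra = np.array([3, 2, 3, 3, 2])

print(ttest_rel(ppd_laser_baseline, ppd_laser_12wk))  # intra-group change over time
print(ttest_ind(ppd_laser_12wk, ppd_ultra_12wk))      # inter-group comparison at 12 weeks
print(mannwhitneyu(vas_laser, vas_ultra))             # VAS immediately after treatment
```

Note that in a split-mouth design the laser and ultrasonic quadrants come from the same patients, so a paired test between contralateral quadrants would also be a defensible choice for the inter-group comparison.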
The significance level was set at 0.05. Considering 1 mm as the significant difference between groups, the power of the study was calculated to be 0.99. Data were expressed as mean ± standard deviation (SD).

Results
The PPD, CAL and PBI were measured at three time points: before, and 6 and 12 weeks after, treatment. The paired t test revealed no significant difference between the groups in terms of the means of the parameters before treatment (Table 1). The means of PPD, CAL and PBI are presented in Table 1. The inter-group comparisons 6 and 12 weeks after the interventions showed no significant difference (P ≥ 0.05, independent t test). However, the intra-group differences in parameter means were statistically significant over time (P < 0.001, paired t test and Friedman test). In addition, the means of VAS immediately after the intervention in the ultrasonic and laser groups were 2.67 ± 0.46 and 3.58 ± 0.50, respectively, indicating no significant difference between the groups (P = 0.169, Mann-Whitney test).

Discussion
The aim of periodontal treatment is biological restoration and reattachment of periodontal tissues to the root surface. So, in the first phase of periodontal treatment, mechanical debridement and root surface planing are performed with manual or ultrasonic tools. Due to the inefficiency of the abovementioned instruments, new systems such as lasers have been introduced. 5 In the current study, the clinical parameters of PPD, CAL and PBI were measured before, and 6 and 12 weeks after, treatment. The findings indicated significant improvement of these indices in both the ultrasonic and laser groups. Although improvements were more evident in the laser group, the difference was not statistically significant, which is in agreement with the results of some of the previous studies. 14,18,23,28 Rotundo et al 23 showed that CAL in the Er:YAG laser group was similar to the conventional group. In a clinical trial, Soo et al 28 argued that PBI and PPD reduction and CAL increase were higher in the conventional method than Er:YAG laser in the short term. On the other hand, some studies 13,17,18,24,26 stated that laser improved clinical periodontal parameters compared with the conventional method. These controversial results may be attributed to different laser radiation parameters (density, wavelength and energy level), different device manufacturers, the shape of laser tips, the expertise of the operator and different cut-points. Given the increased attachment of fibroblasts and periodontal ligament to the tooth surface following the creation of more surface roughness after laser therapy, 29 and the antibacterial effects on periodontal microorganisms, laser can be considered an adjunct therapy to other conventional treatment methods. 30,31 However, in our study there was no significant difference between laser and ultrasonic scaler in terms of clinical parameters. In the present study, VAS was assessed immediately after the intervention. Although the mean level of this parameter was a little higher in the laser group, the difference was not statistically significant. Studies have shown different results in this regard. However, Rotundo et al 23

Conclusion
According to the results of this study, the use of Er:YAG laser for SRP resulted in somewhat greater improvement of clinical parameters compared to the ultrasonic scaler; however, these differences were not significant. Er:YAG laser can therefore be used as an appropriate device for periodontal treatment.
Ethical Considerations This randomized clinical trial was approved by the ethical committee of Isfahan University of Medical Sciences (#392408) and clinical trial site (IRCT201402164877N18). Also, following a detailed explanation of the study to the patients, informed consent was taken from them.
Thromboprophylaxis of elderly patients with AF in the UK: an analysis using the General Practice Research Database (GPRD) 2000–2009

Objective To assess use of thromboprophylaxis in UK general practice among patients with atrial fibrillation (AF); to investigate whether elderly patients are less likely to receive anticoagulation therapy than younger patients. Design Retrospective cohort study. Setting UK General Practice Research Database (GPRD). Patients Aged ≥60 years with a new diagnosis of AF (2000–2009). Interventions None. Main outcome measures The main outcome measure was initiation of warfarin in the first year following diagnosis. Patients were categorised by stroke risk (CHADS2 score) and bleeding risk (HAS-BLED score). Results 81 381 patients were identified (21% aged 60–69 years, 37% aged 70–79 years, 42% aged 80+ years). Patients aged 80+ years were significantly less likely to be initiated on warfarin than younger patients, adjusted for gender, practice and comorbidities; 32% of patients aged 80+ years received warfarin compared with 57% aged 60–69 years (p<0.0001), and 55% aged 70–79 years (p<0.0001). For all strata of CHADS2/HAS-BLED scores, patients aged 80+ years were significantly less likely to be treated with warfarin than younger patients. Logistic regression showed that female sex, low Body Mass Index (BMI), age over 80 years, increasing HAS-BLED score and dementia were independently associated with reduced use of warfarin. Stroke/Transient Ischaemic Attack (TIA), hypertension, heart failure and left ventricular systolic dysfunction were associated with increased use. Patients with HAS-BLED>CHADS2 were less likely to be initiated on warfarin. Higher CHADS2 scores were associated with increased anticoagulation use. Conclusions Anticoagulation is being under-used in patients with AF aged 80+ years, even after taking into account the increased bleeding risk in this age group.

INTRODUCTION
Atrial fibrillation (AF) is the most common cardiac arrhythmia, and is associated with high morbidity and mortality, with stroke being the most significant complication. 1 AF increases the risk of stroke 5-fold, and accounts for around 15% of all strokes. 2 While AF can affect adults of any age, the prevalence increases with age: 3.8% among people aged >60 years rising to 9.0% among those aged >80 years. 3 AF is a growing problem, projected to increase with the ageing population and the increased survival of patients with chronic cardiac disorders, such as ischaemic heart disease and congestive heart failure (CHF), that predispose to AF. 4 Oral anticoagulation treatment with a vitamin K antagonist, traditionally warfarin, has been demonstrated to be highly effective, reducing the relative risk of stroke in patients with AF by around two-thirds, with a typical absolute annual risk reduction of 2.7%. 5 Guidelines recommend that the decision to use anticoagulation is primarily based around an assessment of stroke risk in atrial fibrillation. 6 Older age is recognised as one of the key risk factors. With regard to the two risk stratification schemes in common use, the CHA2DS2-VASc score recommends that all people with AF aged ≥75 years should be anticoagulated, and the CHADS2 score that anticoagulation be considered for all people in this age group, and recommended in the presence of an additional risk factor. 7 However, recent studies have found that warfarin prescription was unrelated to CHADS2 score.
8 9 Recent National Institute for Health & Clinical Excellence (NICE) guidance recommends use of anticoagulation for all people aged ≥75 years in AF. 10 Despite this, less than half the patients aged over 80 years receive warfarin among both hospitalised and outpatient populations. [10][11][12][13][14][15][16] A UK study found that between 1994 and 2003, patients with AF aged 85 years and above were five times less likely to be treated with anticoagulants than patients aged 55-64 years. 17 Bleeding risk is often cited as a reason for non-use of warfarin among elderly patients, in which case, aspirin is often used as an alternative. 11 14 However, the Warfarin versus Aspirin for Stroke Prevention in Octogenarians with AF (WASPO) trial showed that in patients aged 80-89 years there were significantly more adverse events including bleeding in patients treated with aspirin compared with warfarin. 18 This is consistent with the Birmingham Atrial Fibrillation Treatment of the Aged (BAFTA) study which found no significant difference in risk of major haemorrhage between warfarin and aspirin in people aged ≥75 years. 19 In the light of the stronger evidence base for using anticoagulation in the elderly, 19 the development of scores to quantify bleeding risk in atrial fibrillation, 20 and the emergence of new anticoagulants, it is timely to examine whether the underuse of anticoagulation in the elderly persists, and the extent to which this can be explained by risk of bleeding. This study sought to examine anticoagulation treatment of elderly patients (80+ years) compared with younger patients (60-69 years, 70-79 years) within a cohort of patients with AF from the UK population, and to determine the extent to which any differences in treatment prescribing among different age groups might be explained by bleeding risk. Study design This was a cohort study of patients from the General Practice Research Database (GPRD) 21 with a first diagnosis of AF, between 2000 and 2009. The GPRD includes approximately three million residents in the UK registered with over 600 general practitioners (GPs). The database includes demographics, medical diagnoses, referrals and prescriptions. AF diagnoses were identified using the GPRD Read codes (see appendix 1). To be eligible, patients had to be flagged as having data of an acceptable quality (as defined by GPRD), and be registered with practices whose data quality met the criteria for an 'up-to-standard' practice. Each patient had to have at least 12 months of data between registering with the practice and their first diagnosis of AF. Patients had to be over the age of 60 years at the time of first diagnosis of AF. From this cohort, patients who were initiated on warfarin in the year following the AF diagnosis were identified. Warfarin initiation was defined as at least one prescription for warfarin within the first year following AF diagnosis (see appendix 2 for warfarin codes). Data analysis Descriptive statistics were recorded at baseline for the AF cohort at first diagnosis of AF, and for the cohort of patients treated with warfarin at first prescription for warfarin (if within 12 months of diagnosis). Comorbid conditions were defined using GPRD Read codes (see Appendix 1). Patients were split into three age groups: 60-69 years, 70-79 years, and 80+ years based upon age at AF diagnosis. Differences between groups were tested using χ 2 tests, with the group of patients aged 80+ years as the reference group. 
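The inclusion rules and the outcome definition above are purely mechanical, so they translate directly into a data-processing step. The sketch below is illustrative only: the GPRD extract is not public, and all column names (patient_id, af_diagnosis_date, acceptable_quality, and so on) are hypothetical stand-ins for the corresponding GPRD fields.

```python
import pandas as pd

def build_af_cohort(patients: pd.DataFrame, prescriptions: pd.DataFrame) -> pd.DataFrame:
    # Eligibility: acceptable-quality patient data, up-to-standard practice,
    # >=12 months registered before first AF diagnosis, and age >=60 at diagnosis.
    af = patients[
        patients["acceptable_quality"]
        & patients["up_to_standard_practice"]
        & ((patients["af_diagnosis_date"] - patients["registration_date"]).dt.days >= 365)
        & (patients["age_at_af_diagnosis"] >= 60)
    ].copy()

    # Age bands used throughout the analysis (80+ years is the reference group).
    af["age_group"] = pd.cut(af["age_at_af_diagnosis"],
                             bins=[60, 70, 80, 200], right=False,
                             labels=["60-69", "70-79", "80+"])

    # Warfarin initiation: at least one warfarin prescription within a year of diagnosis.
    warf = prescriptions[prescriptions["drug"] == "warfarin"]
    merged = af.merge(warf, on="patient_id", how="left")
    within_year = (merged["prescription_date"] > merged["af_diagnosis_date"]) & \
                  (merged["prescription_date"] <= merged["af_diagnosis_date"] + pd.Timedelta(days=365))
    initiated = merged.loc[within_year, "patient_id"].unique()
    af["warfarin_initiated"] = af["patient_id"].isin(initiated)
    return af
```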
Patients were split between warfarin-treated and warfarinuntreated, based on whether they were initiated on warfarin within their first year following AF diagnosis. Patients within the AF cohort were categorised into risk groups at baseline using two commonly used risk scores: CHADS 2 and CHA 2 DS 2 -VASc. CHADS 2 score allocates one point each for CHF, hypertension, age >75 years, diabetes mellitus and two points for a prior stroke/TIA. The CHADS 2 score was used to stratify patients within the analysis, as this method is most widely used. The CHA 2 DS 2 -VASc score incorporates the additional risk factors of vascular disease, age 65-74 years, and female gender, and gives two points each to age ≥75 years and prior stroke/TIA/ thromboembolism, and one point each to all other factors. The HAS-BLED score (hypertension, abnormal renal/liver function, stroke, bleeding history or predisposition, labile international normalised ratio (INR), elderly (>65 years), drugs/ alcohol) is recommended to assess the bleeding risk of patients with AF when deciding whether to prescribe anticoagulation. 22 Hypertension was defined as a diagnosis of hypertension, or a systolic blood pressure reading of at least 160 mmHg in the last year. Abnormal renal function required a patient to have a Read code for chronic dialysis, renal transplant, chronic kidney disease stage 5, or a serum creatinine level of 200 mmol/l or above. Abnormal liver function included chronic hepatic disease, cirrhosis or significant hepatic derangement. Bleeding history or predisposition was defined as patients with a record of a serious bleed or anaemia in the previous year, and a labile INR required that the patient was prescribed warfarin in the year prior to AF diagnosis, and had a time in therapeutic range lower than 60% in that year. Drugs refer to Non-steroidal Antiinflammatory Drugs (NSAID) or antiplatelet use, and patients were allocated one point if they had at least two prescriptions for either of these in the latest year, and another point for a diagnosis of alcoholism in the latest year. Pisters et al proposed that if HAS-BLED score is greater than CHADS 2 score in patients with CHADS 2 ≥2, then anticoagulation should not be given due to risk of bleeding. 22 The percentage of patients treated with warfarin in each age group was split by HAS-BLED > CHADS 2 and HAS-BLED≤CHADS 2 . Logistic regression was used to identify the factors which affected whether patients were initiated on warfarin. Results were found to be significantly different between sexes, so men and women were modelled separately in order to produce clinically useful estimates. The results were adjusted for practice, to take into account differential prescribing practices between practices, as well as regional variation, by including dummy variables for each practice in the model. Logistic regression models were fitted using SAS software, V.9.2 (SAS Institute Inc, Cary, North Carolina, USA) using PROC LOGISTIC. Further logistic regression models were used to investigate whether stroke risk (measured using CHADS2 score) had an effect on whether men and women were treated with warfarin, adjusted for age and practice. Over the 10-year study period (2000-2009), there was a trend towards increased prescribing of warfarin in patients with AF, which was consistent across the three age groups. 
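Because all three scores are simple point sums over the comorbidity flags defined above, they are easy to reproduce in code. The sketch below follows the point allocations as summarised in this paper (age > 75 years for the CHADS2 age item, and one point each for the NSAID/antiplatelet and alcoholism items of HAS-BLED); it is a minimal illustration, not a validated clinical scoring tool.

```python
def chads2(chf, hypertension, age, diabetes, prior_stroke_tia):
    # One point each for CHF, hypertension, age > 75 years and diabetes; two for prior stroke/TIA.
    return int(chf) + int(hypertension) + int(age > 75) + int(diabetes) + 2 * int(prior_stroke_tia)

def cha2ds2_vasc(chf, hypertension, age, diabetes, prior_stroke_tia, vascular_disease, female):
    score = int(chf) + int(hypertension) + int(diabetes) + int(vascular_disease) + int(female)
    score += 2 if age >= 75 else (1 if age >= 65 else 0)   # age 65-74 -> 1 point, >=75 -> 2 points
    return score + 2 * int(prior_stroke_tia)

def has_bled(hypertension, abnormal_renal, abnormal_liver, stroke, bleeding_history,
             labile_inr, age, nsaid_or_antiplatelet, alcoholism):
    # One point per item; the drugs/alcohol criterion can contribute up to two points.
    return (int(hypertension) + int(abnormal_renal) + int(abnormal_liver) + int(stroke)
            + int(bleeding_history) + int(labile_inr) + int(age > 65)
            + int(nsaid_or_antiplatelet) + int(alcoholism))

# Example: an 82-year-old woman with hypertension and a prior TIA scores
# chads2 = 4 and cha2ds2_vasc = 6.
```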
The proportion of patients aged 80+ years initiated on warfarin following AF diagnosis increased from 25% to 37% between 2000 and 2009, but was still much lower than the proportion in younger patients (48% to 61% in patients aged 70-79 years, and 54% to 55% in patients aged 60-69 years). Logistic regression models of whether warfarin was initiated in the year following AF diagnosis are presented (table 2). For both men and women, age was the strongest independent predictor of warfarin use. A patient aged 60-69 years, or 70-79 years, was more than twice as likely to be initiated on warfarin following a diagnosis of AF, than a patient with the same BMI, gender and comorbidities aged ≥80 years (table 2). Having adjusted for other factors, patients with BMI <20 kg/m 2 were significantly less likely to receive warfarin treatment than patients with BMI 20-25 kg/m 2 . Patients with higher BMIs were increasingly likely to be treated with warfarin than patients with BMI 20-25 kg/m 2 . Increasing bleeding risk, as measured using HAS-BLED score, reduced the probability that a patient was treated with warfarin. In men and women, hypertension, heart failure, reduced left ventricular ejection fraction, thromboembolism and a history of stroke or TIA, all independently increased the likelihood that a patient received warfarin. Paradoxically, men with diabetes were less likely to be anticoagulated, and presence of diabetes was not associated with use of anticoagulation in women. In both sexes, dementia halved the chance that warfarin was used. Stroke and bleeding risk analysis As would be anticipated, CHADS 2 scores rise with age, with 76% of patients aged 80+ years having a CHADS 2 score of 2 or above compared with 56% of patients aged 70-79 years, and 24% of patients aged 60-69 years. Patients in the 80+ years age group had higher HAS-BLED scores than patients aged 60-69 years; 68% of patients aged 80+ years had a HAS-BLED score ≥2 compared with 39% of patients aged 60-69 years (and 66% patients aged 70-79 years) (table 1). Patients with HAS-BLED>CHADS 2 were slightly less likely to be initiated on warfarin. This effect was greater in patients with CHADS 2 ≥2, and in patients aged 60-69 years (table 3). For all strata of CHADS 2 /HAS-BLED scores in table 3 (bar one, due to small numbers), patients in the 80+ years age group were significantly less likely to be treated with warfarin than those of younger ages. Logistic regression models investigating CHADS 2 (table 4) found evidence in both men and women of a significant increase in the chance of being prescribed warfarin as CHADS 2 score increased, when adjusted for age group and practice. DISCUSSION Patients with AF, aged 80 years or over, are much less likely to be treated with warfarin than younger patients. This holds true if the data are adjusted to take into account factors that might deter a clinician from prescribing warfarin, such as frailty (indicated by low BMI), bleeding risk and Alzheimer's disease. While the proportion of people over 80 years treated with warfarin has increased moderately over the study period (2000-2009), it remains substantially lower than the proportion treated in the younger age groups. Logistic regression analysis demonstrated that a patient aged 60-79 years is more than twice as likely to be initiated on warfarin following a diagnosis of AF, than a patient with the same gender, BMI, comorbidities and bleeding risk aged over 80 years (table 2). 
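The adjusted comparisons reported in table 2 correspond to a logistic regression of warfarin initiation on age group, BMI band, HAS-BLED score, comorbidities and practice dummies, fitted separately for men and women. A rough Python analogue of that model specification (the original analysis used SAS PROC LOGISTIC; variable names here are hypothetical) might look like this:

```python
import numpy as np
import statsmodels.formula.api as smf

# Minimal sketch of the warfarin-initiation model; df_one_sex holds one sex only,
# since men and women were modelled separately.
def fit_warfarin_model(df_one_sex):
    formula = (
        "warfarin_initiated ~ C(age_group, Treatment(reference='80+'))"
        " + C(bmi_band) + has_bled + stroke_tia + hypertension"
        " + heart_failure + lv_systolic_dysfunction + dementia + C(practice)"
    )
    return smf.logit(formula, data=df_one_sex).fit(disp=0)

# Exponentiated coefficients give the adjusted odds ratios reported in table 2:
# odds_ratios = np.exp(fit_warfarin_model(men_df).params)
```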
Our finding of low warfarin use among elderly patients in the UK is consistent with findings of US studies in hospitals and in primary care, which found warfarin prescribed in only 40%-45% of patients with AF, with age increasing the risk of not being treated. [11][12][13][14] Our findings are also consistent with an earlier analysis of patients with AF from the GPRD database in 1996 that found among potential candidates for anticoagulation, only 22% of those aged 70+ years were prescribed warfarin compared with 49% among patients aged 40-60 years. 23 While a trend towards increasing warfarin prescribing practice in recent years has been demonstrated in our study, the results show that current prescribing practice is not in step with the current evidence base, and that anticoagulation therapy is particularly under-used in elderly patients. This is important, since there is now a clear evidence base that anticoagulation is effective for stroke prevention in elderly people in atrial fibrillation. 19 Indeed, a recent non-randomised study found that warfarin use in this age group not only was associated with reduced stroke risk, but also with improved life expectancy. 9 This study found that in the UK, women with AF are less likely to be prescribed warfarin than men with the same risk factors for stroke, even though female sex has been associated with increased risk of stroke in AF. 4 This is consistent with findings in Scotland that women with AF were 25% less likely to receive warfarin than men, 24 and a Canadian study which showed that women were 54% less likely to receive warfarin, but only in the subgroup of patients aged ≥75 years. 25 However, a more recent Canadian study found no evidence of reduced usage of warfarin in women compared with men. 26 It is difficult to explain the disparity of use of anticoagulation in women as compared with men. Gender inequalities have been The factors that determine whether warfarin is prescribed in clinical practice are complex, and our study was not designed to investigate the reasons behind clinical decision making. Physicians often avoid anticoagulation in elderly patients due to fear of bleeding, fall risk, non-adherence and monitoring concerns. [13][14][15] While the efficacy of warfarin in stroke prevention is established, warfarin has many limitations, including a narrow therapeutic index, slow onset and offset of action, multiple drug and food interactions, and a requirement for close laboratory monitoring of coagulation via the International Normalised Ratio (INR) and subsequent dose adjustments. 28 Close monitoring necessitates regular clinic visits with increased financial burden and inconvenience to patients; thus, many eligible patients choose not to use warfarin. 29 However, patient education and self-monitoring may promote better compliance and INR control among elderly patients with AF. 30 Unlike recent Swedish and Canadian studies, in this study, CHADS 2 scores predicted anticoagulation use in a British population. 8 9 The difference between these findings may reflect international variation in practice, or may be related to issues of study design: for instance, the present study was restricted to patients aged 60 years and over; and the Swedish study was smaller, so it cannot exclude associations of a similar magnitude to the present study. 
The recent development of new anticoagulants, such as dabigatran, rivaroxaban and apixaban, represent potential new therapies for patients with AF that may circumvent many of the inconveniences of warfarin, such as regular INR checks, dietary restrictions and drug interactions. How new agents will be used in the management of elderly patients with AF in everyday practice remains to be established; however, recent NICE guidance recommends the use of dabigatran in atrial fibrillation under the licensed indication, which includes patients aged >75 years, and those aged >65 years with an additional risk factor. 10 Study limitations In this study, patients with at least one prescription for warfarin in their GP record were assumed to have been initiated on warfarin. The GPRD records prescriptions issued rather than dispensed, thus, it would not be possible to confirm whether a patient was taking the medication from an initial prescription alone. However, as this study aimed to investigate the prescribing decision rather than the treatment, this will not have introduced major misclassification. As discussed above, clinical practice is driven by other factors than are in the clinical guidelines such as patient preference, that may affect the decision as to whether warfarin is initiated, which are not recorded in GPRD. It might be that these factors have confounded the associations that we observed between age and sex and use of warfarin. Socioeconomic factors were not taken into account in our analysis, however, an earlier analysis of anticoagulation use in AF using general practice data suggests that these were not significant confounders of any association with anticoagulation use. 17 While this study was able to look at the extent to which warfarin use was influenced by bleeding risk, as assessed using the HAS-BLED score, this tool does have limitations in terms of accuracy. 20 Therefore, it is possible that we have not fully accounted for bleeding risk in our models. Nevertheless, we did find that higher HAS-BLED scores were associated with lower use of warfarin, suggesting that this tool does have reasonable utility as a means of adjusting for bleeding risk in this analysis. CONCLUSIONS Our analysis has demonstrated that age is much the strongest single predictor of whether or not anticoagulation is used in AF. The low use of warfarin in people aged 80 years is not explained by increased comorbidity or increased bleeding risk, since marked differences in use of warfarin were observed when we compared use in people aged 80+ years with other ages, after we stratified by these factors, or adjusted for them. This suggests that there is genuine under-use of anticoagulation in the elderly. Strategies need to be developed to improve the uptake of anticoagulation in this age group.
2016-05-12T22:15:10.714Z
2012-10-19T00:00:00.000
{ "year": 2012, "sha1": "315295b92b3a4ab4f1a7a373eea781af7c99e034", "oa_license": "CCBYNC", "oa_url": "https://heart.bmj.com/content/99/2/127.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "315295b92b3a4ab4f1a7a373eea781af7c99e034", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4811039
pes2o/s2orc
v3-fos-license
Defocus Blur Detection and Estimation from Imaging Sensors Sparse representation has been proven to be a very effective technique for various image restoration applications. In this paper, an improved sparse representation based method is proposed to detect and estimate defocus blur of imaging sensors. Considering the fact that the patterns usually vary remarkably across different images or different patches in a single image, it is unstable and time-consuming for sparse representation over an over-complete dictionary. We propose an adaptive domain selection scheme to prelearn a set of compact dictionaries and adaptively select the optimal dictionary to each image patch. Then, with nonlocal structure similarity, the proposed method learns nonzero-mean coefficients’ distributions that are much more closer to the real ones. More accurate sparse coefficients can be obtained and further improve the performance of results. Experimental results validate that the proposed method outperforms existing defocus blur estimation approaches, both qualitatively and quantitatively. Introduction Blur is an image degradation that commonly appears in consumer-level images obtained from a variety of image sensors [1][2][3][4]. Defocus blur is one type of blur degradation that results from defocus and improper depth of focus. For scenes with multiple depth layers, however, only the layer on a focal plane will focus on the camera sensor, which leads to others being out of focus. This phenomenon may sometimes strengthen a photo's expressiveness, while, in most cases, it will lead to loss of texture details or incomprehensible information. In many scenarios, detecting and estimating the blur pixels can benefit a variety of image applications including but not restricted to image deblurring, image segmentation, depth estimation, objection recognition, scene classification and image quality assessment. Assume that the defocus process can be modeled as a thin lens imaging system. Figure 1 illustrates the focus and defocus processes. Only the rays emitting from a focal plane will converge to a single point on a sensor plane and a sharp scene will appear, while the rays emitting from other planes will reach different points on a sensor plane and form circle regions. The circle region is called the circle of confusion (CoC) that results in defocus blur. From Figure 1, it is easy to verify that the larger the distance between the focal plane and the non-focal plane is, the greater the strength of defocus. A number of methods for image blur analysis have been recently proposed; however, most of them focus on solving deblurring problems. On the contrary, there are a limited number of methods to explore defocus blur detection and estimation and the application is still far from practical. They assume that the defocus blur caused by multiple depth layers can be modeled by a latent image convolving with a spatial-variant kernel. In addition, the spatial-variant kernel is commonly assumed to be a disk or a Gaussian kernel. Therefore, the estimation of defocus blur map can be regarded as a deconvolution task. Cheong et al. [5] modeled a defocus blur kernel to be a Gaussian point-spread-function (PSF) and proved the amount of blur depends on the squared variance of the Gaussian PSF. In this method, the blur amount can be calculated from the first and second order derivatives. Oliveira et al. [6] defined the out-of-focus as a uniform disk kernel. 
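Under the thin-lens picture above, spatially varying defocus is usually approximated as convolution of the latent sharp image with a disk or Gaussian point-spread function whose width grows with distance from the focal plane. The sketch below is one cheap way to generate such synthetic Gaussian-PSF defocus for experimentation; it is an illustration of the degradation model, not any of the cited authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_defocus(image, sigma_map, levels=(0.5, 1.0, 1.5, 2.0, 2.5)):
    """Approximate spatially varying defocus blur with a Gaussian PSF.

    image: 2D grayscale array; sigma_map: per-pixel PSF standard deviation.
    Each pixel takes its value from the uniformly blurred copy whose sigma is
    closest to the requested local sigma (a common cheap approximation).
    """
    levels = np.asarray(levels)
    blurred = np.stack([gaussian_filter(image.astype(float), s) for s in levels])
    nearest = np.abs(sigma_map[None, ...] - levels[:, None, None]).argmin(axis=0)
    return np.take_along_axis(blurred, nearest[None, ...], axis=0)[0]
```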
This method is based on the assumption that the defocus blur kernel is characterized by its radius and can be provided by parametric models for each pixel efficiently. Zhang et al. [7] supposed that the defocus blur kernel to be a Gaussian function with standard σ and estimated the blur map by utilizing edges information. Then, a full blur map can be generated by utilizing a K nearest neighbors (KNN) matting method. The performances of these methods relay heavily on the accuracy of the PSFs, which is a challenging task in practical application. There have been a series of methods proposed to handle a defocus blur problem. Conventional methods deal with defocus blur by utilizing a set of images of the same scene [8,9]. Using multiple focus settings, the defocus blur can be estimated during a deblurring process. Levin et al. [10] proposed modifying the aperture of the camera lens by inserting a patterned occluder so that they can recover a refocus image from a single input defocus image. Zhou et al. [11] used two coded apertures to complement each other and obtained a batter defocus blur measure for the two captured images. Xiao et al. [12] estimated a defocus blur kernel and restored a sharp image from time-of-flight depth camera. However, all of these methods require additional information or additional equipment that limit their applications in practice. Since the level of defocus blur is intimately related with depth variation, depth information is of great value to defocus blur detection and estimation. A number of recently introduced methods aim at obtaining high quality in-focus images via scene depth from a single input image. Hu et al. [13] estimated and removed defocus blur from single images by utilizing depth estimation. However, the fine performance depended on the precisely separated depth layers and preassigned average depth of each layer, which may engender high computational cost and estimation error. Xiao et al. [12] proposed a joint optimization problem based on a model that the spatial-varying defocus blur kernel can be calculated for a given depth map. In this method, the defocus blur kernel matrix can be updated according to a currently estimated depth map. In recent years, a variety of gradient and frequency based methods have been proposed to handle defocus blur analysis [14][15][16][17][18]. Elder and Zuker [19] firstly proposed a method for local blur estimation by utilizing the first and second order gradient information. Then, numerous methods have been proposed to detect and estimate defocus blur. Gradient-based methods [20][21][22] relied on a heavy-tailed distribution, which can be interpreted as an observation that the gradient distribution in a clear region should have more primarily small or zero gradient magnitudes. Frequency based methods [22,23] modeled defocus blur analysis exploiting the fact that the blur process decreases high-frequency components. Liu et al. [22] developed spectrum information and several blur features to classify blur images. Combining with the spectral and spatial domains, the method [24] utilized local power spectrum slope and total variation to assess image sharpness and estimate defocus blur. In [25], Shi et al. addressed the blur detection problem by constructing a combination of three local blur feature representations including image gradient distribution, Fourier domain descriptor, and local filters. Then, the blur map is formed in a discriminative way by utilizing a naive Bayesian classifier. 
Sparse representation [26][27][28][29] has been known to be a very powerful statistical image modeling technique and successfully used in various low level restoration tasks. Shi et al. [30] has recently proposed a just noticeable defocus blur detection (JNB) method. However, it tends to inaccurately estimate the parametric distributions for the sparse coefficients and decrease the performance of defocus blur detection and estimation. In this paper, a new method for blur detection and analysis is proposed. First, we use the principal component analysis (PCA) [31] technique to learn a set of compact dictionary and propose an adaptive domain selection scheme for sparse representation. Second, the proposed method learns nonzero-mean parametric distributions for coefficients based on the observation that nonzero-mean I.I.D Laplacian distributions do not fit the real coefficients' distributions. Lastly, a blur strength measurement method is presented to evaluate the degree of defocus blur. Experimental results on various images show that the proposed method achieves better results than other approaches both in visual quality and evaluating indictor. The paper is organized as follows. Section 2 introduces the general sparse representation models. Section 3 describes the details of adaptive domain selection, coefficient distributions learning and strength estimation for defocus blur detection and estimation, respectively. In Section 4, experimental results and comparison with other approaches are presented. Blur Detection Model via Sparse Coding Sparse representation is a powerful technique that has been widely used in signal processing or image restoration tasks. Recently, most of the approaches defined that natural images can be modeled with sparse representation over an over-complete dictionary. Using an over-complete dictionary D ∈ R n×l that contains l dictionary atoms, an image patch can be represented as a sparse liner combination of these atoms where y ∈ R n is a given image patch, D is an over-complete dictionary, α is a coefficient vector corresponding to D and n is a residual vector. As illustrated above, D ∈ R n×l is an over complete dictionary, which means that n < l. This problem becomes untraceable because many different coefficients give rise to the same y. Hence, additional information is required to constrain the solutions [32]. The sparsest representation coefficient has been proposed to be the solution or min where • 0 is a l 0 norm that counts the number of the nonzero entries of vector α and ε is a small constant controlling the approximation error. Equations (2) and (3) use a l 0 norm in the constraint and this induces sparsity and indicates that any signal can be described by a sparse number of dictionary atoms. However, a l 0 norm is nonconvex, which results in l 0 -minimization of an NP-hard optimization problem. Thanks to [33], it has been proved that the l 1 norm is equivalent to the l 0 norm under certain conditions. Another sparse representation based method is proposed and can be expressed as where • 1 is a l 1 norm. Besides the sparse coefficient, the selection of the dictionary also influences the performance of sparse representation based methods. The constructions of dictionary can be generally categorized into iteratively updating dictionary [34] and the universal one [32]. In the iteratively update construction manner, the minimizing model of Equation (4) involves simultaneously computing two variables: α and D. 
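The l1-regularised problem of Eq. (4) is a standard lasso-type problem and, for a fixed dictionary, can be solved with simple iterative shrinkage. The sketch below is a generic ISTA solver illustrating that step; it is not the authors' implementation.

```python
import numpy as np

def sparse_code_ista(D, y, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||y - D a||_2^2 + lam*||a||_1 by iterative soft-thresholding (ISTA).

    D: (n, l) dictionary with roughly unit-norm columns; y: (n,) vectorised patch.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-fidelity gradient
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - y)       # gradient of 0.5*||y - D a||^2
        z = alpha - grad / L
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return alpha

# Example: y_hat = D @ sparse_code_ista(D, y) reconstructs the patch from a few atoms.
```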
It can be solved by the alternating minimization scheme, which is commonly adopted when dealing with multiple optimization variables. In each iteration, dictionary D is fixed to estimate the coefficient α of each image patch where α (n+1) is the coefficient at iteration n + 1 and D (n) is the dictionary at iteration n. Then, in the step of updating dictionary, each atom d where E is the residual component. In the universal construction manner, the K-SVD algorithm [32] designed a learning method based on the K-means clustering process and obtained over-complete dictionaries to achieve sparse signal representation. Given a set of training image patches, the K-SVD algorithm iteratively updates the sparse coding of the current dictionary and atoms of the dictionary. Defocus Blur Detection and Estimation by Adaptive Domain Selection and Learning Parametric Distributions for Coefficients In this section, the proposed method first presents the dictionaries learning method, which learns a series of compact dictionaries and adaptively assigns the optimal dictionary to each local patch as the sparse domain. All compact dictionaries are learned offline, and the proposed method online selects the dictionary. Then, we introduce sparse parametric distributions by nonlocal structural similarity for sparse coefficients. The improved method can be modeled as where y i is a patch extracted from an input defocus image and each patch is vectorized as a column vector of size n × 1. D k i is the optimal compact dictionary that is adaptively selected for the given patch y i . The training method for D k i is described in Section 3.1. α i is the sparse coefficient for patch y i over D k i . β i and θ i denote the mean and standard derivation for α i , respectively. In addition, ε is a small constant. Sparse Representation by Adaptive Domain Selection The sparse representation based approaches can achieve better performance in image restoration applications. However, many sparse decomposition models rely on learning an universal and over-complete dictionary to represent all image structures. The structures and contents vary remarkably across different images or different patches in a single image and the universal dictionary cannot satisfy all circumstances for defocus blur detection via sparse representation. In addition, it has been proved that sparse decomposition over a set of highly redundant basis is potentially unstable [35]. Therefore, an improved defocus blur estimation scheme, which prelearns a set of compact dictionaries, and adaptively assigning optimal dictionary to each local patch is proposed. In order to learn the compact dictionary set for representing image structures, we first construct a dataset of blur local image patches by collecting images slightly blurred by Gaussian kernel with standard deviation σ = 2.5 and cropped from them a rich amount of patches with size √ n × √ n. Let W = [w 1 , w 2 · · · w M ] ∈ R n×M be selected M blurred image patches. For better training performance, only pitches whose intensity variance, denoted by IntVar(w i ), that are within the range of Θ 1 and Θ 2 , i.e., Θ 1 < IntVar(w i ) < Θ 2 , are selected. In order to adaptively assign a dictionary to each local patch, the proposed method learns K compact dictionaries D k from the patch set W and generate K clusters from the patch set W by utilizing the K-means algorithm. Then, a dictionary can be learned from each of the K clusters and represent K distinctive patterns by the K dictionaries. 
Denote by {C 1 , C 2 , · · · C K } the K clusters and µ k the centroid of cluster C k . Meanwhile, K subsets W k are obtained by partitioning W, where W k is a matrix of dimension of n × l k and l k is the number of patches in W k . Now, we aim to learn a dictionary D k from the cluster W k , which indicates that all the elements in W k can be exactly represented by D k . Typically, the learning model can be formulated as where • F is Frobenius norm and A k denotes the coefficient matrix of W k over dictionary D k . λ denotes a parameter that balance the relationship between the data fidelity term and the regularization term. Minimizing model of Equation (8) involves simultaneously computing two variables: A k and D k . It can be solved by the alternating minimization scheme, which is commonly adopted when dealing with multiple optimization variables. However, utilizing Label (8) to learn the dictionary D k is stopped by some major issues. First, the optimizing task in Equation (8) involves simultaneously computing two variables: D k and A k , which is computational challenging and time consuming. More importantly, the result of Equation (8) is commonly assumed to be an over-complete dictionary, which is redundant in the signal representing process and may not take advantage of similar patterns after K-means clustering. Specifically, W k is constructed via K-means clustering and can be treated as that all elements in W k share the similar patterns. Therefore, we prefer a compact dictionary rather than an over-complete one. Here, the principal component analysis (PCA) [31] is applied to each subset W k , so that each compact dictionary D k can be constructed via elements with similar pattern. Let Φ k be the co-variance matrix of subset W k . Then, the proposed method can obtain an orthogonal matrix P k by applying PCA to Φ k . For the purpose of reducing dimensionality of dictionary D k , only the v eigenvectors corresponding to the first v largest eigenvalues in P k are selected to form the dictionary D v . Denote by D v = [p 1 , p 2 , · · · p v ] the constructed dictionary and let A v = D v W k . It is obvious that a decrease of v will lead to an increase of data fidelity term W k − D v A v 2 F and a decrease of sparse term A v 1 . The optimal dimension v r of v can be determined as In addition, D k = [p 1 , p 2 , · · · p v r ] is the compact dictionary corresponding to subset W k . Following this procedure, all K compact dictionaries D k from K subsets W k can be obtained. Figure 2 shows an example of the learned dictionary from a training dataset. With each compact dictionary D k = [p 1 , p 2 , · · · p v r ] learned, the proposed method can continue to assign an example y i to the most relevant compact dictionary in the dictionary set. Recall that the centroid µ k is available, and the most relevant dictionary can be selected by (10) Figure 2. One example of the K compact dictionaries learned by PCA. Learning Coefficient Distributions with Nonlocal Structural Similarity The JNB model [30] can achieve better results. However, due to the lack of nonlocal structural correlation [36], the JNB model tended to inaccurately estimate the parametric distributions for the sparse coefficients. It is easy to verify that the distribution of the common l 1 norm in Equation (4) equals an I.I.D zero-mean Laplaican distribution. Figure 3 shows the coefficients' distributions obtained by the JNB method and real distribution of a test image. 
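A compact way to prototype the offline clustering/PCA step and the online dictionary-selection rule of Eq. (10) is sketched below. For simplicity it keeps a fixed number of principal directions per cluster instead of the adaptive dimension v_r chosen via Eq. (9), and it assumes every cluster is well populated, so it should be read as an illustration of the idea rather than the exact procedure of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_compact_dictionaries(W, K=240, v=8):
    """W: (M, n) matrix of vectorised training patches (rows). Returns K PCA dictionaries."""
    km = KMeans(n_clusters=K, n_init=10).fit(W)
    dictionaries = []
    for k in range(K):
        Wk = W[km.labels_ == k]
        cov = np.cov(Wk, rowvar=False)                    # n x n covariance of cluster k
        eigvals, eigvecs = np.linalg.eigh(cov)
        keep = np.argsort(eigvals)[::-1][:v]              # leading principal directions
        dictionaries.append(eigvecs[:, keep])             # compact dictionary D_k (n x v)
    return dictionaries, km.cluster_centers_

def select_dictionary(patch, dictionaries, centroids):
    """Adaptive domain selection: pick the dictionary whose cluster centroid is nearest (Eq. (10))."""
    k = int(np.argmin(np.linalg.norm(centroids - patch, axis=1)))
    return dictionaries[k]
```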
As illustrated in Figure 3, the I.I.D zero-mean Laplaican distribution can not fit the real coefficient distribution. Based on this observation, we generalise the nonlocal structural similarity and propose a nonzero-mean I.I.D Laplaican distribution to estimate the distribution of sparse coefficient for defocus blur detection. First, the sparse model is extended based on the rich repetitive structures in blurred images. For each exemplar patch y i , a patch set Y i = [y i,1 , y i,2 , · · · , y i,h ] ∈ R n×h is built via a patch matting algorithm in a larger window centered at i to group patches similar to y i (including y i itself). Each column of Y i corresponds to a patch similar to y i . As the patches share similar structures, hence, we characterize the sparse representation of each patch in Y i as the same parametric distribution where m = 1, 2, · · · , h is the index from patch set Y i . y i,m and α i,m represent the m th patch in patch set Y i and the corresponding sparse coefficient, respectively. D k i denotes the pre-trained compact dictionary that adaptively selected for y i,m , and k i can be obtained following Equation (10). ε is a small constant. β i and θ i represent the mean and standard derivation, respectively. Next, the nonlocal similar patches are used to accurately estimate the distribution parameters β i and θ i . The expectation of patch y i is estimated by where w i,m = (1/c 1 )exp(− y i,m − y i /c 2 ), wherein c 1 and c 2 denote the normalization coefficient and a predefined constant, respectively. Then, the more accurate mean β i is estimated as With the grouped patch set and the mean β i estimated in Equation (13), the standard derivation θ i for α i,m (m = 1, 2, · · · , h) can be estimated as whereα i,m = D k i y i,m , is a small positive number to ensure that θ i is a non-zero value. Figure 4 shows the comparison of the coefficients' distributions of the proposed method, the JNB method and the real distribution of the same test image. It is clear that the coefficients' distributions learned by the proposed method is closer to the real distribution. Strength Estimation for Defocus Blur Denote by s = α i 0 the sparse coefficient value. To estimate the strength of defocus blur, the proposed method first collects images with different blurriness levels. The images are blurred with the Gaussian kernel of standard deviation σ ranging from 0.2 to 2.5. Then, the statistical relationships between the sparse coefficient value s and the corresponding blur standard deviation σ can be obtained and fitted into a logistic regression function s = 33.2071 1 + exp(6.5125 σ − 4.1029) Figure 5 shows the statistical relationships between the sparse coefficient value s and the corresponding blur standard deviation σ. With each sparse coefficient value calculated for an image patch, Equation (15) can be used to estimate the degree of defocus blur for each patch from a single defocus blur image. Experimental Results The performance of the proposed method was tested on defocus images dataset from [25]. The blurry regions in all tested images are masked out as ground-truth, which indicates the clear regions with respect to the defocus blur regions. In addition, the proposed method is also tested on 150 natural defocus blur images taken by consumer-level cameras or from the Internet with different defocus blur regions. Then, we compared the proposed method with several approaches including the JNB method [30], Vu's method [24] and Shi's method [25]. 
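For reference, the fitted relation quoted above (Eq. (15)) is a logistic curve that, written out with an explicit fraction, reads s = 33.2071 / (1 + exp(6.5125 σ − 4.1029)). Since it is monotonic in σ, it can be inverted to read off a per-patch blur estimate from the measured sparse-coefficient value, as in the following sketch.

```python
import numpy as np

def coefficient_from_sigma(sigma):
    """Fitted relation of Eq. (15): sparse-coefficient value s as a function of blur sigma."""
    return 33.2071 / (1.0 + np.exp(6.5125 * sigma - 4.1029))

def sigma_from_coefficient(s):
    """Invert Eq. (15) to estimate the local Gaussian blur sigma from a measured s."""
    return (np.log(33.2071 / s - 1.0) + 4.1029) / 6.5125

# Round-trip check: sigma_from_coefficient(coefficient_from_sigma(0.8)) ≈ 0.8
```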
All the comparisons are performed by directly utilizing the public codes. In the experiments, each image patch is extracted with size 8 × 8 to form a 64D vector. The compact dictionary set is learned over 125,000 patches cropped from 1250 blurry images, which blurred by a Gaussian kernel with σ = 2.5. The parameters of the proposed algorithm are set as follows: n = 64, ε = 0.175, M = 125,000, K = 240 and h = 24. The performance of the proposed method was evaluated on the visual quality, the precision-recall (PR) and execution time. In the comparisons of visual quality, all of the compared results are normalized to [0, 1]. Figure 6 shows a set of experimental results using an input defocus blur image in which blur amount changes continuously. Vu's method [24] combines both spectral and spatial sharpness to assess image sharpness. As shown in Figure 6c, Vu's method [24] can roughly separate in-focus foreground from defocus background. However, it cannot handle flat regions and intends to smooth the boundaries because of total variation, such as facula and grass. Shi's method [25] constructs a combination of three local blur feature representations including image gradient distribution, Fourier domain descriptor, and local filters. Then, the blur map is formed in a discriminative way by utilizing a naive Bayesian classifier. From Figure 6d, it shows that the results of Shi's method [25] contain several estimation errors, which lead to a difficulty in separating the clear regions from the blur regions. In addition, a much longer processing time is required because of the combination of three local features, which cannot be satisfied in practical. Although the JNB method [30] can achieve a better result in detecting flat regions as shown in Figure 6e, it cannot detect defocus blur at strong edges' regions and results in clear errors. The performance of the proposed method is shown in Figure 6f. The proposed method can result in much less artifacts and clear errors. Therefore, the proposed method performs better than the others both in separating clear regions from blur regions and representing details. (e) (f) Figure 6. Comparison of different defocus blur detection and estimation using an input image form dataset [25], whose blur amount changes continuously. Experimental results for defocus blur images whose blur amounts change abruptly are shown from Figures 7-9. Vu's method [24] assigns incorrect clear regions in the defocus blur regions of the background. The JNB method [30] contains too many clear errors and cannot produce significant differences between clear and blur regions. The proposed method is superior and the results of the proposed method are much closer to the ground-truth than that of others. In additional, experimental results for defocus blurred image generated with a HUAWEI mobile phone are shown in Figure 10. Our method provides better detection and estimation performance than the others. Although successfully extracting the ground-truth from the blur regions, the method in Figure 10c also has errors in outliers' regions. Figure 10d shows that the estimated local blur results have clear errors in separating the ground-truth from the blur regions. In Figure 10e, there are some artifacts in the result and the outline does not produce a significant difference to separate the ground-truth from the blur background. 
In contrast, Figure 10f shows that the proposed method produces favourable results in distinguishing the ground-truth from the blurred background regions and representing image details. To further evaluate the effectiveness of the proposed method, we compare our method with other approaches via precision-recall (PR) in Figure 11. Forty defocus blur images (20 from the dataset [25] and 20 from the naturally blurred images) are collected to test the proposed method. Figure 11 shows that the proposed method achieves the highest precision within almost the entire range of recall in [0, 1]. Table 1 shows the comparison of execution time using images from the dataset [25]. All experiments were performed under the same computer configuration. The proposed method outperforms other defocus blur detection approaches by requiring much less computational time. Figure 11. PR for different methods tested on defocus images from the dataset and naturally blurred images. Conclusions In this paper, we integrate the sparse representation model with adaptive domain selection and learned coefficient distributions for defocus blur detection and estimation. Compared with other defocus blur detection and estimation methods that rely on learning a universal and over-complete dictionary, the proposed method adaptively selects the optimal compact dictionary for each local patch and thus greatly improves the accuracy and execution time of the defocus blur estimation. Based on the observation that the distributions of coefficients generally cannot be fitted with an I.I.D. zero-mean Laplacian distribution, the proposed method learns parametric distributions from the gathered similar patches via nonlocal structural similarity. More accurate sparse coefficients can be obtained, which further improves the quality of the defocus blur detection. To estimate the strength of defocus blur, a criterion is defined to estimate the degree of defocus blur for each patch. Extensive experimental results show the superiority of the proposed method, both in visual quality and evaluation indices.
2018-04-26T23:46:28.703Z
2018-04-01T00:00:00.000
{ "year": 2018, "sha1": "e79f985730ece2f16d12f5658184f67af70279f9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/18/4/1135/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e79f985730ece2f16d12f5658184f67af70279f9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
118604838
pes2o/s2orc
v3-fos-license
Braneworld non-minimal inflation with induced gravity We study cosmological inflation on a warped DGP braneworld where inflaton field is non-minimally coupled to induced gravity on the brane. We present a detailed calculation of the perturbations and inflation parameters both in Jordan and Einstein frame. We analyze the parameters space of the model fully to justify about the viability of the model in confrontation with recent observational data. We compare the results obtained in these two frames also in order to judge which frame gives more acceptable results in comparison with observational data. INTRODUCTION Although the standard big bang cosmology has great successes in confrontation with observation, it suffers from some shortcomings such as the flatness, horizon and relics problems. It has been shown that an accelerating stage during the early time evolution of the universe withä > 0 (p < −ρ/3) has the capability to solve these problems. This is the early time inflationary stage. The inflation also provides a mechanism for production of density perturbations needed to seed the formation of structures in the universe. It has been shown that a simple scalar field (usually dubbed inflaton) whose energy dominates the universe and whose potential energy dominates over the kinetic term (the slow-roll conditions) gives the required inflation [1,2,3,4,5,6,7,8]. Despite the great successes of the inflation paradigm, there are several problems with no concrete solutions: natural realization of inflation in a fundamental theory, cosmological constant and dark energy problem, unexpected low power spectrum at large scales and egregious running of the spectral index are some of these problems [9]. Another unsolved problem in the spirit of the inflationary scenario is that we don't know how to integrate it with ideas of the particle physics. For example, we would like to identify the inflaton, the scalar field that drives inflation, with one of the known fields of particle physics. Also, it is important that the inflaton potential emerges naturally from underlying fundamental theory [6]. Braneworld scenarios open new windows to address at least part of these difficulties [10,11]. One of the various braneworld scenarios, is the model proposed by Dvali, Gabadadze and Porrati (DGP). This setup is based on * knozari@umz.ac.ir † n.rashidi@umz.ac.ir a modification of the gravitational theory in an induced gravity perspective [12,13,14,15]. This induced gravity term in the brane part of the action, leads to deviations from the standard 4-dimensional gravity over large distances. In the DGP model, the bulk is a flat Minkowski spacetime, but a reduced gravity term appears on the brane without tension. Some aspects of the braneworld inflation in the pure DGP setup are studied in [16,17]. Maeda, Mizuno and Torii have constructed a braneworld scenario which combines the Randall-Sundrum II (RS II) [18] and DGP models [19]. In this combination, an induced curvature term appears on the brane in the RS II model. This model has been called the warped DGP braneworld in literatures [20,21,22,23]. Some aspects of the inflation on the warped DGP setup are studied in Refs. [20,21,22,23]. We note that in a braneworld setup, the induced gravity on the brane arises as a result of quantum corrections. For instance, in the Randall-Sundrum II braneworld scenario quantum corrections arise due to induced coupling between brane matter and the bulk gravitons. 
The induced gravity leads to the appearance of terms proportional to the 4-dimensional Ricci scalar in the brane part of the action. While the RS model gives high-energy modifications to general relativity, the DGP braneworld produces a low energy modification that leads to latetime acceleration of brane universe even in the absence of dark energy. The RS II braneworld scenario modifies certainly the high energy, ultra-violet (UV) sector of the general relativity. Also the DGP gravity is essentially a low-energy, infra-red (IR) modification of the general relativity. Since the warped DGP scenario contains both UV and IR modifications simultaneously, inflation in a warped DGP setup is physically more reasonable than the pure RS II or DGP case. An important issue we are interested in this paper, is that whether high-energy inflation is subjected to the induced gravity effect. If the induced gravity correction takes the dominant role, then there is no RS-type high-energy regime in the early universe and we recover the DGP model. From another perspective, as the energy scale of inflation grows, the induced gravity correction acts to limit the growth of amplitude relative to the 4D case [24,25,26,27]. Although induced gravity is an IR modification of General Relativity and it seems that these modifications have nothing to do with inflation, however the mentioned points are important enough to be the reason for study of the warped DGP-braneworld inflation. We note also that as has been shown in [16], brane assisted inflation may be equally successful beyond general relativity. It has been proved that this is the case in the RS and DGP models provided certain conditions hold. Since we considered the normal branch of solutions, as has been shown in [16] the conditions for the occurrence of inflation are less restrictive. On the other hand, considering a braneworld setup has the advantage that bulk fields such as Radions (for stability purposes) can have projection(s) on the brane that is a suitable candidate for inflaton field on the brane. The projection of the bulk inflaton on the brane behaves just like an ordinary inflaton field in four dimensions in the low energy regime. While the origin of inflaton field in standard 4D case is not so trivial, in a braneworld picture we can imagine this field as a projection of bulk field(s). This may help to reduce at least part of lacuna of standard scenario. We note also that as has been shown in [11], inflation in warped de Sitter string theory geometries bypasses the difficulties of computing corrections to η slow-roll parameter relative to the effective four dimensional perspective. Since inflaton can interact with other fields such as the gravitational sector of the theory, in the spirit of scalar-tensor theories, we can consider a non-minimal coupling (NMC) of the inflaton field with intrinsic (Ricci) curvature on the brane. Braneworld model with scalar field minimally or non-minimally coupled to gravity have been studied extensively (see [28] and references therein). We note that generally the introduction of the NMC is not just a matter of taste. The NMC is instead forced upon us in many situations of physical and cosmological interest. There are compelling reasons to include an explicit non-minimal coupling in the action. For instance, non-minimal coupling arises at the quantum level when quantum corrections to the scalar field theory are considered. 
Even if for the classical, unperturbed theory this non-minimal coupling vanishes, it is necessary for the renormalizability of the scalar field theory in curved space. In most theories used to describe inflationary scenarios, it turns out that a non-vanishing value of the coupling constant cannot be avoided [29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55]. Nevertheless, incorporation of an explicit non-minimal coupling has disadvantage that it is harder to realize inflation even with potentials that are known to be inflationary in the minimal theory [29,30,31]. Using the conformal equivalence between gravity theories with minimally and non-minimally coupled scalar fields, for any inflationary model based on a minimally-coupled scalar field, it is possible to construct infinitely many conformally related models with a non-minimal coupling [32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57]. However, an important question then arises: are these conformally related frames really equivalent from physics viewpoint? This issue has been considered by several authors [58,59,60,61,62,63,64,65,66,67] and as a part of our primary goal, we are going to address this issue from a detailed comparison of the inflationary parameters in these two (Einstein and Jordan) frames. Based on the mentioned preliminaries, in this paper we study cosmological inflation on a warped DGP braneworld where inflaton field is non-minimally coupled to induced gravity on the brane. We present a detailed calculation of the perturbations and inflation parameters both in Jordan and Einstein frame by adopting quadratic and quartic potentials. We analyze the parameter spaces of the models with details to have a comparison between two frames and also in order to constraint these models in confrontation with recent observational data. II. BRANEWORLD INFLATION WITH INDUCED GRAVITY IN JORDAN FRAME The action of a warped DGP model in which a single scalar field is non-minimally coupled to induced gravity on the brane can be written in the following form where κ 2 5 is the five dimensional gravitational constant, R is the induced Ricci scalar on the brane, R (5) is 5dimensional Ricci scalar, λ is the brane tension and Λ 5 is the bulk cosmological constant. Also q is the trace of the brane metric, q µν . We remind that the mentioned action results in pure DGP model [12,13,14] if λ = 0 and Λ 5 = 0, and pure RSII model [18] if µ = 0 where µ is a mass scale which may correspond to the 4D Planck mass [19]. Also f (ϕ) shows an explicit non-minimal coupling of the scalar field with induced gravity on the brane. We note that the fields and their interactions on the brane at the classical level will be determined by the bulk physics through boundary conditions on the brane. For instance, if Φ is assumed to be a bulk scalar field, as has been shown in [68,69,70,71,72], the effective field on the brane will be ϕ = √ r c Φ and V (ϕ) = rc 2 V ( Φ √ rc ) through junction conditions on the brane. Also as we will show (see Eq. (6) below), Λ 5 = − κ 4 5 6 λ 2 . So, these parameters cannot be freely adjusted and are influenced by bulk physics. 
The generalized cosmological dynamics in this setup is given by the following Friedmann equation where ρ ϕ , the energy-density corresponding to the nonminimally coupled scalar field is defined as follows and the corresponding pressure is given by We note that in this paper a prime represents the derivative with respect to the scalar field and a dot marks derivative with respect to the cosmic time. Now let's to introduce the effective cosmological constant on the brane as Since we are interested in the inflationary dynamics driven by a scalar field with a self-interacting potential, we put the effective cosmological constant equal to zero. In this way, we find So, we can rewrite the Friedmann equation (2) as follows Also, the second Friedmann equation iṡ Variation of the action (1) with respect to the scalar field gives the following equation of motion In the slow-roll approximation, whereφ 2 ≪ V (ϕ) and ϕ ≪ |3Hφ|, energy density and equation of motion for scalar field take the following forms respectively 3Hφ Also, the Friedmann equation now takes the following form Now, we define the slow-roll parameters as follows In the slow-roll approximation and by using equation (12) we find and where by definition and As we will show, these parameters which reflect the braneworld and non-minimal nature of our model, in the large field regime intensify the increment of the slow-roll parameters. Inflation can be attained only if {ǫ, η} < 1; once one of these parameters reaches unity, the inflation phase terminates. We note that A(ϕ) and B(ϕ) are contributions originating from braneworld nature of the setup and also the non-minimal coupling of the scalar field and induced gravity on the brane. The number of e-folds during inflation is given by which in the slow-roll approximation can be written as where ϕ i denotes the value of ϕ when the universe scale observed today crosses the Hubble horizon during inflation and ϕ f is the value of ϕ when the universe exits the inflationary phase. For a warped DGP model with nonminimally coupled scalar field on the brane, this quantity in Jordan frame becomes After presentation of the main equations of the setup in Jordan frame, in the next section we consider the scalar perturbation of the metric since the key test of any inflation model is the spectrum of perturbations produced due to quantum fluctuations of the fields about their homogeneous background values. III. PERTURBATIONS IN JORDAN FRAME In a warped DGP braneworld model, the effective covariant equations on the brane for an arbitrary brane metric and matter distribution is given by [73] where τ µν is the total stress-tensor on the brane and is defined as where T µν , the energy-momentum tensor of a scalar field non-minimally coupled to induced gravity on the brane is given by Also we have where C N MRS is the five dimensional Weyl tensor and n A is the spacelike unit vector normal to the brane. Depending on the choice of gauge (coordinates), there are many different ways of characterizing cosmological perturbations. In longitudinal gauge, the scalar metric perturbations of the FRW background are given by [74,75,76] where a(t) is the scale factor on the brane, Φ = Φ(t, x) and Ψ = Ψ(t, x) are the metric perturbations. For the above perturbed metric, one can obtain the perturbed field equations as follows The anisotropic stress perturbation is defined as δπ ij = [∂ i ∂ j + (k 2 /3)δ ij ]δπ, where π is the trace of π ij . So, δπ E is the anisotropic stress perturbation. In the Eqs. 
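As a point of reference for the expressions above, the sketch below evaluates the familiar minimally coupled, four-dimensional slow-roll quantities for a quadratic potential. The braneworld/non-minimal corrections encoded in A(ϕ) and B(ϕ) are deliberately not included, so this is only the standard baseline against which the modified parameters of Eqs. (14)–(20) can be compared.

```python
import numpy as np

M_P = 1.0  # reduced Planck mass (units with M_P = 1)

def slow_roll_quadratic(phi, m=1e-6):
    """Standard 4D slow-roll parameters for V = m^2 phi^2 / 2 (no brane/NMC corrections)."""
    V, dV, d2V = 0.5 * m**2 * phi**2, m**2 * phi, m**2
    eps = 0.5 * M_P**2 * (dV / V) ** 2      # epsilon = 2 M_P^2 / phi^2
    eta = M_P**2 * d2V / V                  # eta     = 2 M_P^2 / phi^2
    return eps, eta

def efolds_quadratic(phi_i):
    """N = (1/M_P^2) * integral of V/V' dphi from phi_end to phi_i, with eps(phi_end) = 1."""
    phi_end = np.sqrt(2.0) * M_P
    return (phi_i**2 - phi_end**2) / (4.0 * M_P**2)

# Example: phi_i ≈ 15.6 M_P gives N ≈ 60, with eps ≈ eta ≈ 0.008 at horizon crossing.
```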
(28) and (29), ρ ef f and p ef f can be obtained from the standard Friedmann equation 3 ρ ef f as follows By using the continuity equation,ρ ef f + 3H(ρ ef f + p ef f ) = 0, one can deduce So, the perturbed effective density and pressure can be written as and The (gauge-invariant) scalar perturbations of E µ ν can be parameterized as an effective fluid with density perturbation δρ E , isotropic pressure perturbation 1 3 δρ E , anisotropic stress perturbation δπ E and energy flux perturbation δq E (see [77,78]). Also δρ ϕ and δp ϕ take the following forms where and where Equations (37) and (39) in the minimal case and within the slow-roll conditions reduce to δρ ϕ = dV dϕ δϕ and δp ϕ = − dV dϕ δϕ respectively. By perturbing the equation of motion of the scalar field (11), one obtains Now the scalar perturbations can be decomposed to an entropy or isocurvature perturbation (the projection orthogonal to the trajectory), and adiabatic or curvature perturbations (projection parallel to the trajectory). The isocurvature perturbations are generated if inflation is driven by more than one scalar field [24,25,79,80] or it interacts with other fields such as the induced gravity on the brane [26,27]. The adiabatic perturbations are generated if the inflaton field is the only field in inflation period [26,27,79,80,81]. Here, since the inflaton field is non-minimally coupled to the induced gravity on the brane, the entropy perturbations are presented in this setup [81,82]. A gauge-invariant primordial curvature perturbation ζ, can be defined as follows [83] This definition is valid to first order in the cosmological perturbations on scales outside the horizon. On uniform density hypersurfaces where δρ = 0, the above quantity reduces to the curvature perturbation, Ψ. In the warped DGP model and within the Jordan frame, we should redefine Eq. (42) as Now, by using the energy conservation equation for linear perturbations (in an arbitrary gauge) (44) we can find the variation of ζ with respect to the conformal time aṡ whereρ ef f andṗ ef f are given by time derivatives of equations (32) and (33) respectively. One can split the pressure perturbation (in any gauge) into adiabatic and entropic (non-adiabatic) parts (see for instance Ref. [84]) where c 2 s =ṗ ef ḟ ρ ef f is the sound effective velocity. The nonadiabatic part is δp nad =ṗ ef f Γ , where Γ represents the displacement between hypersurfaces of uniform pressure and density. From equations (34)-(40) we can deduce Using the equations (28)-(30) we can rewrite this relation as where K, J and I are defined as and respectively. Now we can rewrite the equation governing on the variation of ζ versus the time in terms of the model's parameters. From equations (44)-(48) we finḋ In the minimal case and within the standard model, the entropy perturbation vanishes for long wavelength; we haveζ = 0 and the primordial spectrum of perturbation is due to adiabatic perturbations. But, it is obvious from equation (48) that in a DGP-inspired non-minimal setup, there is a non-vanishing contribution of the non-adiabatic perturbations, leading to non-vanishingζ, which affects the primordial spectrum of perturbation. We note that isocurvature perturbations are free to evolve on superhorizon scales, and the amplitude at the present day depends on the details of the entire cosmological evolution from the time that they are formed. 
On the other hand, because all super-Hubble radius perturbations evolve in the same way, the shape of the isocurvature perturbation spectrum is preserved during this evolution [85,86]. Here we are going to obtain scalar and tensorial perturbations in our model. We take into account the slow-roll approximation at the large scales, k ≪ aH, where we need to describe the non-decreasing modes. Then by using the relation between Ricci scalar and H andḢ, we find from equation (41) 3Hδφ We note that the reason for large scale assumption is that the scales of cosmological interest (e.g. for largescale CMB anisotropies) have spent most of their time far outside the Hubble radius and have re-entered only relatively recently in the Universe history. In this respect, in the large scale the condition k ≪ aH is an acceptable assumption. As has been shown in Refs. [87,88], when this condition is satisfied,Φ,Ψ andΦ can be neglected. In fact, for the longitudinal post-Newtonian limit to be satisfied, we require that ∆Ψ ≫ a 2 H 2 × (Ψ,Ψ,Ψ), and similarly for other gradient terms [87,88]. For a plane wave perturbation with wavelength λ, we see that H 2 Ψ is much smaller than ∆Ψ when λ ≪ 1 H . The requirement thatΨ be also negligible implies the condition d log Ψ dζ ≪ 1 (λH 2 ) 2 (with ζ = log a), which holds if condition λ ≪ 1 H is satisfied for perturbation growth. This argument can be applied forΨ and the other metric potential, Φ too. By adopting a similar reasoning, form Eq. (30) we have In writing the above equation we used the relation (T 0 i ) nmc dx i = 2f HΦ +Ψ . By using equation (53) and (54), we can deduce By defining a function F as equation (55) can be rewritten as where C is an integration constant. So, from equation (56) we find For simplicity we define the following quantity As we have stated, brane parameters cannot be determined freely and are influenced by bulk physics through boundary conditions (see for instance [89] for details). In our case, the term 1 (58) and (59) which is a non-trivial contribution of the bulk on the brane is neglected in our forthcoming arguments. This means that we assume backreaction due to metric perturbations in the bulk can be neglected (we refer the reader to [73,90,91,92,93] for details and justification of this assumption). Based on the arguments provided in [73], our assumption of neglecting the bulk-brane interactions in this study is viable. Now with definition (59), Eq. (58) can be rewritten as So, the density perturbation is given by where the effects of the non-minimal coupling of the scalar field and induced gravity on the brane are hidden in the definition of G. The scale-dependence of the perturbations is described by the spectral index as The interval in wave number is related to the number of e-folds by the relation So we obtain . (63) The running of the spectral index in our setup is given by The tensor perturbations amplitude of a given mode when leaving the Hubble radius are given by In our setup and within the slow-roll approximation, we find The tensor spectral index is given by that in our model it takes the following form where Σ is defined as In terms of the slow-roll parameters, the tensor (gravitational wave) spectral index can be expressed as The ratio between the amplitudes of tensor and scalar perturbations (tensor-to-scalar ratio) is given by After a detailed calculation of the perturbations in Jordan frame, now we present an explicit example to see how previous equations work. 
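Before turning to that example, it is worth noting how the scale dependence and running defined above are evaluated in practice: the spectral index follows from the chain rule n_s - 1 = d ln A_s^2 / dN, since d ln k = dN at horizon crossing. The snippet below is a minimal numerical sketch of this step; it uses the standard 4D slow-roll amplitude A_s^2 proportional to V^3/V'^2 with κ_4 = b = 1 purely for illustration, so the function amp2 would have to be replaced by the full warped-DGP amplitude to reproduce our results.

```python
import numpy as np

# Sketch of the chain rule n_s - 1 = dln(A_s^2)/dln k with dln k = dN.
# Standard 4D quadratic case (kappa_4 = b = 1); the warped-DGP amplitude
# would simply replace amp2() below.
def V(phi):  return 0.5 * phi**2
def dV(phi): return phi

def amp2(phi):                      # A_s^2 ~ V^3 / V'^2 (prefactors drop in dln)
    return V(phi)**3 / dV(phi)**2

def dN_dphi(phi):                   # slow roll: dN = -(V/V') dphi (phi decreases as N grows)
    return -V(phi) / dV(phi)

phi = np.linspace(10.0, 20.0, 2001)
lnA2 = np.log(amp2(phi))
n_s = 1.0 + np.gradient(lnA2, phi) / dN_dphi(phi)

# cross-check against the analytic 4D expression 1 - 6*eps + 2*eta
eps = 0.5 * (dV(phi) / V(phi))**2
eta = 1.0 / V(phi)
print(n_s[1000], (1 - 6 * eps + 2 * eta)[1000])   # should agree closely
```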
In this part, we take a monomial form of f (ϕ) as where ξ is a constant parameter. Also we choose the following form of the original scalar field potential in Jordan frame with constant b. In which follows, we intend to study two types of potentials: quadratic potential with m = 1 and quartic potential with m = 2. Further, we shall compare the outcomes of these two cases. By using equations (72) and (73) we rewrite the slow-roll parameters (Eqs. (13) and (14)) as and Other inflation parameters such as n S , n T and r can be expressed in terms of ǫ and η. We neglect presentation of these quantities here due to very lengthy structure of these equations. In which follows we perform an analysis on these parameters space. A. Quadratic Potential: The first inflaton potential we analyze is the quadratic potential, the case with m = 1 in equation (73). The following figures are created as the outcome of our analysis of the model parameter space (we note that in all figures we have set κ 4 = κ 5 = b = 1). Since R = 6(Ḣ + 2H 2 ) and H is nearly constant in the inflation epoch, we can consider in our numerical analysis R to be approximately a constant and we set it to unity for simplicity. Nevertheless, we will consider a more general case by ignoring this assumption and adopting some reliable ansatz in our numerical analysis. We note also that all of our numerical analysis in this paper are performed for normal branch of this DGP-inspired model since this branch is ghost-free [94,95,96,97]. In the left panel of figure 1, behavior of A(ϕ) as a correction factor to the standard result is depicted versus the scalar field. In this figure (and almost in all figures of this paper) we consider three values for ξ: 1 12 , 1 8 and 1 6 . We note that ξ = 1 6 is the conformal coupling of the standard general relativity [29,30,31,98]. The left panel of figure 1 shows that as the scalar field decreases from the initial large values, A(ϕ) increases toward a maximum and then decreases. This maximum has different values for different ξ. As ξ increases, the value of the maximum decreases and occurs in smaller values of the scalar field. Also, for each value of ϕ, the value of A(ϕ) decreases as ξ increases. The behavior of A(ϕ) affects the behavior of the first slow-roll parameter ǫ. This can be seen in the right panel of figure 1. At large scalar field regime, the value of ǫ in warped DGP model is smaller than the corresponding value in the standard four dimensional model (here we note that in all of our figures the solid, black line curve represents the evolution of cor-responding parameter in the standard 4D model). As the scalar field decreases, ǫ increases. For some value of the scalar field, ǫ takes the same value in both warped DGP and the standard four-dimensional model. For this value of the scalar field, A(ϕ) = 1. But, at some value of scalar field, ǫ reaches its maximum and then decreases. During this evolution, the behavior of ǫ in the warped DGP model is similar to the standard 4D case. With more reduction of the scalar field, ǫ deviates from 4D behavior and as the scalar field decreases, ǫ decreases similar to the correctional factor A(ϕ). This deviation from the 4D behavior is due to the presence of the brane tension. If there is no brane tension (also, with the zero effective cosmological constant), we attain the pure DGP model and the slow roll parameter always behaves as what it does in 4D model. 
In the high-energy regime the scalar field dominates the brane tension, while in the low-energy regime, where the scalar field becomes small, the brane tension dominates the dynamics of the model and the deviation from the standard 4D behavior appears. As ǫ decreases, at the value of the scalar field where A(ϕ) returns to unity, ǫ again equals its 4D value. We note that, as for A(ϕ), the maximum value of ǫ depends on ξ: as ξ increases, the maximum becomes smaller and occurs at smaller values of the scalar field. This means that for larger ξ the 4D behavior persists over a wider range of field values. For all values of ξ it is possible for ǫ to reach unity, so inflation has a graceful exit in this setup without any additional mechanism. In our setup the slow-roll parameter reaches unity twice; since inflation requires ǫ, η ≪ 1, the first crossing of unity, which occurs at the larger field value and is approached from below, marks the end of inflation.

The behavior of the second correctional factor, B(ϕ), is broadly similar to that of A(ϕ): as the scalar field decreases, B increases to a maximum and then decreases (left panel of figure 2). The right panel of figure 2 shows the effect of the evolution of B on the second slow-roll parameter, η. In the warped DGP model η always increases as the scalar field decreases, as it does in the standard four-dimensional case; however, due to the presence of the correctional factor B, η in the warped DGP model does not increase as steeply as in the 4D model (right panel of figure 2). There is a maximum value of η at ϕ = 0, which is larger for smaller ξ. Since η can also attain unity, a graceful exit from the inflationary phase is guaranteed in this model. We note that in non-minimal inflation on the warped DGP brane within the Jordan frame with a quadratic potential, both ǫ and η are always positive.

FIG. 1. The evolution of the correctional factor A (left panel) and the first slow-roll parameter ǫ (right panel) versus the scalar field with a quadratic potential. The presence of the correctional factor A causes ǫ to behave as in the standard 4D case in the large-field regime; in the small-field regime the behavior of ǫ deviates from the standard 4D behavior.

FIG. 2. The evolution of the correctional factor B (left panel) and the second slow-roll parameter η (right panel) versus the scalar field with a quadratic potential. The correctional factor causes η to deviate from the standard 4D behavior in the small-field regime. There is a maximum value of η at ϕ = 0.

The next parameters we consider are the scalar and tensor spectral indices, n_s and n_T respectively. Figures 3 (left panel) and 4 show their behavior versus the scalar field, in which the influence of the first and second slow-roll parameters is evident. At large values of the scalar field both indices behave like their counterparts in the standard four-dimensional model, i.e. both spectral indices decrease as the scalar field decreases.
However, at some values of the scalar field, n s and n T reach a minimum and after that they increase, in contrast with the standard 4D case. The minimum value of these parameters decreases by reduction of ξ and take places in larger values of the scalar field. So, for larger values of ξ, the standard behavior of n s and n T last in larger domain of ϕ values. The general behavior of n s and n T is very similar to ǫ and η: similarity with the standard four-dimensional case in the large scalar field regime and deviation from it in the small scalar field regime. In the right panel of figure 3 we see the evolution of the running of the scalar spectral index, α, versus the scalar field. In the large scalar field regime, the behavior of α is similar to the corresponding parameter in the standard 4D case and decreases by decreasing the scalar field value. But, at some value of the scalar field, α reaches its minimum value and then increases toward a maximum and after that, it decreases again. The minimum value of α take places in smaller scalar field values by increasing ξ. So, as ξ increases, the 4D behavior of α lasts in larger domain of the scalar field. The last parameter that we are going to consider, is the ratio between the amplitudes of the tensor and scalar perturbations (r). We have shown the behavior of this ratio versus the scalar field in figure 5. Its behavior is similar to the behavior of ǫ in general. As the scalar field decreases, r increases toward a maximum in some values of the scalar field. Then, it begins to decrease. In other words, its evolution in the large scalar field region obeys the standard 4D behavior and in the small scalar field region, it evolves differently. Similar to other parameters, the extremum value of r depends on the value of ξ. For larger ξ, the extremum value of r becomes smaller and take places in smaller value of ϕ. So, for larger value of ξ, the ratio between the amplitudes of the tensor and scalar perturbations in warped DGP model, in larger domain of large ϕ, behaves as 4D model one. Now we proceed to calculate some inflation parameters with a quadratic potential at the time that physical scales crossed the horizon. To find the value of the scalar field at the end of inflation, we set one of the slow-roll parameters, ǫ or η, equal to unity to get ϕ f . To find the value of the scalar field at the time of horizon crossing, we have to adopt another strategy: the horizon crossing occurred about 60 e-folds before the end of the inflation. So the definition of the number of e-folds helps us to find the value of the scalar field at the horizon crossing time, ϕ hc . Now we rewrite the Friedmann equation (12) in the high energy limit (ρ ≫ λ) as follows FIG. 4. The evolution of the tensor to scalar spectral indices ratio versus the scalar field with a quadratic potential. The behavior of r in the large field regime is similar to the standard 4D one. FIG. 5. The evolution of the tensor to scalar spectral indices ratio versus the scalar field with a quadratic potential. The behavior of r in the large field regime is similar to the standard 4D one. So, the number of e-folds by using equation (21) can be expressed as We must solve the above integral in order to find ϕ hc . In appendix A, we have presented the solution of the integral (77), where we assumed ϕ hc ≫ ϕ f . Then we found ϕ hc from that solution and substitute it in the equations (63), (64) and (71) in order to find the values of these parameters at the time of the horizon crossing. 
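To make the workflow just described concrete, the following sketch carries out the same three steps (locating ϕ_f from ǫ = 1, integrating the e-fold relation back to N ≈ 60, and evaluating the spectral quantities at ϕ_hc) in the standard 4D limit with κ_4 = b = 1 for V = (b/2m)ϕ^(2m). It only illustrates the numerical procedure; the full warped-DGP expressions (63), (64) and (71) are far lengthier and are not reproduced here.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Horizon-crossing procedure in the standard 4D limit (kappa_4 = b = 1)
# for V = phi**(2m)/(2m); shows only the numerical workflow.
def run(m, N_target=60.0):
    V   = lambda p: p**(2 * m) / (2 * m)
    dV  = lambda p: p**(2 * m - 1)
    d2V = lambda p: (2 * m - 1) * p**(2 * m - 2)
    eps = lambda p: 0.5 * (dV(p) / V(p))**2
    eta = lambda p: d2V(p) / V(p)

    phi_f = brentq(lambda p: eps(p) - 1.0, 0.1, 10.0)        # end of inflation
    N = lambda p: quad(lambda x: V(x) / dV(x), phi_f, p)[0]   # e-folds remaining until the end
    phi_hc = brentq(lambda p: N(p) - N_target, phi_f + 0.01, 100.0)

    n_s = 1.0 - 6.0 * eps(phi_hc) + 2.0 * eta(phi_hc)
    r   = 16.0 * eps(phi_hc)
    return phi_hc, n_s, r

for m in (1, 2):
    print(m, run(m))   # m=1: n_s ~ 0.967, r ~ 0.13 ; m=2: n_s ~ 0.951, r ~ 0.26
```

For m = 1 this returns the familiar 4D values n_s ≈ 0.967 and r ≈ 0.13, and for m = 2 it gives n_s ≈ 0.95 and r ≈ 0.26; these serve only as the 4D reference points against which the warped-DGP values discussed below can be compared.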
Our analysis shows that although for all values of 0 ≤ ξ ≤ 1 6 , in the warped DGP model with a quadratic potential in Jordan frame we have 0.966 ≤ n s ≤ 1 (so, the spectrum of the scalar perturbation is nearly scale invariant and red-tilted), but just for 1 8 we arrive at r ≈ 0.22 which is observationally more reliable [99]. In this case the value of r at the time of horizon crossing, decreases by decreasing ξ. Table I shows the value of the n s , r and α when the physical scales crossed the horizon for three different values of ξ. For comparison we have listed also the corresponding recently realized observational data. Note that the observational parameters are defined at k 0 = 0.002 Mpc −1 where k 0 denotes the value of k when universe scale crosses the Hubble horizon during inflation. Also these parameters are obtained via WMAP+BAO+H 0 Mean data, where Mean refers to the mean of the posterior distribution of each parameter. The quoted errors for n s show the 68% confidence levels (CL) (see [99] for details). As the table shows, there is relatively good agreement between our results and recent observation. But note that the running of the spectral index in our setup is extra-ordinary close to zero. It is negative and in this respect viable. n s and r are in good agreement with observation. The second potential we consider is the quartic potential i.e. the case with m = 2 in equation (73). The left panel of figure 6 shows the behavior of the correctional factor, A versus the scalar field in this case. In the large scalar field regime, A increases by reduction of the scalar field. So, in this situation ǫ increases and its behavior mimics the behavior of ǫ in the standard 4D case (see the right panel of figure 6). However, the growth of A by reduction of the scalar field stopes at some value of the scalar field (which attains larger values for smaller ξ) and then it decreases. Similarly, by reduction of the scalar field ǫ reaches a maximum and its growth stopes. This maximum has larger value for smaller ξ. By further reduction of the scalar field, it deviates from the 4D behavior and decreases by reduction of the scalar field strength. In contrast with the quadratic potential where for small values of the scalar field the minimum of both A and ǫ were located at ϕ min = 0, here both A and ǫ have minimums located at some non-vanishing values of the scalar field. In fact, for quartic potential in this setup, ǫ has relatively more complicated structure than the quadratic case in the small scalar field regime. In the scale adopted in figure 6, this behavior is not so evident, but it shall be more evident in figures of n s and n T versus the scalar field (as we will see later). Note that as ξ increases, ǫ mimics the 4D behavior in a relatively wider domain of ϕ values. In the next step, we consider the evolution of the correctional factor B and the second slow-roll parameter η versus ϕ as shown in figure 7. The general behavior of B and η is similar to A and ǫ. But, since the minimum value of these parameters occurs at ϕ = 0, only in the large scalar field regime these parameters evolve similar to the corresponding parameters in 4D case. It should be noticed that for ξ = 1 6 (the conformal coupling), the correctional factor B is always less than unity. It means that for this value of ξ, the value of η in warped DGP model is always smaller than the value of this parameter in 4D model. 
We note also that in contrast with the quadratic potential case, the slow-roll parameters can be negative in some values of the scalar field. In figures 8 (the left panel) and 9, we have shown the behavior of the scalar and tensor spectral index versus the scalar field. As we expected from the evolution of ǫ, n s and n T at two extremal regimes of the scalar field evolve as they do in the standard four-dimensional model. At these two extremal regimes, n s and n T evolve from larger values to the smaller values by reduction of the scalar field. For other (intermediate) values of the scalar field, these parameters increase as the scalar field decreases. The behavior of the running of the scalar spectral index is shown in the right panel of figure 8. In the large scalar field regime, α behaves as it does in 4D and decreases by reduction of the scalar field. This 4D behavior lasts in a wider domain of the scalar field for the larger values of ξ. But, at some value of the scalar field, α reaches its minimum and then increases to a maximum. After In the large and small scalar field regime, the scalar spectral index and its running decrease by reduction of the scalar field (as the 4D case). FIG. 9. The evolution of the tensor spectral index versus the scalar field with a quartic potential. In two extremal region of the scalar field, the tensor spectral index decreases by reduction of the scalar field (as the 4D case). The right panel shows the behavior of nT in very small values of the scalar field as a special feature of the model with quartic potential. that, as scalar field decreases, there are other minimum and maximum values for α, providing a relatively complicated structure relative to the quadratic potential case. This feature is shown in the right panel of the figure 9 by adopting a smaller scale than the left panel one. Evidently there is a different structure of n T relative to the quadratic potential case where there was no minimum other than ϕ min = 0 in the small field regime. Next we consider the tensor-to-scalar ratio, r. The result of this consideration is shown in figure 10. For quartic potential, r has more complicated behavior relative to the quadratic potential case in the small scalar field regime similar to the behavior of ǫ, n s and n T in this regime. In two extremal regimes of the scalar field (large and small scalar field regimes), r in the warped DGP model increases as the scalar field decreases. This is the same as the behavior of r in the standard 4D case. For other (intermediate) values of the scalar field, it decreases by reduction of the scalar field. Some inflation parameters calculated for quartic potential at the time that physical scales have crossed the horizon are shown in table II. Similar to the quadratic potential case, the Friedmann equation and the number of e-folds are given via equations (76) and (77) but now with quartic potential. The solution of integral (77) with a quartic potential is presented in appendix B. By using that result and finding ϕ hc for this case, we obtain the values of the scalar spectral index, its running and the tensor to scalar ratio at the time of horizon crossing. The results for three values of ξ is shown in table II. We see that although for different values of ξ the scalar spectral index is nearly scale invariant and red-tilted, the run- ning of the spectral index and the tensor to scalar ratio increase by reduction of ξ. 
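As a trivial aid to reading Tables I and II against the data, one can encode the observational bounds quoted in this paper (n_s = 0.968 ± 0.012 at 68% CL, r < 0.24 at 95% CL and α = −0.022 ± 0.020, all at k_0 = 0.002 Mpc^-1) and test any candidate triple against them; the triple used in the example call below is hypothetical.

```python
# Consistency check against the WMAP+BAO+H0 bounds quoted in the text
# (k0 = 0.002 Mpc^-1): n_s = 0.968 +/- 0.012, r < 0.24, alpha = -0.022 +/- 0.020.
def consistent(n_s, r, alpha, n_sigma=2):
    ok_ns    = abs(n_s - 0.968) <= n_sigma * 0.012
    ok_r     = r < 0.24
    ok_alpha = abs(alpha - (-0.022)) <= n_sigma * 0.020
    return ok_ns and ok_r and ok_alpha

# hypothetical example triple, not taken from Tables I/II
print(consistent(0.967, 0.13, -0.0006))
```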
We note that the corresponding observational data and the conditions for calculations of these quantities are the same as what we have done for production of table II. Before presenting our analysis in the Einstein frame, we note that in our previous numerical analysis we argued that since H is nearly constant in inflation epoch, the Ricci scalar R = 6(Ḣ+2H 2 ) is also nearly constant in this epoch. With this assumption, we have set R = 1 in our numerical analysis. Now we consider a more general case to have more generic results: we consider the following ansatz for scale factor and scalar field a(t) = a 0 e νt , ϕ = ϕ 0 e −ϑt where ν and ϑ are positive constants. Note that these ansatz are chosen by taking into account the inflationary nature of the solutions for scale factor and a decreasing nature of the scalar field. Applying these ansatz to equation (11) and performing our numerical analysis for quadratic and quartic potentials with ν = 10, a 0 = ϕ 0 = 1 and ϑ = 1, we find for n s the results that are shown in the left panel of figure 11. These results are more generic than the case that we set the Ricci scalar to be a constant due to constancy of H in inflation era. Also, the right panel of figure 11 shows the results of our numerical calculation of the tensor-to-scalar ratio, r, for quadratic and quartic potentials adopting the above ansatz. Comparison of these more general results with the corresponding results obtained by a constant Ricci scalar shows that the results obtained by assumption of a constant Ricci scalar are actually reasonable in some sense. In fact, this comparison shows that the assumption of a constant Ricci scalar due to constancy of the Hubble parameter in inflation epoch is relatively a viable assumption. We have checked also the situation with ansatz where we assume ν < 1, t 0 > 0 (see for instance [100]) and δ > 0. Although this is not an exponentially solution of the scale factor, but the previous argument is applicable more or less even with this ansatz (for instance with ν = 0.9 and δ = 3). We note that the general case without adopting ansatz is far more difficult to find analytical or even numerical results. V. INFLATION ON THE WARPED DGP BRANE IN EINSTEIN FRAME Up to now, we have considered the situation in Jordan frame. We can pass from Jordan to Einstein frame by making the following conformal transformation [22,24,41] q µν = Ω 2 q µν , where the parameter Ω is defined as Under this transformation, action (1) in Einstein frame becomes FIG. 11. The evolution of the scalar spectral index (left panel) and tensor-to-scalar ratio (right panel) for quadratic and quartic potentials with adopted exponential ansatz and ξ = 1 6 . Now, we define a new scalar fieldφ in Einstein frame as follows and the corresponding potentialV defined in Einstein frame isV The general condition for flatness of the potential at the large field limit is The is required for the potential to be bounded from below and the location of the global minimum is well localized around the small field value. Even though the condition (83) actually determines the flatness of the potential at the large field limit, it is not necessarily required in generic inflation models. Depending on the shape of the potential, it might still be possible to have sufficient time of exponential expansion for some finite region of field value, ϕ [101]. The generalized cosmological dynamics of this setup in Einstein frame is given by the following Friedmann whereˆrefers to parameters written in Einstein frame. 
In Friedmann equation (84) we definedλ = 1 (1+κ 2 4 f (ϕ)) 2 λ andâ = (1 + κ 2 4 f (ϕ)) 1/2 a. Also, ρφ the energy-density corresponding to the now minimally coupled scalar field in Einstein frame is defined as follows and the corresponding pressure is given by wheret = (1 + κ 2 4 f (ϕ)) 1/2 t . In this step, similar to the Jordan frame case, we introduce the effective cosmological constant on the brane in Einstein frame as followŝ By putting the effective cosmological constant equal to zero, we find So, we can rewrite Friedmann equation (84) as followŝ and the second Friedmann equation can be expressed as The equation of motion of the scalar field in Einstein frame now is given by In the slow-roll approximation where dφ dt 2 ≪V (φ) and d 2φ dt 2 ≪ |3Ĥ dφ dt |, energy density and equation of motion for the scalar field take the following forms respectivelŷ Now the Friedmann equation can be expressed as followŝ We define the slow-roll parameters in Einstein frame aŝ In the slow-roll approximation, from Eq. (94) we find and In equations (97) and (98) In the next section we consider the scalar perturbation of the metric in Einstein frame. VI. PERTURBATIONS IN EINSTEIN FRAME The effective covariant equations on the brane in a warped DGP braneworld scenario and in Einstein frame are given byĜ wherê andτ µν is the total stress-tensor on the brane and is defined asτ T µν , the energy-momentum tensor of the scalar field in Einstein frame which now is minimally coupled to the induced gravity on the brane, is given by (compare this result with corresponding equation in Jordan frame, Eq. (25)) Also we havê Since in Einstein frame dŝ 2 = Ω 2 ds 2 , the scalar metric perturbations of the FRW background (Eq. (27)) is translated to whereâ(t) is the scale factor on the brane in Einstein frame,Φ =Φ(t, x) andΨ =Ψ(t, x) are the metric perturbations. For the above perturbed metric one can obtain the temporal part of the perturbed field equations in Einstein frame: In the last equation, δπÊ is anisotropic stress perturbation in the Einstein frame. In Eqs. (106) and (107),ρ ef f andp ef f can be obtained from the standard Friedmann where δÊ 0 0 can be calculated from the following relation that is written in Einstein frame. Also δρφ and δpφ take the following forms These equations in the slow-roll regime reduce to δρφ = dV dφ δφ and δpφ = − dV dφ δφ respectively. By perturbing the equation of motion of the scalar field (91) one can find In Einstein frame and within the warped DGP model, we should redefine equation (43) aŝ whereΨ is an Einstein frame quantity. Now, by using the energy-conservation equation for linear perturbations, (119) we find the variation ofζ with respect to the conformal time as Similar to the Jordan frame case, we split the pressure perturbations into adiabatic and entropic parts as follows The non-adiabatic part is δp nad = dp ef f dtΓ , whereΓ is defined asΓ . Using equations (106)-(108) we rewrite this relation as follows whereK ,Ĵ andÎ are defined aŝ Here we are going to obtain scalar and tensorial perturbation in our model. 
First let's rewrite equation (117) in the slow-roll approximation at the large scales as follows Also for equation (108) equation (131) can be written as followŝ equation (134) can be rewritten as So, density perturbation is given bŷ The scale-dependence of the perturbations is described by the spectral index aŝ The interval in wave number is related to the number of e-folds by the relation d lnk(φ) = dN (φ), so we obtain The running of the spectral index in our setup, in Einstein frame, is given as followŝ The tensor perturbations amplitude of a given mode when leaving the Hubble radius are given bŷ In our setup and within the slow-roll approximation, we find The tensor spectral index is given bŷ which in Einstein frame, it takes the following form whereΣ is defined aŝ Finally, the tensor-to-scalar ratio in Einstein frame is given byr Once again and similar to previous section, in which follows we present an explicit example to see how our equations in Einstein frame work. In this section, we use the same form of f and V defined in equation (72) and (73). From equation (82), the potential in Einstein frame takes the following form With this form of the potential, we get the flat potential in the large field regime in Einstein frame (see figure 23). As the Jordan frame case, we study two types of potential: quadratic potential with m = 1 and quartic poten-tial with m = 2. In the large ϕ regime, the variation of ϕ versus ϕ attains the following forms (149) Now, in order to obtain the slow-roll parameters in Einstein frame, we should rewrite these parameters in terms of the scalar field and corresponding potential in Einstein framê where by definition and where we definedλ = and Onceǫ orη reach the unity, the inflationary phase terminates. A. Quadratic Potential: Similar to our analysis in Jordan frame, we firstly consider a quadratic potential to analyze the outcome of the model in Einstein frame. We show that with this potential in Einstein frame, there are some differences with the Jordan frame case. These differences may be a footprint of the physical non-equivalence of these two frames in this braneworld setup (we return to this issue later). In figure 12 we depicted the behavior of the correctional factor andǫ versus the scalar field. The left panel of this figure shows that as the scalar field decreases, in two regimes (large and small scalar field regimes) decreases. But there is an intermediate regime where increases by reduction of the scalar field. The behavior of affects the evolution ofǫ. This can be seen in the right panel of figure 12. This panel shows that only in intermediate regime of the scalar field,ǫ obeys nearly the 4D behavior and in the large and small scalar field regimes, it deviates from 4D behavior drastically. However, as we have shown in the previous section, in Jordan frame with this type of potentialǫ in the large scalar field regime obeys the 4D behavior. It should be noticed that there is a relative maximum for andǫ which its value and location depends on ξ. As ξ increases, this maximum becomes smaller and take places in smaller values of the scalar field. This maximum is larger for smaller values of ξ. Also, there is a minimum value for the slow-roll parameter. As ξ increases, this minimum decreases and take places in larger values of the scalar field. We note that for smaller ξ, the 4D behavior lasts in wider domain of the scalar field values. The behavior of the correctional factorB andη versus the scalar field is shown in figure 13. 
Due to the effect of B, the second slow-roll parameter in the large and small scalar field regime deviates from the standard 4D behavior. But, in the intermediate regime of the scalar field, η behaves similar to what it does in 4D case. As ξ decreases, this 4D behavior lasts in wider domain of the scalar field values. Bothǫ andη can reach unity and therefore with a quadratic potential in Einstein frame the inflation can ends gracefully. We note that in contrast with Jordan frame case, with a quadratic potential in Einstein frame, the second slow-roll parameter can be negative in some values of the scalar field. Other important parameters in an inflationary paradigm are the scalar and tensor spectral indices. We have depicted the evolution ofn s andn T versus the scalar field in figures 14 (the left panel) and 15 respectively. As these figures show, in an intermediate regime of the scalar field these parameters decrease by reduction of the scalar field as what they do in the standard four-dimensional model. However, in the large and small scalar field regimes,n s andn T increase as the scalar field decreases. As ξ decreases, the 4D behavior of n s and n T last in larger domain of the scalar field. In the left panel of figure 14 we have shown the evolution of the running of the scalar spectral index versus the scalar field. As this figure shows, in an intermediate field regime, α decreases by reduction of the scalar field similar to what it does in 4D case. This behavior stopes at some value of the scalar field where α reaches a relative minimum. Notice that, as ξ decreases, the 4D behavior of α lasts in wider domain of the scalar field. The last parameter we consider is the tensor to scalar ratio,r. Figure 16 shows the behavior of this parameter versus the scalar field. As the scalar field decreases,r decreases until a minimum at some value of the scalar field is reached and then it increases again. The increment of r stopes at some value of the scalar field and after that, r decreases again. This means that in an intermediate field regime, the behavior of the tensor to scalar ratio in Einstein frame is similar to the corresponding parame-ter in 4D case and in the large and small field regime, it deviates from the standard 4D behavior drastically. We note that as ξ gets smaller, the 4D behavior ofr lasts in a wider domain of the scalar field. Now, as the Jordan frame case, we proceed to calculate some inflation parameters with a quadratic potential at the time of horizon crossing. To find the value of the scalar field at the time of horizon crossing, we treat as what we have done in the previous section. We rewrite the Friedmann equation in high energy limit (ρ ≫λ) as followsĤ So, the number of e-folds by using equation (99) can be expressed aŝ (157) In appendix C, we have presented the solution of integral (157) (where as before, we have assumedφ hc ≫φ f ). Then we foundφ hc from that solution and by using equations (139), (140) and (146) we find the values of the scalar spectral index, its running and the tensor to scalar ratio at the time of horizon crossing. Our analysis shows that although for different values of ξ, the scalar spectral index is nearly scale invariant, it is red-tilted for ξ = 1 6 and ξ = 1 12 and blue-tilted for ξ = 1 8 . Also, by reduction of ξ, the running of the spectral index increases and the tensor to scalar ratio decreases. Table III shows Now, as what we have done in Jordan frame, we analyze the model parameter space with quartic potential in Einstein frame. 
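A quick way to see why the quartic case will admit large-field inflation in the Einstein frame while the quadratic one does not (cf. figure 23 and section VIII) is to examine the large-field limit of the Einstein-frame potential directly. The sketch below assumes f(ϕ) = ξϕ² (our reading of the monomial coupling (72)) and the conformal rescaling V̂ = V/(1 + κ_4² f)² implied by Eq. (82), with κ_4 = b = 1 and ξ = 1/6.

```python
import sympy as sp

# Large-field limits of the Einstein-frame potential
# V_hat = V / (1 + xi*phi**2)**2 with V = phi**(2m)/(2m), kappa_4 = b = 1.
phi, xi = sp.symbols('phi xi', positive=True)

def V_hat(m):
    V = phi**(2 * m) / (2 * m)
    return V / (1 + xi * phi**2)**2

for m in (1, 2):
    lim = sp.limit(V_hat(m), phi, sp.oo)
    print(m, sp.simplify(lim.subs(xi, sp.Rational(1, 6))))
# m=1 (quadratic): limit 0 -> the potential passes through a maximum and decays
# m=2 (quartic):   limit 1/(4*xi**2) = 9 -> flat plateau, slow roll possible
```

The quadratic potential is driven to zero at large ϕ after passing through a maximum, whereas the quartic one approaches the plateau 1/(4ξ²), so the slow-roll conditions can still be satisfied there.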
Figure 17 shows the behavior of correctional factor,Â, and the slow roll parameter,ǫ, versus the scalar field. As the scalar field decreases, increases to a maximum value and then decreases. As we have mentioned previously, the behavior of the correctional factor affects the evolution of the corresponding inflation parameters. Nevertheless, in spite of the presence of this factor, in the large scalar field regimeǫ behaves similar to what it does in 4D case and increases by reduction of the scalar field. It should be noticed that for larger values of ξ, the 4D behavior ofǫ lasts in wider domain of the scalar field values. In figure 18 we have depicted the behavior ofB and η versus the scalar field. The general behavior ofB and η is similar to the behavior of andǫ. In spite of the effect of the correctional factor, the behavior of the slowroll parameter in the large scalar field regime is similar to the behavior of the corresponding parameter in 4D case. In other words, in large scalar field regime in this case, the braneworld nature of the model can be neglected approximately. As ξ becomes larger, η in larger region of the scalar field has the 4D behavior. Also, in some values of the scalar field whereB is negative,η has the negative values. Bothǫ andη can attain the unit value. So, the graceful exit from the inflationary phase in this model is guaranteed. In the left panel of figure 19, we have shown the evolution of the scalar spectral index versus the scalar field. As figure shows, in the large and small scalar field regimes, n s decreases by reduction of the values of the scalar field similar to what it does in 4D case. In the intermediate regime of the scalar field, this quantity deviates from the 4D behavior and increases as the scalar field decreases. Figure 20 shows the evolution of the tensor spectral index versus the scalar field. In the large scalar field regime, n T evolves similar to the evolution of the corresponding parameter in 4D model and decreases by reduction of the scalar filed. But in the small scalar field regime, it increases as the scalar field decreases. We note that as ξ increases, the 4D behavior of both scalar and tensor spectral indices last in wider domain of the scalar field values. In the right panel of figure 19, we have depicted the evolution of the running of the scalar spectral index versus the scalar field. Similar to other considered parameters in this subsection,α has the 4D behavior in the large scalar field regime. For larger ξ, this 4D behavior lasts in larger domain of the scalar field. By more reduction of the scalar field,α reaches a maximum and then deviates from the 4D behavior. The last parameter which we consider is the tensor to scalar ratio,r (figure 21). As the scalar field decreases,r increases similar to what it does in 4D. The increment of r stopes at a maximum value which for larger ξ is smaller and take places in smaller values of the scalar field (this means as ξ increases, the 4D behavior ofr lasts in wider domain of the scalar field values). After that, it deviates from 4D behavior and decreases by reduction of the scalar field. Now we calculate some inflation parameters with a quartic potential at the time of horizon crossing. The Friedmann equation and the number of e-folds are given by equations (156) and (157), but here the potential is a quartic potential. The solution of integral (157) with a quartic potential is presented in appendix D (by assumingφ hc ≫φ f ). 
By findingφ hc from that solution, we obtain the value of the scalar spectral index and the tensor to scalar ratio at the time of horizon crossing. The results for three values of ξ are shown in table IV. With a quartic potential in Einstein frame, the value of n s at the time of horizon crossing is nearly scale invariant and red-tilted. Also, the tensor to scalar ratio at the time of horizon crossing decreases by reduction of ξ and the running of the spectral index increases by reduction of ξ. VIII. A SPECIAL CASE The curvature perturbation in Einstein frameζ, remains constant on large scales, but only so long as the condition (dρ ef f )/(dt) ) is satisfied. This means that in this situation, the perturbations are adiabatic [84]. In the warped DGP model and within the slow-roll approximation, this condition is satisfied only when we neglect the contribution of the dark radiation term in our analysis. If we work in the large field regime, (φ ≫ κ −1 4 andf (φ) ≫ κ −2 4 ) , so this term is really negligible. One can find the curvature perturbation on uniform density hypersurfaces in terms of the scalar field fluctu-ations on spatially flat hypersurfaces as followŝ Also the field fluctuations at Hubble crossing (k =âĤ) in the slow-roll limit are given by [8,81,84] The scalar curvature perturbation amplitude of a given mode when re-entering the Hubble radius is given bŷ So, in the slow-roll approximation, we find FIG. 19. The evolution of the scalar spectral index (left panel) and the running of the spectral index (right panel) versus the scalar field with a quartic potential in Einstein frame. In the large and small scalar field regime, the scalar spectral index and its running decrease by reduction of the scalar field (as the 4D case). FIG. 20. The evolution of the tensor spectral index versus the scalar field with a quartic potential in Einstein frame. In the large scalar field regime, the tensor spectral index decreases by reduction of the scalar field (as the 4D case). The scale-dependence of the perturbations is described by the spectral index aŝ The interval in wave number is related to the number of e-folds by the relation d lnk(φ) = dN (ϕ), so we obtain where the parameter Υ is defined as In terms of the slow-roll parameters, the spectral index becomesn The tensor perturbations amplitude of a given mode when leaving the Hubble radius are given bŷ Therefore, in this warped DGP scenario and within the slow-roll approximation in Einstein frame we find The tensor spectral index that is given bŷ in our framework takes the following form where Σ is defined as (170) In terms of the slow-roll parameterǫ, the tensor (gravitational wave) perturbation finds the following expression The ratio between the amplitudes of tensor and scalar perturbations is given bŷ So, the standard consistency condition between this ratio (i.e, the relative amplitude of the two spectra) and the slow-roll parameterǫ is modified by the factor in the parenthesis. A. large scalar field regime In the large scalar field regime, a quartic potential in Einstein frame tends to a constant (see equation (147)). So, the brane affects the standard form of the slow-roll parameters with a constant factor. 
In the large scalar field regime, since the denominator of the terms includinĝ λ andâ are negligible, so from equations (89), (95) and (96) we obtain the slow-roll parameters in the large field limit as followŝ In order to find the values ofn s andr at the time of horizon crossing, we should solve the integral (99) in the large scalar field regime (where in this regime a quartic potential in Einstein frame tends to a constant). The solution of the integral iŝ If we assume ϕ hc ≫ ϕ 2 f , then we find the value of ϕ hc as ϕ = 2 N 3κ 4 ξ + 1 6 Now,ǫ andη are defined aŝ ǫ = 27(ξ + 1/6) From equations (139) and (146) and by using equations (177) and (178) we find the scalar spectral index and the tensor to scalar ratio at ϕ = ϕ hc . In figure 22 we have depicted the scalar spectral index and tensor to scalar ratio in a plot for various values of ξ (we started with ξ = 1 10 corresponding to the first point of the left hand side and then in each step, we increased the value of ξ by 1 10 ). This figure shows that as ξ increases, the scalar spectral index becomes larger but the tensor to scalar ratio gets smaller. By increasing ξ,n s andr tend to 0.906 and 0.002 respectively (see a similar treatment for the non-minimal Higgs boson as the inflaton in Einstein frame in [101]). It should be noticed that we don't consider the case with a quadratic potential here, because this potential in the large scalar field regime tends to zero and there is no inflation for the model in this regime. It is due to the behavior of the quadratic potential in Einstein frame (see figure 23). In Einstein frame, there is a maximum for a quadratic potential that the slow-roll conditions cannot be satisfied beyond it. But for a quartic potential in Einstein frame the situation is different. In the large scalar field regime, the quartic potential tends to a constant and of course the slow-roll conditions can be satisfied. IX. CONCLUSION In this paper we have studied the cosmological inflation on the warped DGP braneworld, where a scalar field is non-minimally coupled to the induced gravity term on the brane. We considered the warped DGP setup since this braneworld scenario contains both UV and IR modifications of the general relativity simultaneously. We have studied the inflationary dynamics on the brane both in Jordan and Einstein frame. We have calculated the inflation parameters and perturbations in these two frames with details. In Jordan frame, the brane world nature of the setup and the effects of the non-minimal coupling between the scalar field and induced gravity on the brane is manifest through the existence of some correctional factors in slow-roll parameters. In Einstein frame, the effect of the non-minimal coupling is implicit in the field equations and can be manifested through the conformal transformation between two frames. The perturbations in these two frames are studied with details. The adiabatic perturbations are generated if the inflaton field is the only field in inflation period. But, if there is more than one scalar field in a model or a scalar field interacts with other fields such as the induced gravity on the brane, the isocurvature perturbations are generated. In our case and in Jordan frame, the presence of the non-minimal coupling between the inflaton field and induced gravity on the brane and also the presence of the non-local effects through the projection of the Weyl tensor on the brane lead to a non-vanishingζ which affects the primordial spectrum of perturbations. 
However, in Einstein frame (despite implicit presence of the nonminimal coupling), isocurvature perturbations are generated due to the presence of the non-local effects through the projection of the Weyl tensor on the brane. If we neglect this term in Friedmann equation, the perturbations become adiabatic since neglecting the non-local effect leads to the condition δφ (dφ)/(dt) = δρ ef f (dρ ef f )/(dt) to be satisfied. By adopting two types of potential (V = b 2m ϕ 2m ; m = 1, 2), we have performed numerical analysis of the model parameters space in each case, the results of which are shown in numerous tables and figures. We note that all of our numerical analysis are done for normal branch of this DGP-inspired model which is essentially ghostfree. In Jordan frame, both for quadratic and quartic potential, all considered parameters (ǫ, η, n s , n T , α and r) in the large scalar field regime evolve similar to what they do in 4D. In this frame, as ξ becomes larger, the 4D behavior of these parameters lasts in a wider domain of the scalar field values. By more reduction of the scalar field, the evolution of the parameters deviate from the standard 4D behavior. It seems that this deviation from the standard 4D behavior is due to the presence of the tension term in the correctional factors. Of course, with a quartic potential, the parameters experience another standard 4D behavior in the small scalar field regime. But, their values is very different from the values of the corresponding parameters in 4D case. In Einstein frame, the situation for two types of potentials is much different. With a quartic potential in Einstein frame, the considered parameters in the large scalar field regime have standard 4D behavior (similar to the quartic potential in Jordan frame). In the small scalar field limit, the evolution of the parameters deviate from standard 4D behavior. In this case, as ξ increases, the parameters mimic the standard 4D behavior in a relatively wider domain of the scalar field values. Due to the shape of a quadratic potential in Einstein frame, it is impossible to have inflation in the large scalar field regime. But, when the scalar field is confined to an intermediate regime, the slow-roll conditions can be satisfied and the inflationary phase can be occurred. We note that with a quadratic potential in Jordan frame, the inflationary phase occurs in the large scalar field regime. In this case, the parameter in the intermediate regime have the 4D behavior and in the large and small scalar field regimes they deviate from the standard 4D behavior considerably. Also, as ξ increases, the 4D behavior of all inflation parameters lasts in a wider domain of the scalar field values. In general, with a quartic potential in both Jordan and Einstein frame and with a quadratic potential just in Jordan frame, by increasing of ξ the 4D behavior of parameters lasts in larger domain of the scalar field values. But, with a quadratic potential in Einstein frame, 4D behavior lasts in a wider domain of the scalar field for the smaller values of ξ. We noticed that our analysis shows that although with a quadratic potential in Jordan frame the slow-roll parameters are always positive, with a quartic potential these parameters can be negative for some values of the scalar field. Of course, in Einstein frame both with quadratic and quatic potential, the slow-roll parameters can get negative values. We have also calculated some inflation parameters at the time that physical scales had crossed the horizon. 
We note that for this purpose, our analysis have been performed in the high energy limit (ρ ≫ λ). Also, we have considered three values of ξ in each case. The results of our analysis shows that in the warped DGP model with a quadratic potential in Jordan frame, the scalar perturbation is nearly scale invariant and red-tilted (0.966 ≤ n s ≤ 1). In this case, the value of the tensor to scalar ratio at the time of the horizon crossing is smaller than 0.24 (except for ξ = 1 6 which has r ≃ 0.31 ). The running of the scalar spectral index at the time of horizon crossing, is very close to zero but it is negative as usual. So, with a quadratic potential in Jordan frame, there is relatively good agreement between our results and recent observation, specially for ξ = 1 n s = 0.968±0.012, r < 0.24(95%CL and −0.022±0.020). With a quartic potential in Jordan frame, for ξ = 1 6 , the scalar perturbation is quite scale invariant. But, for other ξ, it is nearly scale invariant and red-tilted. With this potential, both the running of the spectral index and the tensor to scalar ratio are very close to zero. Then, we have found the value ofn s ,r andα at the horizon crossing time in Einstein frame. With a quartic potential in Einstein frame, the results are similar to the quartic potential in Jordan frame. The scalar perturbation is nearly scale invariant and red-tilted. Also, the running of the scalar perturbation and the tensor to scalar ratio are very close to zero. With a quadratic potential, α andr are very close to zero too. With this potential in Einstein frame, the scalar perturbation is nearly scale invariant. But, for ξ = 1 8 , it is blue-tilted and for other ξ, it is red-tilted (see table V which summarizes all of these points). A careful inspection of these results shows that by adopting a quadratic potential and working in Jordan frame, the results of our analysis are more reliable in comparison with recent observations. On the other hand, as table V shows, the two frames are not equivalent generally on physical ground. This is an important results. We emphasize that the distinction between the various conformal frames would be unphysical if one were dealing with conformal (Weyl) gravity which is conformally invariant. Since general relativity is not conformally invariant, our discussion entails the use of compensator fields (like the dilaton) whose role is to basically absorb the violations of conformal invariance. Inclusion of such fields in our case helped us to address the comparative analysis of cosmological perturbations in the Jordan and Einstein frames. In section 8 we have considered a special case where we have neglected the dark radiation term in the Friedmann equation and therefore the condition for adiabatic perturbation is satisfied. We have considered the inflation parameters of the model in adiabatic condition. Then we have repeated our analysis in the large scalar field regime. Since in this regime, there was no inflationary phase with a quadratic potential in Einstein frame, we have considered only a quartic potential which is nearly constant in the large scalar field regime. With this choice, the braneworld nature of the model affects the standard form of the slow-roll parameters (and so, other inflation parameters) with a constant factor. For the various values of ξ, we have found the values ofn s andr at the time of horizon crossing. The results are shown in figure 21. 
As ξ increases, the scalar spectral index and the tensor-to-scalar ratio saturate at 0.906 and 0.002, respectively.
A Simple and Fast Non-Radioactive Bridging Immunoassay for Insulin Autoantibodies Type 1 diabetes (T1D) is an autoimmune disease which results from the destruction of pancreatic beta cells. Autoantibodies directed against islet antigens are valuable diagnostic tools. Insulin autoantibodies (IAAs) are usually the first to appear and also the most difficult to detect amongst the four major islet autoantibodies. A non-radioactive IAA bridging ELISA was developed to this end. In this assay, one site of the IAAs from serum samples is bound to a hapten-labeled insulin (GC300-insulin), which is subsequently captured on anti-GC300 antibody-coated 96-well plates. The other site of the IAAs is bound to biotinylated insulin, allowing the complex to be detected by an enzyme-streptavidin conjugate. In the present study, 50 serum samples from patients with newly diagnosed T1D and 100 control sera from non-diabetic individuals were analyzed with our new assay and the results were correlated with an IAA radioimmunoassay (RIA). Using IAA bridging ELISA, IAAs were detected in 32 out of 50 T1D children, whereas with IAA RIA, 41 out of 50 children with newly diagnosed T1D were scored as positive. In conclusion, the IAA bridging ELISA could serve as an attractive approach for rapid and automated detection of IAAs in T1D patients for diagnostic purposes. Introduction Type 1 diabetes (T1D) is an autoimmune disease characterized by the destruction of insulin-producing pancreatic beta cells within the islets of Langerhans. During this autoimmune process, autoantibodies are generated that react against several beta-cell antigens, e.g. insulin, glutamic acid decarboxylase (GAD65), protein tyrosine phosphatase (IA-2) and zinc transporter 8 (ZnT8). These autoantibodies can be present years before disease onset [1], allowing for an early diagnosis before clinical manifestations. Moreover, measuring these autoantibodies allows etiologic diagnosis of a given diabetes case and adaption of treatment accordingly. Insulin autoantibodies (IAAs) are usually the first to appear before T1D development and they are most frequently found in young children, as their level and prevalence at diagnosis inversely correlate with age [2]. One of the current methods for the detection of T1D autoantibodies is enzyme-linked immunosorbent assay (ELISA), in which the immobilized antigen captures autoantibodies from the sample and detection is achieved using labeled antigen [2,3]. However, this method cannot be applicable when measuring IAAs, because it appears that human IAAs cannot react with insulin directly bound to plates [4,5]. IAAs are usually measured by radioimmunoassay (RIA), which is based on immunoprecipitation of 125 I-labeled insulin. However, RIA is expensive, requires newly synthesized radiolabeled antigen for each set of assays, takes more than 24 h to carry out and requires handling and disposal of radioactive products. Recent studies have used electrochemiluminescence (ECL) detection developed by Meso Scale Discovery (MSD) as a method for measuring IAAs [5,6]. Although this technique does not require synthesis of radiolabeled antigens, dedicated equipment is needed, with a relatively high cost compared with most other technologies. Poor correlation between laboratories taking part in international workshops has repeatedly been reported with RIA, with an average low sensitivity for IAA detection [2,7,8]. 
Clearly, there is a compelling need for new and better methods to measure IAAs in terms of sensitivity, cost and time requirements. We describe the development of a non-radioactive bridging IAA assay, where bivalent IAAs are bound to two insulin moieties in solution, thus forming a bridge. This liquid-phase technique allows most insulin epitopes to be available for binding, which is not the case when insulin is directly bound to plates. For the present study, 50 serum samples from patients with newly diagnosed T1D and 100 control sera from non-diabetic individuals were analyzed. The performance of our IAA bridging ELISA was compared with that of an IAA radioimmunoassay kit (RSR Limited, Cardiff, UK) validated by the Diabetes Antibody Standardization Program (DASP). In addition, the sensitivity of our ELISA was compared with that of an electrochemiluminescence assay performed with the MSD technology under the same conditions. Materials and Methods Serum Samples 50 serum samples from newly diagnosed T1D children (26 males, 24 females; mean age 8.8 years; range 0-18) and 100 control sera from non-diabetic individuals (65 males, 35 females; mean age 7.9 years; range 0-18) were analyzed. All samples were obtained before the start of exogenous insulin therapy. Local ethics committees approved the study. Assay Reagents and Equipment Biotinamidohexanoic acid N-hydroxysuccinimide ester (NHS-LC-biotin) and recombinant human insulin expressed in yeast were from Sigma-Aldrich. A mouse monoclonal anti-insulin antibody (IN-05) was from Antibodies-online GmbH (Atlanta, USA). The production and selection of the monoclonal anti-microcystin antibody MC159 used for this study were described previously [9]. Sulfo-TAG N-hydroxysuccinimide [NHS]-ester was from MSD. When performing immunoassays, all reagents were diluted in enzyme immunoassay (EIA) buffer, i.e. 0.1 M phosphate buffer pH 7.4 containing 0.15 M NaCl, 0.1% bovine serum albumin (BSA) and 0.01% sodium azide. Plates were washed with washing buffer (0.01 M phosphate buffer pH 7.4 containing 0.05% Tween 20). Labeling of Insulin with Biotin Biotin was covalently linked to insulin at molar ratios of 3:1 and 10:1 by reaction of an activated N-hydroxysuccinimide ester of biotin with the primary amino groups of the protein. The activated ester was dissolved in dimethylformamide (DMF) and added to a 0.1 M (pH 9.0) borax buffer solution of the protein (less than 5% final DMF concentration). After a 30-minute incubation at room temperature, 100 µL of 1 M Tris-HCl buffer (pH 8.0) was added for 15 minutes, before completing with 500 µL of EIA buffer. HPLC Purification of Biotinylated and GC300-labeled Insulin After coupling of insulin with biotin and GC300, the products were purified using an ÄKTA purifier HPLC system (GE Healthcare, Piscataway, NJ, USA) with a Chromolith Performance RP-18e column (100 × 4.6 mm; Merck Chemicals). The mobile phase consisted of 0.1% aqueous formic acid (eluent A) and 0.1% formic acid in acetonitrile (eluent B). The formic acid-acetonitrile gradient was run from 10 to 50% acetonitrile over 12.5 column volumes at a flow rate of 0.3 mL/min. Each sample (300 µg of either biotinylated or GC300-labeled insulin) was resuspended in 550 µL of 10% acetonitrile and 500 µL of this solution was injected. Elution was monitored at 280 and 220 nm. Fractions were collected, dried using a rotating vacuum concentrator (Eppendorf) and dissolved in 200 µL of EIA buffer.
To identify the different coupling ratios of biotinylated insulin separated by HPLC, the purified fractions were analyzed by mass spectrometry. IAA Bridging ELISA A series of experiments were carried out varying a range of assay parameters in order to establish the optimal conditions. Different concentrations of GC300-labeled insulin and biotin-labeled insulin were tested, ranging from 10 to 500 ng/mL. Furthermore, different incubation times (1, 2, 4 or 16 h) and temperatures with different serum volumes and dilutions were compared. Additionally, tests were made to verify the effect of human serum on the binding of insulin antibodies. Since acetylcholinesterase (AChE)-labeled streptavidin was used as the enzymatic tracer [11], AChE activity was measured by the method of Ellman et al. [12]. Ellman's medium comprises a mixture of 7.5 × 10⁻⁴ M acetylthiocholine iodide (enzyme substrate) and 2.5 × 10⁻⁴ M 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) (reagent for thiol colorimetric measurement) in 0.1 M phosphate buffer (pH 7.4). Enzymatic activity was expressed in Ellman units (EUs). One EU is defined as the amount of enzyme producing an absorbance increase of one unit during 1 min in 1 mL of medium, for an optical path length of 1 cm; it corresponds to about 8 ng of enzyme. The final assay set-up is illustrated in Figure 1. Briefly, 96-well microtiter plates were coated with 100 µL/well of anti-GC300 monoclonal antibody (mAb) MC159 (10 µg/mL) in 50 mM phosphate buffer (pH 7.4). After 18 h of incubation at 20°C, plates were washed and blocked with 0.1% BSA-phosphate-buffered saline (PBS) for 24 h at 4°C. Serum samples (25 µL) were mixed with an equal volume of a 1:1 mixture of biotinylated and GC300-labeled insulin (final concentration 200 ng/mL each) in EIA buffer. After incubating for 1 h at room temperature, this solution was transferred into microtiter plates coated with MC159 mAb and reacted for 2 h at room temperature on an orbital shaker. The plates were subsequently washed 3 times and 100 µL of AChE-labeled streptavidin (2 EU/mL) was added to each well. After 1 h at room temperature followed by 3 washes, 200 µL of Ellman's medium was added to each well for 4 h. The absorbance was measured at 414 nm. As an internal positive control, a mouse anti-insulin mAb (IN-05) was used. Competitive Inhibition Assessment of the competitive inhibition of IAA binding in nine IAA-positive serum samples and a control IAA-negative one was carried out by adding unlabeled insulin during the first incubation step with biotinylated and GC300-labeled insulin. Plate-bound IAAs were subsequently detected with the same ELISA format as described above. RIA All serum samples were further analyzed by an IAA radioimmunoassay kit (RSR, Cardiff, UK), using the supplier's protocol. Briefly, 20 µL of serum was reacted with 25 µL of ¹²⁵I-(A14)-monoiodinated insulin (20,000 cpm) overnight at room temperature. The next day, 100 µL of anti-human IgG was added to each tube to precipitate any labeled insulin-antibody complex that had formed. After 1 h at 2-8°C, each tube was washed twice with 2 mL of cold assay buffer (50 mmol/L K2HPO4/KH2PO4 pH 7.0, 1% Tween 20, 0.5 g/L BSA, 0.5 g/L NaN3) and centrifuged at 1500 × g for 20 minutes at 4°C. The ¹²⁵I-labeled precipitates were counted for 1 minute on a gamma counter, the amount of radioactivity in the precipitate being proportional to the concentration of IAA in the test sample. Positive and negative control sera and a set of assay calibrators containing different concentrations of insulin mAb were included in each assay.
IAA titers above 0.4 U/mL (which corresponds to 2.2% of bound ¹²⁵I-insulin) were considered positive. Electrochemiluminescence Assay High Bind Sector Imager 2400 96-well plates (MSD) were coated with MC159 mAb (40 µg/mL) in 50 mM phosphate buffer (pH 7.4). After an 18-h reaction at 4°C, plates were washed and blocked with 5% BSA-PBS for 3 h. Four IAA-negative serum samples or PBS (25 µL) were spiked with anti-insulin mAb and mixed with 25 µL of GC300-labeled and ruthenium-labeled insulin (prepared with MSD Sulfo-TAG NHS-ester according to MSD instructions) in EIA buffer at a concentration of 200 ng/mL. After 1 h at room temperature, this solution was transferred into MSD plates coated with MC159 mAb and incubated for 2 h at room temperature on an orbital plate shaker. Plates were washed 3 times with washing buffer, 150 µL/well of 1× Read buffer (MSD) was added and plates were read with an MSD Sector Imager 2400 reader. Detection Limit of ELISA and ECL Assay The five-parameter logistic fit (5-PL) function (GraphPad Prism 5) was used to model the characteristic curve for the IAA bridging ELISA and the ECL assay. The limit of detection for both assays was calculated by interpolating the average background signal plus 3 standard deviations on the standard curve. Assay Optimization Different assay formats have a significant impact on the optimal parameters used to maximize assay performance [13]. Anti-GAD and anti-IA-2 bridging ELISA tests have recently been developed that outperform classic liquid-phase RIA formats [3]. We tested several formats to find one suitable for IAA detection, and a liquid-phase ELISA using a bridging format was found to be the most appropriate. As mentioned above, it appears that human IAAs cannot react with insulin directly bound to plates. Indeed, we observed no signal when directly coating insulin on the solid phase (data not shown). At least two different haptens are required for the liquid-phase ELISA format. In the current assay, IAAs form a bridge in solution between biotinylated insulin on one antigen-binding domain of the IAAs and GC300-labeled insulin on the other. The GC300 hapten allows this complex to be captured on MC159 mAb-coated plates, whereas the biotin allows the complex to be detected by the streptavidin-conjugated tracer (Fig. 1). The GC300 molecule was chosen for this assay because it is a synthetic hapten that is not naturally found in the human body and for which a high-affinity mAb (MC159) is available to us. Insulin was labeled with both haptens at different hapten-to-antigen ratios and purified by HPLC. Using mass spectrometry, three different conjugates were observed for biotinylated insulin, namely insulin coupled with one, two or three biotin molecules. The most abundant conjugate was insulin coupled to 2 biotin molecules (45% of the total product). When performing assays with these different insulin-biotin conjugates, a 40% higher signal was obtained when using insulin coupled to 2 or 3 biotin molecules in comparison with insulin coupled to only 1 biotin molecule (data not shown). Using purified double-biotinylated insulin resulted in a higher signal (~40%) than using biotinylated insulin before HPLC purification. This could be explained by the fact that HPLC purification eliminates unbound insulin (15% of the total product), which could otherwise bind to IAAs and lower the signal. For this reason, insulin coupled to 2 biotin molecules was used for subsequent assays.
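The detection-limit estimation described above (fitting a 5-PL curve to calibrator data and interpolating the mean background signal plus 3 SD) can be illustrated with a minimal sketch in Python, assuming SciPy is available. All concentrations, signals, and background replicates below are hypothetical placeholders, not data from this study.

```python
# A minimal sketch of the detection-limit estimation described above:
# fit a five-parameter logistic (5-PL) curve to calibrator data, then
# find the concentration whose predicted signal equals the mean
# background signal plus 3 standard deviations. All numbers below are
# hypothetical placeholders, not data from this study.
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, b, c, d, g):
    # 5-PL: low asymptote a, slope b, inflection c, high asymptote d, asymmetry g
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])                 # ng/mL (hypothetical)
signal = np.array([40.0, 55.0, 95.0, 210.0, 520.0, 900.0, 1150.0])       # mAU (hypothetical)
background = np.array([38.0, 42.0, 39.0, 41.0, 40.0, 44.0, 37.0, 39.0])  # blank replicates

popt, _ = curve_fit(five_pl, conc, signal,
                    p0=[signal.min(), 1.0, np.median(conc), signal.max(), 1.0],
                    bounds=(1e-6, np.inf), maxfev=20000)

threshold = background.mean() + 3.0 * background.std(ddof=1)

# Interpolate the threshold on the fitted curve with a dense grid.
grid = np.logspace(np.log10(conc.min()), np.log10(conc.max()), 2000)
lod = grid[np.argmax(five_pl(grid, *popt) >= threshold)]
print(f"threshold = {threshold:.1f} mAU, estimated LOD = {lod:.2f} ng/mL")
```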
Other optimization steps included finding the most favorable concentrations of reagents (biotinylated and GC300-labeled insulin) for the assay. Tested concentrations ranged from 10 to 500 ng/mL and the balance for formation of the bridging complex was achieved using 200 ng/mL of each reagent. The optimized parameters selected for subsequent experiments are described in Materials and Methods. The intra-assay coefficient of variation was 6.2% (n = 8) and the inter-assay coefficient of variation was 5.8% (n = 5) using an anti-insulin antibody-positive sample. In most studies published to date using different non-radioactive IAA assays, sera are incubated with labeled haptens at 4°C for at least 16 h [5,14,15]. Remarkably, in the present IAA bridging ELISA, the signal did not change when incubating sera of T1D children at 4°C overnight or at room temperature for 3 h. Besides testing different concentrations and temperatures, different serum volumes and dilutions (from 4/5 to 1/10) were compared and the optimal conditions were found to be 25 µL of serum diluted ½ in EIA buffer. Therefore, only 25 µL of the original serum sample is needed for this assay. Diluted sera were incubated with the hapten-labeled reagents for 1 h at room temperature, followed by a 2-h incubation on the mAb-coated capture plates. In order to ascertain the specificity of the assay, competition experiments were performed. An IAA-positive and an IAA-negative serum sample were incubated with serial dilutions of unlabeled insulin (0, 5, 50 and 5,000 ng/mL) together with biotinylated and GC300-labeled insulin (200 ng/mL each; Figure 2A). In addition, eight serum samples with diverse titers of IAA were also incubated with 5,000 ng/mL of unlabeled insulin in the first incubation step of the IAA bridging ELISA (Figure 2B). As expected, addition of 5,000 ng/mL of unlabeled insulin completely inhibited IAA detection in the positive sera, whereas the signal of the negative serum sample remained unaffected, indicating that the binding of IAAs to the capture mAb-coated surface is specific (Figure 2). Comparison of the Bridging ELISA with an ECL Assay Several studies have been published comparing ELISAs with ECL assays. ECL assays are reported to be 3 times [16] and up to 8 times [17] more sensitive than ELISAs. The sensitivity of the bridging ELISA and the ECL assay was therefore compared, both techniques being performed with the same capture mAb and the same GC300 hapten coupled to insulin. Another insulin molecule was coupled to biotin for the bridging ELISA, while for the ECL assay insulin was coupled to ruthenium. Both ELISA and ECL were separately optimized in terms of the concentration of capture mAb, but the same incubation times, reagent concentrations and temperatures were used for both assays (see Materials and Methods). The analysis was done by comparing dilution curves of an IAA-negative serum spiked with anti-insulin mAb. When using the five-parameter logistic fit to model the characteristic curve for both the bridging ELISA and the ECL assay, it was found that the limit of detection was very similar for both techniques (0.6-0.85 ng/mL) (Figure 3). A study has recently been published using the MSD technology to assay IAAs from human serum samples. It was found that binding of insulin antibodies, either a mouse anti-insulin mAb or serum IAAs, was inhibited by normal human sera. To decrease this inhibition, a step of acid treatment of sera was introduced [5].
Interestingly, this phenomenon was only slightly observed when performing the MSD IAA assay using our bridging format. As shown in Figure 4A, no significant inhibition of binding of anti-insulin mAb was observed in normal human serum when assaying low concentrations of anti-insulin mAb compared with PBS. Similarly, no significant inhibition of anti-insulin mAb binding was observed in normal human serum compared with PBS for our IAA bridging ELISA when assaying low concentrations of anti-insulin mAb (Figure 4B). These results indicate that human serum samples can be directly assayed with our ELISA method, thus eliminating a time-consuming pretreatment step. Assay Sensitivity and Specificity In order to validate our IAA bridging ELISA, IAA levels were assayed in serum samples from new-onset T1D (n = 50) and healthy control children (n = 100). The cut-off value of the IAA bridging ELISA was determined based on the mean plus 3 standard deviations (SD) of the control samples, which corresponded to 64 mAU (milli-absorbance units). Using this cut-off, IAAs were detected in 32 out of 50 (64%) T1D children and in 0 out of 100 (0%) healthy controls (Figure 5). The positivity and titers of our IAA bridging ELISA were compared with an IAA RIA (RSR). With the IAA RIA, 41 out of 50 (82%) children with newly diagnosed T1D were scored as positive. The results obtained with both assays were correlated using regression analysis (R² = 0.5492; P < 0.001; Figure 6). In addition, receiver operating characteristic (ROC) curves for both the IAA RIA and the IAA bridging ELISA were drawn from the serum samples of the 50 children with newly diagnosed T1D and the 100 control children (Figure 7). For the IAA RIA, the area under the curve (AUC) was 0.92 (95% CI 0.86-0.99) and the chosen cut-off corresponded to 99% specificity and 82% sensitivity. For the IAA bridging ELISA, the AUC was 0.82 (95% CI 0.73-0.91) and a cut-off value of 64 mAU corresponded to 99% specificity and 64% sensitivity. Out of the 9 samples that were IAA RIA-positive but IAA bridging ELISA-negative, 6 subjects were also positive for 2 other islet autoantibodies (IA-2A, GADA) and 3 subjects for one other islet autoantibody (2 for IA-2A and 1 for GADA). These 9 samples gave very weak signals with RIA, and 6 of them were barely above the cut-off limit of the assay. We cannot exclude the possibility that conjugating insulin molecules with the GC300 and biotin haptens could potentially inhibit the binding of autoantibodies to the antigen. This could explain why these 9 samples were detected by the IAA RIA kit but not with the bridging ELISA. In addition, this discrepancy could be explained by assuming that some paratopes of IAAs are already occupied by circulating free insulin, while two free paratopes are required for the bridging assay.
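The cut-off and performance figures reported above can be illustrated with a minimal sketch of the underlying calculations (cut-off = mean + 3 SD of the control signals, sensitivity and specificity at that cut-off, and the ROC AUC). The signal values used below are synthetic placeholders and do not reproduce the study data.

```python
# A minimal sketch of the cut-off and performance calculations described
# above. The signals are synthetic placeholders, not the study data.
import numpy as np

rng = np.random.default_rng(0)
controls = rng.normal(30.0, 11.0, 100)                     # 100 non-diabetic sera (mAU)
patients = np.concatenate([rng.normal(300.0, 120.0, 32),   # clearly positive-like signals
                           rng.normal(35.0, 12.0, 18)])    # low-signal T1D sera

cutoff = controls.mean() + 3.0 * controls.std(ddof=1)
sensitivity = np.mean(patients >= cutoff)
specificity = np.mean(controls < cutoff)
print(f"cut-off = {cutoff:.0f} mAU, sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")

# ROC AUC via the Mann-Whitney relationship: the probability that a random
# patient signal exceeds a random control signal.
auc = np.mean(patients[:, None] > controls[None, :])
print(f"AUC = {auc:.2f}")
```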
Conclusion For assays detecting autoantibodies against GAD and IA-2, results between laboratories are highly concordant [7] and there exists a World Health Organization serum standard for comparing assays in different laboratories and workshops [18]. In contrast, there is poor agreement between laboratories for IAA assays [2]. We describe herein the development of a non-radioactive IAA assay using a bridging ELISA format, which could be a valid alternative to the RIAs routinely used in most clinical laboratories. Using our IAA bridging ELISA, IAAs were detected in 32 out of 50 T1D children, compared with an IAA RIA scoring 41 out of 50 T1D children as positive. In addition, our IAA bridging ELISA was compared with an IAA ECL assay carried out using the MSD technology, and the limit of detection was found to be very similar for both techniques. Our bridging ELISA has two key advantages. First, no radioactive tracers are required, in contrast to IAA RIAs. Second, most laboratories are already equipped and trained to perform ELISA-based assays, which is not the case for the MSD technology, which requires specialized and costly equipment. This means that our IAA bridging ELISA could be easily implemented in most clinical laboratories without any special requirements and without the need to pre-treat samples. Moreover, it could be easily adapted to an automated platform. A further advantage of our bridging assay is its fast readout (8 h). In summary, our IAA bridging ELISA could be an attractive approach for rapid and automated detection of IAAs in T1D patients for diagnostic purposes. Further validation in at-risk subjects is needed to define its prognostic value for subsequent T1D development.
Composing Energy Services in a Crowdsourced IoT Environment We propose a novel framework for composing crowdsourced wireless energy services to satisfy users' energy requirements in a crowdsourced Internet of Things (IoT) environment. A new energy service model is designed to transform the harvested energy from IoT devices into crowdsourced services. We propose a new energy service composability model that considers the spatio-temporal aspects and the usage patterns of the IoT devices. A multiple local knapsack-based approach is developed to select an optimal set of partial energy services based on the deliverable energy capacity of IoT devices. We propose a heuristic-based composition approach using the temporal and energy capacity distributions of services. Experimental results demonstrate the effectiveness and efficiency of the proposed approach. INTRODUCTION T HE concept of Internet of Things (IoT) has emerged as a result of the advance in multiple technologies, including wireless communication, sensors, and embedded systems [1]. Everyday things are being transformed into IoT devices by embedding tiny sensors, actuators, computing resources and network connectivity. The smart IoT devices provide their augmented functionalities, e.g., sensing and actuating as IoT services. This provides opportunities for integrating the physical world with the cyber world, enabling novel applications in several domains including smart cities, smart homes, agriculture, and healthcare. Crowdsourcing is an efficient way to leverage IoT services [2]. IoT users may crowdsource the functions of nearby IoT devices to fulfill their needs. Crowdsourcing enables a mobile ecosystem to share different services among IoT devices [3]. For example, a set of co-located smartphones may provide computing or storage for a nearby resourceconstrained smartwatch to render a map journey [4]. IoT devices can provide various types of crowdsourced services such as WiFi hotspots, wireless charging and environmental sensing [3], [5], [6]. This provides opportunities to build novel applications such as spatio-temporal targeted recommender systems [7] and shared IoT service markets [8]. The wireless energy transfer, i.e, energy sharing as a service is a key service in the dynamic crowdsourced IoT ecosystem [9]. It enables energy sharing between mobile IoT devices seamlessly. The wireless energy sharing provides more convenience to IoT users compared to carrying power banks or finding stationary power sources. Several IoT devices manufacturers have already adopted the wireless charging technology [9]. For example, the inductive coupling for wireless energy transfer between two smartphones only • A. Lakhdari allows a transmission within millimeters or centimeters [10]. Significant research is striving to support a Watt-level energy transmission over a meter-level distance between IoT devices safely [11]. Two companies, Energous 1 and Wi-Charge 2 have already produced their charging devices prototypes which can deliver up to 3 Watts power over 5 meters to multiple receivers. Recently, the concept of wireless crowd charging has been introduced to provide IoT users with ubiquitous power access through crowdsourcing [12], [13]. We focus on crowdsourcing energy as a service which has the potential to create a green computing environment. The crowdsourced energy service (CES) has two main aspects, (a) harvesting energy, and (b) the wireless transfer between IoT devices [12], [14]. 
The IoT devices are able to harvest energy from different natural sources, i.e., the kinetic movement of IoT users or their body heat [15]. For example, a smart shoe may harvest energy from the physical activity of its user [16], [9]. The harvested energy could be used to charge the nearby IoT devices wirelessly. Energy providers might share energy altruistically to contribute to a green IoT environment. They can also be egoistic since energy is a vital resource for their IoT devices. Therefore, providers would not be interested in sharing their energy unless they receive a satisfying incentive to compensate for their resource consumption. There is a body of research that considers incentives for crowdsourced IoT services [17]. In this paper, we focus on composing crowdsourced energy services. We assume that the providers are incentivized by existing incentive models [17]. The service paradigm is a powerful mechanism to abstract the functionalities of IoT devices [18]. We model the wireless energy transfer as a service. The function of the wireless delivery of energy is represented formally along with its non-functional attributes (i.e., Quality of Service (QoS)). The service abstraction enables some key operations such as the discovery of available energy services, and the composition of energy services based on users' requirements [5]. We focus on composing crowdsourced energy services to fulfill an IoT users' energy requirements in confined areas including coffee shops, restaurants, and waiting areas in airports. A single service provider may not be able to satisfy the energy requirements within the specified time interval. In such a case, we need to combine multiple energy service providers. The composition of crowdsourced energy services is to select the optimal crowdsourced energy services that satisfy the user's energy and QoS requirements. The typical QoS of energy services include availability, provision consistency, and cost. Note that, we only focus on the composition based on functionalities, i.e., fulfillment of energy requirements. We also assume a static environment. i.e., the user and providers do not move during the composition. To the best of our knowledge, there is a limited research work on the composition of energy services. It is challenging to apply the generic service composition approaches directly [19] to compose the energy services due to the following characteristics of the crowdsourced IoT environment: • Real-time composition: In pervasive computing, selecting a set of services across multiple mobile devices presents new challenges that do not occur in traditional settings in service computing [19]. Traditional service composition relies on previously generated composite services to build and complete future compositions. However, in the open environment of crowdsourced IoT, services are independently advertised, deployed and maintained by different IoT devices. Reusing previously composed services is not always possible. • Partial invocation of energy services: One unique feature of the crowdsourced energy services is that they can be invoked partially. An IoT user may consume only a part of the advertised energy from nearby services. • Wireless energy compatibility: Crowdsourced energy services require a novel composability model of energy services. The energy services should consider intensity compatibility between the user's IoT device and the providing devices in a composition. 
Note that, an IoT device may not receive more than a predefined recharging intensity [9]. • Spatio-temporal service discovery: Special considerations are necessary for the efficiency and performance of the selected set of services [3]. In particular, in the crowdsourced IoT environment. IoT services providers and consumers have different spatio-temporal preferences. Therefore, the selected set of crowdsourced energy services will not fulfil the requirement of a consumer if these services are not composable according to: (i) their spatio-temporal features and (ii) preferences of the energy consumers. • Inconsistent provision of crowdsourced energy services: Crowdsourced energy services deliver energy wirelessly. Wireless communication channels are sensitive to the distance and sometimes unreliable. Additionally, the crowdsourced energy services are provided from IoT devices which are already in use by their owners. Hence, delivering consistent wireless energy from an IoT device to another depends on the usage of the device owners. • Volatile crowdsourced energy services: Service providers establish a wireless network in ad hoc ways. energy service provision relies mainly on the distance between IoT devices. They may offer and drop services at arbitrary times due to their mobility. Predicting the availability of crowdsourced energy services depends on defining the context (i.e., location and time of the day) in addition to the usage behavior of the IoT user [20]. We propose a composition framework of energy services extending the energy service model proposed in [5]. The proposed framework considers all the aforementioned challenges except the volatile behavior of energy services due to the mobility of the crowd. In this paper, we focus on composing static services. In the future, we will consider composing mobile energy services in a crowdsourced IoT environment. The aim of this paper is to answer the questions: (a) how to model energy services and queries in a crowdsourced IoT environment?, (b) what are the composability rules for crowdsourced energy services?, and (c) how to compose nearby crowdsourced energy services to fulfill the energy requirement of an IoT user?. The service model enables the device owners to advertise the harvested energy as services. The query model allows the spatio-temporal service discovery and composition for nearby consumers. The composability model defines the possible candidate services for the composition. Here, we assume that the no-lock in contract service invocation is allowed, i.e., the user can leave a service any time during the advertised service time. We propose a modified temporal knapsack algorithm [21] to compose crowdsourced energy services. The composition technique considers the energy description attributes and the spatio-temporal features of IoT devices to select and compose energy services. The main contributions of this paper are: • Designing a novel composability model for crowdsourced energy services. • Developing an energy-usage aware Quality of Service (QoS) model to evaluate energy services. • Developing a heuristic-based spatio-temporal composition approach to select the best composition of energy services. • Conducting experiments on real datasets to illustrate the performance and effectiveness of the temporal composition approach. MOTIVATION SCENARIO People may gather in different places (i.e., confined areas) in a smart city e.g., coffee shop, restaurant, workspace, theatre, etc. (see figure 1a). 
They may harvest energy by their wearables [22]. They may also share their spare energy wirelessly with nearby IoT devices. The distance between IoT devices exchanging energy may reach five meters to ensure a successful wireless transmission. The IoT devices and wearables are assumed to be equipped with wireless energy transmitters and receivers, e.g., Energous and Wi-Charge. An IoT user 'X' wants to charge the depleting battery of its smartphone in a coffee shop. The user 'X' needs to run some critical applications, e.g., to make a call and to check for upcoming appointments. The energy query is processed at the edge, i.e., a router in a confined area (see Figure 1b). All local energy queries and advertised energy services are processed at the edge associated with their confined area. The smartphone casts an energy query in the following way: the user X is looking for an amount of energy E = 450 mAh at the location l during the time interval [t1 = 17:05, t2 = 17:35]. Five people in the same coffee shop are willing to share their energy during X's query period. The required energy can be provided from the energy harvested by the wearables of these people or from their smartphone batteries. Each crowdsourced energy service CES is defined by its available time and provided energy (see Figure 2). None of the available services can provide the required amount of energy to 'X', as all available energy services provide less than 450 mAh. As a result, the composition of multiple services is required to provide the required amount of energy. However, the composition must consider the compatibility of the received current intensity, i.e., the intensity of the aggregated received current cannot exceed the predefined compatibility intensity of the consuming device if multiple energy services are providing energy at the same time. Hence, the composition should select the optimal combination of services. For example, selecting CES 5 for the entire query interval prevents the composition of other energy services like CES 2, CES 3, and CES 4, because of the intensity compatibility condition. Since no-lock-in service invocation is allowed and the component services are composable, chunking the query duration and composing the partial invocations of CES 1, CES 3, and CES 4 given in Table 1 can fulfill the required 450 mAh energy requirement. Each energy service S j is represented by the provided amount of energy S j .EN and the start time S j .st and end time S j .et. The energy queries are also described by the required amount of energy Q i .RE and the start and the end time of the energy query, Q i .st and Q i .et respectively. We formulate crowdsourcing energy services into a service composition problem. Composing energy services in a crowdsourced IoT environment needs to consider the spatio-temporal features of services and queries. Providing energy for a query Q i ∈ Q requires the composition of multiple energy services S j. We consider the following assumptions about a confined crowdsourced IoT environment: • The IoT devices in the crowdsourced environment are equipped to receive and transmit energy wirelessly. • The energy service providers are fixed in one location inside the confined area for the whole duration of the energy query. • The selected services deliver energy continuously once invoked until the consumer finishes. The invocation ends when the consumer switches to another service or the provider closes the service.
• The energy consumers may receive energy from multiple providers at the same time if their aggregated current intensity is lower or equal to the compatibility intensity of the consuming device. • It is possible to invoke the energy services partially. The crowdsourced energy services do not have any Service Level Agreement (SLA) and the services do not have any lock in contract. • The providers have fluctuating energy usage behavior. We identify four main components to build a composition framework for crowdsourced energy services (CES): • Crowdsourced energy service model: The service model describes the function of delivering energy wirelessly and the associated qualities. This representation facilitates implementing a platform for crowdsourcing energy in an IoT environment. Providers use this model to advertise their energy services. • Energy query: The energy query model is defined to describe the IoT users energy requirements and preferences in the simplest way. A query model represents the spatiotemporal preferences and the energy requirements of an energy consumer. The energy query Q i ∈ Q is the main input to the framework. It defines the filtering parameters for the proposed query dependent composability model. • Filtering crowdsourced energy services: The composability model defines whether two energy services are com-posable according to an energy query. This model uses the query spatio-temporal features to find the available nearby energy services. Crowdsourced energy services are filtered based on the spatio-temporal features and the consuming IoT device features. • Composing crowdsourced energy services: The composition algorithm component finds the optimal composition of energy services which provides the required amount of energy within the query duration. The filtered energy services are composed to provide the required amount of energy to the consumer. The proposed composition algorithm is an extension of the temporal 0/1 knapsack algorithm. the aim of this extension is to improve the performance and the time consumption of the temporal 0/1 knapsack algorithm. CROWDSOURCED ENERGY SERVICE MODEL Crowdsourcing energy from IoT devices relies mainly on sharing IoT services with accessible mobile devices in the proximity. IoT-based energy services are modelled using the spatio-temporal aspects of device owners [3]. We extend the existing energy service model [5] by considering the dynamic energy usage behavior of providers and consumers. We formally define the crowdsourced energy service model as follows. Definition 1: A Crowdsourced Energy Service CES is a tuple of < Eid, Eownerid, F, Q > where: • Eid is a unique service ID, • Eownerid is the unique ID of the IoT device owner, • F is the set of CES IoT device's functionalities. • Q is a tuple of < q 1 , q 2 , ..., q n > where each q i denotes a QoS property of CES, e.g., energy capacity. The energy usage behavior of IoT devices also needs to be modeled to ensure a consistent provision of energy services. The energy capacity of IoT devices changes over time and depends on the user consumption behavior [23]. Several consumption models have been proposed for IoT devices [24] [25]. The energy consumption behavior is represented as a time series of the state of charge SoC by a timestamp { SoC(t) : t ∈ T }. We also need to capture the regularity of the energy usage behavior of the device. We use Kolmogorov − Sinai entropy [26] to define the regularity of energy usage time series. 
The lower the entropy value, the more regular the energy usage behavior of the IoT device is. We use the entropy to define some QoS attributes of an energy service. • Energy Intensity: Energy Intensity represents the intensity of the wirelessly transferred energy. The energy is transferred under a certain voltage. We assume that all IoT devices that are related to energy services are functioning under a voltage between 3 and 5 volts. These IoT devices are also compatible in terms of voltage. • Transmission success rate: Transmission success rate T sr is the ratio between the transmitted energy from the energy provider and the received energy by the energy consumer [27]. T sr is calculated (Equation 1) from G t , G r , L p , γ, λ, β, θ, and D, which represent the transmission gain, the reception gain, the polarization loss, the rectifier efficiency, the wavelength, the short-distance energy transmission parameter, the path loss coefficient, and the distance between devices, respectively. • Provision consistency parameter: The consistency parameter α is calculated (Equation 2) from Kolent, the approximate entropy of the usage time series, and EU B, the energy usage behavior of the device owner. If the energy usage behavior of the device is regular, α increases significantly: a seasonal usage time series increases the 1/Kolent value. In [23], different usage patterns of IoT devices are defined, and according to these patterns the energy consumption of the device can be estimated. Providers who want to share their energy have to follow one of these patterns: Suspend, i.e., not using their devices (e.g., EU B = 1), Casual, i.e., using them casually with few functionalities (e.g., EU B = 0.75), or Regular, i.e., using them with a predictable usage behavior (e.g., EU B = 0.50). • Deliverable Energy capacity: Deliverable Energy capacity, DEC, is the energy capacity that a consumer realistically receives. It is affected by the transmission success rate T sr and the provision consistency parameter α. DEC is calculated from the advertised EC, attenuated by T sr and α (Equation 3), and is given in milliampere-hours (mAh). • Location: Location, loc, is the GPS location of the IoT device providing an energy service. • Start time: Start time st is the time of launching an energy service by an IoT device. It is assumed to be announced by energy service providers. • End time: Given the initial energy capacity EC, the intensity of the transferred current I, and the start time of the energy service st, the service effective end time et can be estimated as et = st + EC/I (Equation 4). The energy query model Definition 2: A Crowdsourced Energy Service Consumer Query is defined as a tuple Q < t, l, RE, I, d, Cl >: • t refers to the timestamp when the query is launched. • l refers to the location of the energy service consumer. We assume that the consumer stays fixed after launching the query. • RE represents the required amount of energy. • I is the maximum intensity of the wireless current that a consuming IoT device can receive. • d refers to a user-defined charging period of time designated by its start time and end time. • Cl is the coordination loss. It is the amount of energy to be spent for the connection establishment between the consumer and the provider.
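As an illustration of the service and query models above, the following is a minimal sketch in Python. The exact forms of Equations 1-3 are not reproduced in the text, so the transmission success rate and the consistency parameter are treated as given inputs here, and DEC is assumed to be the advertised capacity attenuated multiplicatively by both; the end-time estimate follows directly from the stated units (capacity in mAh, current in amperes, time in hours).

```python
# A minimal sketch of the CES and query models described above. tsr and
# alpha are treated as given inputs (Equations 1-2 are not reproduced in
# the text), and DEC is assumed to be EC attenuated multiplicatively by
# both; et = st + EC / I follows from the stated units.
from dataclasses import dataclass

@dataclass
class EnergyService:
    eid: str
    loc: tuple        # (x, y) position of the providing device, in meters
    st: float         # advertised start time, in hours
    ec: float         # advertised energy capacity EC, in mAh
    intensity: float  # transfer current I, in amperes
    tsr: float        # transmission success rate, in [0, 1]
    alpha: float      # provision consistency parameter, in [0, 1]

    @property
    def dec(self) -> float:
        # Deliverable energy capacity (assumed multiplicative attenuation).
        return self.ec * self.tsr * self.alpha

    @property
    def et(self) -> float:
        # Effective end time: start time plus capacity / current (mAh -> Ah).
        return self.st + (self.ec / 1000.0) / self.intensity

@dataclass
class EnergyQuery:
    loc: tuple            # consumer location
    st: float             # start of the charging period, in hours
    et: float             # end of the charging period, in hours
    re: float             # required energy RE, in mAh
    max_intensity: float  # maximum current the consuming device accepts, in A
    cl: float = 0.0       # coordination loss, in mAh

# The motivating scenario: a 30-minute query for 450 mAh starting at 17:05.
q = EnergyQuery(loc=(0.0, 0.0), st=17.083, et=17.583, re=450.0, max_intensity=2.0)
s1 = EnergyService("CES1", loc=(1.0, 2.0), st=17.0, ec=300.0,
                   intensity=1.0, tsr=0.8, alpha=0.75)
print(s1.dec, round(s1.et, 2))   # 180.0 mAh deliverable, ends around 17.3
```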
Definition 3: Given a set of crowdsourced energy services S CES = {CES 1 , CES 2 , ..., CES n } and a query Q < t, l, RE, I, d, Cl >, the spatio-temporal crowdsourced energy service composition problem is formulated as selecting the optimal composition of nearby energy services CES i ∈ S CES that can transfer the maximum amount of energy to the consumer within the query duration. COMPOSABILITY MODEL OF CES We present a composability model of crowdsourced energy services to check whether two services are composable according to the constraints of the crowdsourced IoT environment. Four new composability rules are defined based on the spatio-temporal constraints of IoT users, the interface heterogeneity of IoT devices, and the energy loss features. We define query-dependent composability rules according to the consumer's spatio-temporal preferences. The IoT device-related composability rules are defined by intrinsic properties of the IoT device's characteristics and interface. Examples of intrinsic properties include the intensity of the wireless energy and the energy loss while establishing a connection between two IoT devices. We define four composability rules as follows: (a) Spatial composability: Given two crowdsourced energy services CES i and CES j and query location Q.l, CES i and CES j are spatially composable if and only if the distance between the location l of each energy service and the location Q.l is less than the distance permitting a successful wireless energy transfer ESD, i.e., D(CES i .l, Q.l) ≤ ESD and D(CES j .l, Q.l) ≤ ESD (Equation 5). We assume ESD is fixed for all human-centric IoT devices. In Equation 5, D refers to the Euclidean distance. For example, in Figure 4, CES 1 , CES 3 , and CES 5 are composable for the first query, on the right, and CES 7 and CES 9 are composable for the second query, on the left. (b) Temporal composability: The energy query duration defines whether two energy services are temporally composable. We use Allen's interval algebra [28] to define the temporal composability of energy services. Interval algebra defines service and query durations as intervals delimited by their start and end time < Start time , End time >. If CES i , CES j , and Q are two crowdsourced energy services and an energy query respectively, CES i and CES j are temporally composable if and only if the duration of each service is within the query duration Q.d. We also define partial temporal composability for energy services. If the duration of an energy service CES i only overlaps with the query duration Q.d at the beginning or at the end, the energy service is considered as a partially available service. We derive a new service, called a partial service CES i , restricted to only the duration which is within Q.d. The QoS parameters of CES i have to be recalculated accordingly. (c) Intensity compatibility: There are two different rules to check the current-compatible composability of energy services according to the composition scenario: simultaneous or sequential. In both scenarios, the received energy should not be higher than the maximum intensity supported by the consuming IoT device. Given two crowdsourced energy services CES i and CES j and an energy query Q, the intensity compatibility is defined as follows: • CES 1 and CES 2 are sequentially composable if: CES 1 .I ≤ Q.I and CES 2 .I ≤ Q.I • CES 1 and CES 2 are simultaneously composable if: CES 1 .I + CES 2 .I ≤ Q.I
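A minimal sketch of the composability checks defined above is given below, reusing the EnergyService and EnergyQuery classes from the previous sketch; ESD is taken as 5 m, following the motivating scenario, and all of the helper names are illustrative rather than part of the original framework.

```python
# A minimal sketch of the composability rules above, reusing the
# EnergyService / EnergyQuery sketch from the previous block.
import math

ESD = 5.0  # meters, assumed maximum distance for a successful transfer

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def spatially_composable(s1, s2, q) -> bool:
    # (a) Both services must lie within ESD of the consumer (Equation 5).
    return _dist(s1.loc, q.loc) <= ESD and _dist(s2.loc, q.loc) <= ESD

def temporal_overlap(s, q):
    # (b) Portion of the service interval inside the query duration; a
    # partially overlapping service is truncated to this sub-interval.
    start, end = max(s.st, q.st), min(s.et, q.et)
    return (start, end) if start < end else None

def sequentially_composable(s1, s2, q) -> bool:
    # (c) Non-overlapping invocations: each current must respect the cap.
    return s1.intensity <= q.max_intensity and s2.intensity <= q.max_intensity

def simultaneously_composable(s1, s2, q) -> bool:
    # (c) Overlapping invocations: the aggregated current must respect the cap.
    return s1.intensity + s2.intensity <= q.max_intensity
```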
The fourth rule concerns composition eligibility. The energy services from human-centric IoT devices may scale from very small amounts (provided by tiny IoT devices) to considerable amounts shared by bigger devices. In this regard, a consumer might spend more energy receiving from a tiny energy service than the small amount of energy it provides. The energy could be spent in the service discovery and for the connection establishment between the consumer and the provider [29]. Hence, some services may not fit the user's query. It may only be possible to switch from one service to another after a minimum connection time that allows provisioning an energy amount higher than what has been spent in the connection establishment. We define this phenomenon as coordination loss. We calculate the coordination loss of a query Q.Cl in Equation 7. In Equation 7, C c is the rate of energy consumption for communicating one unit of data with the cloud and f c is the flow of data interchanged between the consuming device and the cloud. Similarly, C i is the rate of energy consumption for communicating one unit of data with the IoT device providing the energy service i, and f i is the flow of data interchanged between the consuming device and that IoT device. The service provider also spends energy to coordinate the wireless energy transfer with the consuming device. We call this energy amount P Cl , the provider's coordination loss. P Cl is defined in the same way as Q.Cl in Equation 8. In Equation 8, C j and f j are the rate of energy consumption for communicating one unit of data with the IoT device requesting energy j and the flow of data interchanged with that consuming device j, respectively. An energy service CES is a component service for an energy query Q if and only if the coordination loss for the query Q.Cl is lower than the energy provided by the service CES. COMPOSITION OF CES In this section, we present the composition framework of crowdsourced energy services. The aim of the composition is to maximize the energy obtained by the consumer within the query duration from the available nearby services with respect to their spatio-temporal and energy-related constraints. We start by explaining the filtering process of energy service candidates based on their spatio-temporal features. We then present different spatio-temporal composition techniques of crowdsourced energy services. Finally, we propose a heuristic to reduce the search space of candidate energy services. Filtering crowdsourced energy services The first step in the composition is to design the filtering process to select the candidate services. We need an efficient indexing method for the fast discovery of energy services. Location and time are intrinsic parts of energy services. Therefore, we index energy services based on spatio-temporal characteristics. The 3D R-tree is a spatio-temporal index data structure which deals with range queries of the type "report all objects within a specific area during the given time interval" [30]. Time is added as a third axis to the two spatial axes. When a query Q arrives, an area is defined by the location of the consumer Q.l and a distance r allowing the wireless power transmission between IoT devices. We use the 3D R-tree to index services spatio-temporally. We deploy a function called the spatio-temporal selection algorithm (see Algorithm 1) to select energy services spatio-temporally [3]. The function takes as input parameters the position of the consumer Q.l, the start time of the query Q.t, and the duration of the query Q.d. The output of the spatio-temporal selection algorithm is the set of all nearby available services NearbyS between the start time and the end time of the query (Algorithm 1, Lines 1-2).
We select services located in a defined area in the time interval [Q.t, Q.t + d] using the spatio-temporal composability rules. Each energy service has a time interval [st, et] and a location loc. Figure 5 represents a query Q and five energy services CES. The services are filtered spatially by selecting just the services inside the area defined by the consumer location Q.l and the spatial composability rule (see Figure 4). The services are also filtered temporally by the temporal composability rule, choosing only services whose time interval is within or overlaps with the query duration [Q.t, Q.t + d] (see Figure 5). Each leaf in the 3D R-tree is considered as a CES. A search cube SC is determined by the query location Q.l and duration Q.d. All leaves inside or overlapping with the search cube are selected (Algorithm 1, Lines 3-7). The services overlapping with the query duration, like CES1, CES2, and CES5 in Figure 5, are called partially available services. The query duration overlaps only with parts of the time intervals of these services. We define new services only in the parts overlapping with the query duration. We also consider the energy provided by these services only in the overlapping parts (Algorithm 1, Lines 9-23). Let us assume that, given two services CES i and CES j and an energy query Q, the availability times of CES i and CES j are within the query duration Q.d. The waiting time between the two services CES i and CES j , called W t, is defined in Equation 10 as W t = CES j .st − CES i .et. If the consuming IoT device has enough energy to sustain until the second service, the two services are not composable. For example, in Figure 6 the candidate services are in two different clusters, and there is a time gap between the two clusters (Figure 6: dividing a query based on the waiting time). The consuming device can sustain along this period of time. Hence, the energy query can be divided into two shorter queries for more convenience of the IoT user. Moreover, the first query on the left may be dropped since the consumer has enough energy.
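The filtering and query-splitting steps can be sketched as follows. For brevity, a plain scan over the services stands in for the 3D R-tree index used by Algorithm 1, and scaling a partial service's deliverable energy by its retained fraction of the duration is a simplification of the QoS recalculation described earlier; the sketch builds on the EnergyService and EnergyQuery classes from the previous blocks.

```python
# A minimal sketch of the filtering and query-splitting steps above. A plain
# scan stands in for the 3D R-tree index; partial services are truncated to
# the overlapping sub-interval and their deliverable energy is scaled by the
# retained fraction of their duration (a simplification).
import math

def filter_services(services, q, esd=5.0):
    candidates = []
    for s in services:
        if math.hypot(s.loc[0] - q.loc[0], s.loc[1] - q.loc[1]) > esd:
            continue                                    # spatial rule
        start, end = max(s.st, q.st), min(s.et, q.et)
        if start >= end:
            continue                                    # no temporal overlap
        frac = (end - start) / (s.et - s.st)            # retained fraction
        candidates.append({"service": s, "st": start, "et": end,
                           "intensity": s.intensity, "dec": s.dec * frac})
    return candidates

def split_by_waiting_time(candidates, q, sustain_hours):
    # Waiting time between consecutive services (Equation 10); if the
    # consuming device can sustain across the gap, the query is divided
    # into shorter sub-queries around that gap.
    ordered = sorted(candidates, key=lambda c: c["st"])
    cuts = [q.st]
    for prev, nxt in zip(ordered, ordered[1:]):
        wt = nxt["st"] - prev["et"]
        if 0 < wt <= sustain_hours:
            cuts.append(nxt["st"])
    cuts.append(q.et)
    return list(zip(cuts, cuts[1:]))
```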
Greedy composition of CES IoT users have different preferences for their energy request. An IoT user requires a certain amount of energy Q.RE for a defined query time interval [Q.t, Q.t + d]. The user may prefer to recharge at the earliest possible time. They might also prefer to get Q.RE in the shortest period of time. Another preference is to get the maximum of energy during Q.d. We generate different rankings for energy services based on the different user preferences, e.g., earliest time, shortest time, or maximum energy. If the energy query can be satisfied by one service, the best service is selected based on the user preference. If an energy request cannot be satisfied with one single service, multiple services need to be composed. Note that the filtered energy service candidates are spatio-temporally composable. Energy services might be composed sequentially if they do not overlap. Services can also be composed simultaneously if they overlap during the query time interval. The intensity compatibility rule must be verified during composition. The assumption of this composition technique is that energy services within the query duration cannot be decomposed into services of shorter intervals. The greedy composition relies mainly on selecting and composing the best services in terms of the user preferences, e.g., maximum energy, earliest service, or shortest service time. The recursive Algorithm 2 presents the systematic selection of the optimal set of services among the spatio-temporally composable services. The selected service BestS is checked for composability, according to the intensity compatibility rule, with the already selected services CompositeS (Algorithm 2, Lines 5-9). If the service BestS is composable, it is added to the composite service CompositeS (Algorithm 2, Lines 10-11). The greedy algorithm stops composing services when the user query is satisfied or no service is available for further consideration. Multiple local knapsack-based CES Composition One of our assumptions is that energy services can be consumed partially. The user may switch to other energy services within the query duration. We define all the possible timestamps where the user may need to switch to a better service. Simply, the user may switch to a newly available service if the new service is better than the current service. If no better service is found, at the end of a service, the user needs to switch to the best available service. We divide the query duration into several time slots based on these timestamps (see the vertical lines in Figure 5). The time slots represent the arrival time of a new service or the exit time of an existing service (Algorithm 3, Lines 5-13). For example, the start time of CES3 defines the first time slot, and the start time of CES4 defines the second time slot. After dividing the query duration based on the defined timestamps, some chunks may cover a very short period of time, which might not respect the composition eligibility rule: energy consumers may lose more energy than the energy provided by the selected service within such a thin chunk due to the connection establishment. We propose a smoothing technique for the thin chunks in three situations as follows: • The thin chunk is created by two consecutive end times et of services. We propose to eliminate the latest timestamp defined by et and widen the next chunk on the right. We propose a multiple knapsack composition approach which aims to select an optimal set of partial energy services based on their deliverable energy capacity DEC and the defined chunks. Partial energy services which are available within the chunk are constrained by the intensity compatibility rule. The sum of the intensities of the simultaneously composed partial services cannot exceed the compatible intensity of the consuming device Q.I. In this situation, we have a local partial-service composition for each chunk. The resulting composite services from all chunks are grouped into a global composite service which provides the maximum amount of energy from all the composable service candidates (Algorithm 3, Lines 15-27). The composition problem in each chunk is formulated as a 0/1 knapsack problem. A knapsack problem is the selection of a set of items having weights and values by maximizing the total value of selected items considering the limited weight capacity of the knapsack (Algorithm 3, Lines 19-25). We interpret each partial energy service within the chunk as an item. The service intensity I is considered as the weight of a partial service, and DEC is considered as its value. The knapsack weight capacity is defined by the maximum compatible intensity of the consuming device Q.I.
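The chunking and per-chunk selection at the core of the multiple local knapsack composition can be sketched as follows, building on the filtering sketch above. Subset enumeration stands in for the 0/1 knapsack solver (chunks typically contain few services), and a service's remaining deliverable energy is not decremented across chunks in this simplification.

```python
# A minimal sketch of the multiple local knapsack composition: chunk the
# query window at every service start/end time, then pick, in each chunk,
# the subset of active partial services that maximizes delivered energy
# while the summed current stays under the device cap.
from itertools import combinations

def chunk_boundaries(candidates, q):
    cuts = {q.st, q.et}
    for c in candidates:
        cuts.update((c["st"], c["et"]))
    cuts = sorted(t for t in cuts if q.st <= t <= q.et)
    return list(zip(cuts, cuts[1:]))

def best_subset_for_chunk(active, max_intensity, hours):
    best, best_energy = [], 0.0
    for r in range(1, len(active) + 1):
        for combo in combinations(active, r):
            if sum(c["intensity"] for c in combo) > max_intensity:
                continue                              # intensity compatibility
            # Energy in this chunk: current (A) * hours * 1000 -> mAh,
            # capped by each service's deliverable energy.
            energy = sum(min(c["intensity"] * hours * 1000.0, c["dec"]) for c in combo)
            if energy > best_energy:
                best, best_energy = list(combo), energy
    return best, best_energy

def compose(candidates, q):
    plan, total = [], 0.0
    for start, end in chunk_boundaries(candidates, q):
        active = [c for c in candidates if c["st"] <= start and c["et"] >= end]
        subset, energy = best_subset_for_chunk(active, q.max_intensity, end - start)
        plan.append((start, end, [c["service"].eid for c in subset]))
        total += energy
    return plan, total
```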
Heuristic-based spatio-temporal CES composition Our objective is to improve the multiple local knapsack-based composition technique. Solving a 0/1 knapsack problem at each chunk provides an exhaustive exploration of all the possible compositions, which has an exponential runtime cost. The heuristic aims to merge some chunks to reduce the number of local optimizations. An approximate way to preserve the optimal compositions is to merge two consecutive chunks having the same service providing the maximum energy. Most probably, this latter service is selected by the 0/1 knapsack algorithm in addition to other composable services. The heuristic selects only two miniservices at each chunk: the miniservice providing the maximum amount of energy and the next one providing the most energy (Algorithm 4, heuristic-based space reduction). The selected miniservices have to be composable in terms of compatibility. If no service is composable with the service providing the maximum energy amount at that chunk, the heuristic takes only the latter. First, we eliminate all the delimiters where the service providing maximum energy does not change over two consecutive chunks (Algorithm 4, Lines 15-21). A 0/1 knapsack algorithm is then applied to all the new chunks (Lines 22-31). Table 2 presents the energy provided by each service at each chunk after chunking the query duration (see Figure 5). Chunks 2, 3, and 4 are merged together because the service CES3 provides the maximum amount of energy over these three chunks. Time chunks 5 and 6 are also merged because the service CES4 keeps providing the maximum amount of energy along these two chunks. In the illustrated example, the number of initial chunks has been reduced from 7 slots to 4 slots (see Figure 7). The number of new chunks N nc may be only slightly lower than, equal to, or significantly lower than the number of the original chunks N orc . The intuitive explanation of a steep decrease in the number of new chunks N nc is the existence of a small number of services with high intensity along the query duration. Conversely, if N nc is only slightly lower than N orc , it means that all the available services along the query duration are short services in terms of availability and comparable in terms of the provided energy. The complexity of the three proposed algorithms can be estimated based on the number of available services and the number of chunks. Since the greedy algorithm is short-sighted, it has an efficient runtime complexity with a limited performance. The runtime complexity of the greedy composition amounts to the systematic selection among n available services multiplied by the n − 1 composability checks against the selected service, i.e., O(n²). The complexity of the multiple knapsack-based composition can be estimated based on the number of chunks C and the complexity of the algorithm solving the 0/1 knapsack problem at each chunk with respect to the intensity compatibility. We consider a dynamic programming solution for the 0/1 knapsack problem. The runtime complexity of the multiple knapsack-based composition is O(Cm²), where m is the number of available partial services within a chunk. The difference between the heuristic-based composition and the multiple knapsack-based composition is the significant reduction in the number of chunks of the energy query and the consideration of only two partial services at each chunk. The heuristic-based composition complexity thus becomes O(Cm).
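The chunk-merging heuristic can be sketched as follows, building on the chunking sketch above: consecutive chunks are merged whenever the service providing the maximum energy does not change between them, which reduces the number of local knapsack problems to solve.

```python
# A minimal sketch of the chunk-merging heuristic described above.
def merge_chunks(chunks, candidates):
    def top_service(start, end):
        active = [c for c in candidates if c["st"] <= start and c["et"] >= end]
        return max(active, key=lambda c: c["dec"])["service"].eid if active else None

    merged = []
    for start, end in chunks:
        top = top_service(start, end)
        if merged and merged[-1][2] == top:
            merged[-1] = (merged[-1][0], end, top)   # extend the previous chunk
        else:
            merged.append((start, end, top))
    return [(s, e) for s, e, _ in merged]
```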
EXPERIMENT RESULTS

We compare the proposed composition algorithms with two other composition algorithms: a priority-based resource scheduling algorithm [31] and a QoS-aware time-constrained composition algorithm [32]. For the priority-based resource scheduling algorithm, we consider energy services as resources and maximizing energy as the priority; this algorithm relies mainly on selecting and composing the best energy services without chunking them. The QoS-aware service composition transforms the composition into a time-constrained optimization problem and then solves it using genetic algorithms; after chunking the query duration, it finds the optimal composition which maximizes the provided energy within the query duration. We evaluate the different composition techniques with two sets of experiments. First, we evaluate the effectiveness and feasibility of each composition technique in terms of the number of successfully served queries. We compare how close the completeness (i.e., the ratio of successfully served queries over the number of all queries) given by our heuristic-based composition algorithm is to the completeness of the optimal composition given by the proposed brute-force-like algorithm (the multiple knapsack-based composition algorithm) [33], [34]. Second, we evaluate the scalability of each composition algorithm by measuring the computation time while varying the number of energy services. Energy services have different spatio-temporal features and provided energy amounts. The experiments only test the performance of the composition framework from a single consumer's perspective, in order to evaluate the runtime efficiency and the effectiveness of the proposed algorithms in different scenarios. The effectiveness test hypothesis is: the more energy services are available in a confined area (e.g., a coffee shop), the more energy queries are successfully provisioned by the proposed composition framework. In future work, we will extend the composition framework to process multiple parallel energy queries by labeling the already reserved services using different priority strategies.

Datasets and experiment scenarios

We create a realistic crowdsourced IoT environment scenario to evaluate the performance of the proposed composition approach. To the best of our knowledge, there is no dataset about wireless energy sharing among human-centric IoT devices. We consider that crowdsourced energy services are provided by wearables or by the spare energy of the smartphone batteries of IoT users. We create an energy crowdsourcing environment close to reality based on a renewable energy sharing environment (footnote 3). A set of 25 houses daily harvest, consume, and provide energy from their solar panels over two years (April 2012 to March 2014). They are considered either as energy providers or consumers. Energy consumption and production are recorded every 30 minutes. Each house has 730 daily records (365 x 2), and each daily record has 48 x 2 fields for the energy produced and consumed during that day. The crowdsourced energy service QoS parameters are defined based on these records. We normalize all the energy measurements in all records from watt-hours to milliampere-hours (mAh) to mimic the energy provided and consumed by IoT devices, e.g., smartphones and wearables.
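The unit conversion itself is not spelled out in the text; one plausible reading, assuming a nominal cell voltage typical of phones and wearables (3.7 V), is:

def wh_to_mah(watt_hours, nominal_voltage=3.7):
    """Convert a watt-hour reading to milliampere-hours at an assumed nominal cell voltage."""
    return watt_hours / nominal_voltage * 1000.0

print(round(wh_to_mah(11.1)))   # 3000 mAh, roughly a smartphone battery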
The deliverable energy capacity DEC of an IoT energy service S_i starting at S_i.st and ending at S_i.et is defined by randomly matching a daily record of a provider from the renewable energy sharing environment, considering only the energy produced during the same period of time. Similarly, the energy requirement of a query Q.RE is generated and normalized from the daily energy consumption of the houses according to the query duration Q.du. We use the Yelp dataset (footnote 4) to define the spatio-temporal features of IoT energy services and queries from the check-in and check-out timestamps of people in a confined area. The dataset contains people's check-in information for different confined areas, e.g., coffee shops, restaurants, and libraries. We consider these people as IoT users who are either energy providers or consumers. For example, the start time st of an energy service provided by an IoT user is the time of their check-in at a coffee shop. Energy query times Q.ts and durations Q.du are also generated from the check-in and check-out timestamps of consumers. We use a uniform distribution …

3. https://data.gov.au/dataset
4. https://www.yelp.com/dataset

We define different scenarios to set the quality parameters of IoT-based energy services. The scenarios are defined by the capacity of the provided energy and the availability duration of services. The service duration varies from 5 minutes to 1 hour to cover different scales of IoT devices. We also define multiple scenarios of energy queries, ranging from short-duration queries with a low amount of required energy to long-duration queries with a high amount of required energy. The query duration Q.du varies between 10 minutes and 2 hours. Tables 3 and 4 present statistics about the used datasets. We grouped the scenarios requiring composition into 5 different meta scenarios. Two meta scenarios (MetaScen 1, MetaScen 2) are defined by short-duration services delivering energy to queries with short and long duration, respectively. The next two meta scenarios (MetaScen 3, MetaScen 4) are defined by long-duration services, also delivering energy to queries with short and long duration. The last meta scenario (MetaScen 5) represents an aggregated view of all the composition scenarios. We consider a further energy quality attribute to verify the intensity-compatible composability rule: we set the intensity of each energy service in the range CES.I ∈ [0.5, 1.5] A, considering the variety of human-centric IoT devices, and the compatible intensity of each energy query within the interval Q.I ∈ [1, 2.5] A.

We investigate the effectiveness and the computation time of the three proposed composition techniques with a large number of energy services. 50,000 different IoT users with multiple check-ins into multiple confined areas have been identified from the Yelp dataset. We vary the ratio between the number of services and the number of queries (N.CES/N.Q) among the IoT users from 1 (the number of services equals the number of queries) to 9 (nine times as many services as queries). In this paper, we only focus on composition from a single consumer's perspective; scalability is reflected by the run-time efficiency of the composition algorithms. The coordination of parallel energy queries will be considered in the future.
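For illustration, the attributes above could be instantiated as in the snippet below. The uniform draws over the stated intensity ranges and the field layout are assumptions; only the ranges themselves come from the text.

import random

def make_service(check_in, check_out, dec_mah):
    """A crowdsourced energy service derived from one provider's check-in interval."""
    return {"st": check_in, "et": check_out,
            "I": random.uniform(0.5, 1.5),        # CES.I drawn from [0.5, 1.5] A
            "DEC": dec_mah}

def make_query(check_in, check_out, required_mah):
    """An energy query derived from one consumer's check-in interval."""
    return {"ts": check_in, "du": check_out - check_in,
            "I": random.uniform(1.0, 2.5),        # Q.I drawn from [1, 2.5] A
            "RE": required_mah}

print(make_service(600, 2400, 350.0), make_query(900, 1800, 500.0))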
Effectiveness

We investigate the effectiveness of the proposed composition techniques by measuring the completeness of energy queries versus the ratio N.CES/N.Q, which expresses the number of services relative to the number of queries. We define a threshold parameter SQ for energy queries and consider a query successfully served by the neighboring services if at least a fraction SQ of the required energy has been served. For example, with SQ = 0.8 the energy query Q_i is successfully served if 80% of the required energy has been provided by the neighboring devices. We vary SQ from 0.7 to 1 for the different composition techniques to measure the ratio of the number of successful queries to the total number of queries, and we report the average value for each technique. We define the completeness parameter as the ratio of successfully served queries to the number of all queries; its highest value is 1.0.

Figure 9 shows the completeness parameter with respect to N.CES/N.Q. In terms of performance, the greedy, QoS-aware (GA-based), and priority-based composition algorithms present low completeness scores compared to the heuristic-based and the multiple knapsack-based compositions. The GA-based composition has the lowest completeness score regardless of the number of services and the length of their duration (see Figures 9a, 9b, and 9c). This can be explained by the inability of the QoS-aware composition to consider simultaneous services and by the fact that the genetic algorithm cannot check intensity composability. The greedy algorithm and the priority-based composition present comparable results because neither algorithm can change a selected service in the midst of a composition unless that service ends. The greedy composition performs better with short services, since short-duration services allow it to combine a large number of services, whereas the priority-based composition performs better with long services. Committing to a selected long service will definitely …

Overall, the heuristic-based composition presents results competitive with the multiple knapsack-based composition (see Figure 9), especially for long services (see Figure 9b), because the heuristic-based chunking differs little from the initial chunking of the query when services are long: a query usually has fewer chunks with long services. Figure 9a reflects the strong performance of the multiple knapsack-based composition with short services; chunking the query duration based on short-duration services leads to more multiple knapsack optimizations. In contrast, the heuristic is not strongly affected by the number of chunks, which depends on the length of the service durations: it first defines the same number of chunks as the multiple knapsack algorithm, then the chunks are widened based on the services providing the maximum energy amount, so the number of chunks is significantly reduced in the heuristic-based composition approach.
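A minimal sketch of the completeness measure defined above follows; the query energies in the example are hypothetical.

def completeness(required, delivered, sq=0.8):
    """Fraction of queries whose delivered energy reaches at least SQ of the requirement."""
    served = sum(1 for r, d in zip(required, delivered) if r > 0 and d / r >= sq)
    return served / len(required)

# Three queries: the second receives only 75% of what it asked for, so it does not count as served.
print(completeness([100.0, 200.0, 300.0], [90.0, 150.0, 300.0], sq=0.8))   # 0.666... (2 of 3 queries served)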
We analyse the distribution of the successfully served queries for the proposed algorithms to corroborate the previous effectiveness evaluation, which only considers the average number of successfully served queries. In this set of experiments, we vary the ratio N.CES/N.Q and plot the corresponding boxplot for all the proposed composition algorithms (see Figure 10). The greedy composition presents a sparse distribution of the completeness score even with an increasing number of services (see Figure 10a). However, the multiple knapsack and the heuristic-based compositions show a similar behavior: the sparsity of the completeness scores decreases significantly when the number of available energy services increases (see Figures 10b and 10c). This can be explained by the fact that the more services are available nearby, the more likely the composition will fulfill the energy requirements. In particular, when the ratio N.CES/N.Q is higher than 6, almost all the compositions are successfully served.

Scalability

We measure the average execution time of each composition technique in the five different meta scenarios. Figure 11 presents the execution time of each technique with respect to the ratio N.CES/N.Q; the number of nearby services varies from 5 to 50 energy services for each query. The results show that the execution time increases as the ratio N.CES/N.Q increases. This behavior is expected for all the composition techniques because of the increase in the number of services from IoT users. The heuristic-based composition technique has a lower execution time than the other compositions in all scenarios (see Figures 11a and 11b). The multiple knapsack-based composition consumes more CPU time in all scenarios than the greedy and the heuristic-based techniques: it starts by chunking the query duration based on the arrival and the end of energy services, and a 0/1 knapsack optimization is then performed at each chunk. The number of optimization operations performed by each composition technique explains the difference between the heuristic and the multiple knapsack composition techniques; the heuristic-based composition aims to minimize the number of optimizations by reducing the number of chunks. Moreover, the heuristic takes less CPU time than the greedy algorithm in all scenarios because it does not apply the 0/1 knapsack algorithm at each chunk; instead, it selects the mini-service providing the maximum amount of energy in that chunk and then adds the second mini-service, providing the next-highest amount of energy, as long as it is composable in terms of compatibility. The behavior of the greedy composition is explained by the fact that it does not perform any optimization while selecting services. However, the priority-based and the GA-based composition techniques have very poor runtime efficiency in the crowdsourced IoT environment: even with a small number of available services nearby a consumer, they have to find all the possible combinations before selecting the optimal composite energy service. The priority-based composition finds all the possible combinations without chunking the available energy services, while the GA-based composition finds all the possible combinations after chunking the query duration, which explains its highest CPU time across all scenarios.

RELATED WORK

The background of our work comes from three different areas, i.e., energy harvesting and wireless energy transfer, crowdsourcing, and service selection and composition. We describe the work related to our research in each of these domains.

IoT based crowdsourcing

Mobile crowdsourcing [2] has recently emerged as a new paradigm represented in a number of applications such as crowdsensing and crowdcomputing. SignalGuru [35] is a location-based mobile application that opportunistically detects current traffic signals using smartphones; these traffic signals are collaboratively exchanged through an ad-hoc network. Crowdcomputing likewise leverages mobile devices to compute outsourced data from different sources such as sensors or smartphones.
Honeybee [36] is a local-based application which outsources face detection tasks to the neighboring devices. Femtoclouds [4] also represent significant exploitation of crowdsourcing smartphones capabilities. Femtoclouds are a cloud-type computing environment which harnesses a set of co-located smartphones as computing service providers for lightweight applications. CaaS [3] is a framework for WiFi hotspot sharing. This framework provides a WiFi covered trip plan by composing sensordata services from public transportation and WiFi coverage services from public and individual hotspots. Energy harvesting and wireless transfer In the context of IoT, body movement and heat provide a significant source of energy for wearables [16]. This energy can be converted to electric power which can satisfy IoT devices. Recently, several research have been conducted to integrate harvesting this energy into designing IoT objects [15], [37]. The Kinetic Energy Harvesting (KEH) for IoT is designed to capture kinetic energy from wearables by exercising different daily activities such as walking, running, and reading [15]. The advent of wireless charging makes the harvested energy from IoT devices more flexible and convenient to be easily shared. Energy sharing helps create self-sustained systems. Different techniques have been developed for the wireless charging in IoT and sensor networks [9]. The most common techniques are magnetic inductive coupling, magnetic resonant coupling and microwave radiation. These techniques are used in wireless sensor networks by deploying charger robots in the network to charge the low battery sensors [29]. A new paradigm of uncoupled wireless charging based on radio waves has emerged [38]. The Energous Wattup applies radio waves to enable wireless energy sharing for IoT devices. Wireless crowd charging is a new paradigm in the wireless transfer technology [12], [14], [13]. This paradigm has been introduced by Bulut al. to provide IoT users with ubiquitous power access through crowdsourcing [12]. Dhungana et al. provide a recent comprehensive survey on the use of peer-to-peer energy sharing in four different applications of mobile networks, namely, wireless sensor networks (WSN), mobile social networks (MSN), vehicular ad hoc networks (VANET) and UAV networks. They explore the technical approaches and mathematical tools adopted for the utilization of energy sharing in the aforementioned domains [14]. Raptis et al. claim that having even limited knowledge on the crowd network properties can be crucial for the design of crowd charging protocols. A key characteristic of such crowds is the active presence and involvement of the users in online social networks. They suggest the exploitation of online social information in order to tune the wireless crowd charging process [13]. In our paper, we harness the service paradigm to abstract wireless energy services and enable energy sharing in the crowdsourced IoT environment. Service selection and composition The service selection and composition is a topical research in different domains such as cloud computing, sensor-cloud services, and social networks [3], [39], [4]. In a sensorcloud framework, services are composed according to their functional properties and the consumer preferences (QoS).The service composition methods usually convert the composition to a resource scheduling or an optimization problem. Resource scheduling in service composition has been extensively researched [40]. 
The fundamental parameters of resource scheduling algorithms are the optimization target and the scheduling priority. The scheduling target in service composition is defined mostly by the resource utilization maximization, resource utilization fairness, or minimization of scheduling time [41], [42]. The scheduling priority defines which services to be privileged. For example, the priority for Short Job First (SJF) scheduling algorithm is shortest jobs to be scheduled first [31]. Similarly, Directed Acyclic Graphs (DAG) represent one of the major algorithms used in service composition to define the scheduling and priority policies of services. Optimization methods utilize different algorithms to obtain an optimal solution such as integer programming, genetic algorithm (GA), and particle swarm optimization (PSO) [32], [43]. A data-driven service composition approach is proposed based on Petri-nets to meet the need of the business requirement [44]. Service functionalities may be constrained [45]. Wang et al. proposes a constraintaware service composition method [46]. Their solution includes novel prepossessing techniques and a graph searchbased algorithm [46]. Various QoS-aware service selection methods have been proposed in [39]. Deng et al. proposed a service pruning method to address the QoS-correlation problem in service selection and composition [47]. The composition of crowdsourced services [3] should consider two aspects, the spatio-temporal features of the consumers and their preferences. Usually, crowdsourced services are from different sources (mobile and static devices). The functional properties of the provided services should conform the spatio-temporal features of the query. The consumer preferences have to be met by the QoS of the provided services. Neiat et al. [3] designed a spatio-temporal service composition framework to compose WiFi hotspots services and to provide the most convenient trip plan with the best crowdsourced WiFi coverage. The proposed composition method uses the resulting composition of WiFi hotspots services as a QoS for selecting the best combination of line segment services. The advancements in mobile devices and communication technologies enables the new mobile applications which rely on interaction between mobile services [4]. The traditional service composition techniques cannot perform in this pervasive mobile environment [48]. Pervasive Information Communities Organization (PICO) [49] is one of the first proposed middleware to compose services in a pervasive environment. PICO represents dynamic resources as services. The component services are modeled as directed attributed graphs. These basic services are dynamically combined to serve consumer requests. Han and Zhang developed a dynamic source routing based service composition protocol [50]. The proposed composition protocol efficiently consider QoS in real-time systems such as delay and cost. However, the developed composition protocol does not consider the users' mobility. A service composition technique that incorporates the providers' mobility is proposed in [51]. The proposed solution is efficient in term of selecting the most stable service in a dynamic environment. Crowdsourcing energy as a service is converted to a Qosaware service composition problem. The spatio-temporal of energy services are considered as Qos attributes (i.e., start and end time, duration and location of an energy service). 
Composing energy services relies on finding the optimal selection of nearby services that fulfills a consumer's energy requirement within their query duration. Existing composition techniques may not be directly applicable to composing energy services because of the uniqueness of the crowdsourced IoT environment: an IoT user may consume only a part of the energy advertised by nearby services. Another key issue is the need for a novel composability model of energy services, which should account for intensity compatibility between the user's IoT device and the providing devices in a composition.

CONCLUSION

We propose a novel composition framework to crowdsource wireless energy services from IoT devices. We design a novel composability model considering the energy usage behavior and the spatio-temporal aspects of the IoT devices, and we formulate the composition problem as a multi-objective optimization of meeting users' energy requirements in the earliest and shortest time intervals. We conduct a set of experiments to investigate and compare the scalability and the effectiveness of the proposed composition techniques, i.e., a greedy composition technique, the multiple knapsack-based composition, and a heuristic-based composition approach. In an IoT environment, energy services may scale from the very small capacities provided by tiny devices to the considerable capacities provided by bigger devices. Results show that the proposed approach is scalable and effective in various composition scenarios, and that the greedy and heuristic-based composition approaches are more scalable and runtime-efficient than the multiple knapsack-based composition approach. In future work, we will explore the mobility challenges of composing services in a crowdsourced IoT environment.

ACKNOWLEDGEMENT

This work was made possible by grant NPRP9-224-1-049 from the Qatar National Research Fund (a member of Qatar Foundation) and grants DP160100149 and LE180100158 from the Australian Research Council. The statements made herein are solely the responsibility of the authors.
Recursion Operator and Rational Lax Representation We consider equations arising from rational Lax representations. A general method to construct recursion operators for such equations is given. Several examples are given, including a degenerate bi-Hamiltonian system with a recursion operator. I. Introduction Recently a new method of constructing a recursion operator from Lax representation was introduced in [1]. This construction depends on Lax representation of a given system of PDEs. Let be Lax representation of an integrable nonlinear system of PDEs. Then a hierarchy of symmetries can be given by where t 0 = t, A 0 = A and A n , n = 0, 1, 2 . . . , are Gel'fand-Dikkii operators given in terms of L. The recursion relation between symmetries can be written as L t n+1 = LL tn + [R n , L], n = 0, 1, 2 . . . , where R n is an operator such that ordR n = ordL. This symmetry relation allows to find R n , hence L t n+1 , in terms of L and L tn . In [1], [2] this method was applied to construct recursion operators for Lax equations with different classes of scalar and shift operators, corresponding to field and lattice systems respectively. In [3] the method was applied to Lax equations on a Poissson algebra of Laurent series with the polynomial Lax function. Such equations give systems of hydrodynamic type. They were also discussed in [4]- [7]. The Hamiltonian structure of the Lax equation on a Poisson algebra was studed in [8]. Here we consider the Lax equation on the Poisson algebra Λ with a rational Lax function where ∆ 1 , ∆ 2 are polynomials of degree N and M, respectively, and N > M. The Lax equation is where the Poisson bracket is given by First we study the symmetry relation (3) for the rational Lax function. Then we give some examples. In particular, we find a recursion operator R for equation (6) with the Lax function which leeds to the system [4] S t = P x , The recursion operator is given by In [4] bi-Hamiltonian representation of this equation was constructed with Hamiltonian operators and These Hamiltonian operators are degenerate, so, one can not use them to find a recursion operator. But it turns out that they are related to the recursion operator R. One can easily check that the following equality holds We observe that the degeneracy in the bi-Hamiltonian operators is due to the following fact. Let p ′ = p + F then the Lax function becomes This means that we have two independent variabels P and G, where The equation corresponding to the Lax function (12) has been studied in [3]. To remove degeneracy one can take the Lax function as As an example we shall consider the equation (6) with the Lax function II. Symmetry Relation for Rational Lax Representation. Following [1] we consider the hierarchy of symmetries for the Lax equation III. Examples. Example 2. Let us consider the equation (8) given in introduction. Lemma 3. A recursion operator for (8) is given by (9). Proof. Using (17) for R n , we have R n = A + B p + Q . So, the symmetry relation (16) is To have the equality the coefficients of p and (p + Q) −3 must be zero. It gives the recursion relations to find A and B. Then the coefficients of p 0 , (p + Lemma 5. A recursion operator for (19) is given by Proof. Using (17) for R n , we have Therefore, the coefficients of p, p −2 , and (p + F ) −3 must be zero, it gives recursion relations to find A, B and C. Then the coefficients of p 0 , p −1 , (p + F ) −1 and (p + F ) −2 , give expressions for ∂S ∂t n , ∂P ∂t n , ∂Q ∂t n and ∂F ∂t n . ✷
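Since the mathematics of this section was flattened during extraction, the central symmetry recursion relation quoted inline above is re-displayed here in LaTeX for readability; it restates relation (3) of the text and adds nothing new.

\begin{equation}
  L_{t_{n+1}} \;=\; L\, L_{t_n} + \left[ R_n , L \right],
  \qquad n = 0, 1, 2, \dots,
  \qquad \operatorname{ord} R_n = \operatorname{ord} L .
\end{equation}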
IMPROVED SINGLE-FACE LAPPING BY USING AN AIR BEARING SUPPORTED LAP Lapping is a finishing process extensively used in the engineering world for those parts that require a fine finish surface such as gage blocks or flats for checking production parts. Nowadays, there are several machines in which single-face, double-face or rotating lapping processes can be carried out. Generally, all these machines use a rigid bearing system for supporting the lap shaft and a flexible coupling to connect that shaft to the motor. Therefore, their vibrations can affect the surface finishing of parts to be lapped. One way to reduce that effect and greatly improve the surface quality is to use an air bearing. The present work is aimed at investigating the lapping parameters when using an air bearing system. For comparison purposes, tests using a conventional rigid system were also conducted. Results show vibrations have a negative effect on flatness and roughness, reason for which lapping can take longer time to reach the same results obtained with an air bearing system. INTRODUCTION Lapping is an abrasive process carried out by means of loose abrasive in which the role of the cutting points is played by grains finding momentary support on the laps.Lapping has four main purposes: to obtain superior accuracy as to dimensions, to correct shape imperfections, to obtain smooth polished surfaces and to improve the fit between surfaces.Seating valves in gas engines is a common illustration of lapping, with the valve itself serving as the lap.The purpose of using a pneumatic bearing is to damp out vibrations that can introduce a negative effect on the surface quality.No reports of the application in lapping machines were found in the literature hence the concept appears to be novel. Lapping is considered as a low rate removal process that is carried out by rubbing the surface to be lapped against a mating form called a lap.Three properties are taken into account when a part is going to be lapped.The first one is related to hardness, giving rise to the first lapping rule: "the lap must be softer than the workpiece" [3].The second one refers to the pressure on the workpiece in order to assure contact with the lap.The last one refers to the kinematics of the relative motions between all the elements. The lap may be charged with a fine abrasive moistened with oil or grease.If a part is to be lapped to a final accurate dimension, a mating form of a softer material such as close-grain cast iron, copper, brass or lead is made.Aluminum oxide, silicon carbide, and diamond grits are used for lapping.Lapping is a material removal process similar to grinding.However, lapping requires considerable time to achieve higher surface finish specifications, in most cases more than 100 hours, depending on the lap and parts to be lapped.The material removal rate for a lapping process is lower than that for a grinding process.Material removal rates less that 5 µm/min have been reported [4].No more than 5 to 13 µm should be left for removal by this method.For most applications, grit sizes range from 100 to 800, depending on the finish desired.For most efficient lapping, speeds generally range from 150 to 240 m/min with pressures from 7 to 21 kPa for soft materials and 70 kPa and higher for harder materials [2]. 
DESCRIPTION In the rotating lapping machine discussed in this paper, a rigid structure supports the mechanism and the drive system, which consists of the lap, the pneumatic bearing and the motor.The lap used in this equipment is a 304.8 mm (12 in) diameter and 19 mm (0.75 in) thick plate made of ASTM 30 soft cast iron [5] and the conditioning rings are cylinders of 114 mm in diameter made of ASTM 4140 steel.The composition of the cast iron is 3.25 % C and 2 % Si with an average hardness of 210 H RB .A soft material is used in accordance with the fact that the lap should always be softer than the material to be lapped in order to avoid charging the work piece and cutting itself [2].Harder laps will often cause glazing and scratching while laps which are too soft cause a loss of flatness and parallelism and will produce grayer finishes.Care was taken to make the plate heavy enough so that it would not distort in use.In order to allow the abrasive fluid to flow, angular and radial grooves were machined and the surface was rectified.Workpieces to be lapped are supported by three holders mounted over the lap at 120º between them.The frictional force can be varied using weights mounted on the workpiece holders.In this prototype, the abrasive and liquid are manually supplied. The pneumatic bearing consists of two complementary conical parts.This geometry allows axial and radial loads in the same plane.The pneumatic bearing has two elements: a static (lower) one and a floating (upper) one [3].The static element of the pneumatic bearing has twelve spreads: six for the axial thrust component and six for the radial force component.The dynamic part, also called the floating element, has a conical surface and a hole in which the shaft couple to the gearbox is connected.The clearance between the two elements is 28 µm, when pressurized air is introduced at 490 kPa. The drive system consists of a 373 W (½ hp) 4-pole induction motor coupled to a gearbox with a 5:1 reduction ratio, thus the maximum rotational lap speed is 350 rpm.In order to adjust the speed for optimum lapping, a frequency changer is directly connected to the motor.Normally, lapping equipment operates at a fixed lapping speed which is of the order of 250 rpm [2].Other operating conditions that were varied to determine their effect on the stock removal rate [4,5] are the abrasive size and lapping pressure.Figure 1 shows a diagram of the rotating lapping machine and the conical air bearing. MATERIALS AND METHODS The test plan was divided into two parts.In the first one, lapping parameters when using an air bearing supported lap were studied.Tests were conducted using workpieces made of AISI 1045 steel of 12.2 mm diameter and 7.0 mm height, which were quenched and annealed so that and average hardness of 50 H RC was obtained.Both sides of these workpieces were surface grounded, which allows starting the tests with a roughness number of 7, equivalent to 6.3 µm.In the second part, a number of tests were performed to examine the effects of a rigid bearing on the surface quality compared to those obtained with an air bearing system.Rings made of AISI 4140 steel of 88.9 mm (3.5 in) outer diameter, 38.1 mm (1.5 in) inner diameter and 19 mm (0.75 in) were used.These rings were also quenched and annealed so that an average hardness of 46 H RC was obtained.After that, they were surface grounded to 6.3 µm. 
The abrasive compound for all the tests consisted of two parts: abrasive and vehicle.Calcined aluminum oxide was used as abrasive.In order to assess the effect of the grain size on the surface quality, abrasives no. 12 (5 µm) and no. 13 (1 µm) were chosen taking into account the hardness of materials to be lapped.On the other hand, machine oil MOBIL DTE-26 was used as vehicle since it is a light oil that dilutes easily the abrasive and allows having a uniform layer on the surface that remains adhered for a long time.One part of abrasive and 15.85 parts of oil by weight were carefully mixed and this concentration was kept constant for all the tests. Tests were conducted at a lap rotation speed of 300 rpm and three different pressures were used: 39.2 kPa, 98 kPa and, 157 kPa.The lapping times for abrasive no. 12 were 30, 60 and, 120 minutes while for abrasive no. 13 the times were 60, 120, and 240 minutes. Pressure and lapping time In order to determine the amount of stock removed, workpieces were carefully weighted before and after lapping.Figure 2 and 3 illustrate the stock removal amount as a function of lapping time for the two different aluminum oxide grain sizes, respectively.From figure 2, it can be observed that this amount is greater when lapping at high pressures combined with long times, which is convenient for the first lapping stages.A similar behavior is observed in figure 3; however, the stock removal is considerably smaller for a 1 µm grain size than that obtained with a 5 µm abrasive grain size.It is also observed that, for low pressures, the amount of stock removed correlates linearly with lapping time. Roughness Surface quality was evaluated in terms of roughness, for which a stylus surface analyzer was used.The mean height of profile irregularities (Rc) was measured considering 5 points (profile peaks and valleys) over the sampling length.Figures 4 and 5 illustrate roughness Rc as a function of lapping time with aluminum oxide no. 12 and no. 13 as abrasives, respectively.From figure 4, when using abrasive no.12, it can be observed that the surface quality tends to improve when increasing lapping time, but it is clear that a low pressure gives better control on the workpiece roughness, which is very useful for the final lapping stages; however, in this case a longer lapping time is required.On the other hand, abrasive no.13, it is observed that the higher lapping pressures result in lower roughness.From figure 5, low pressures require very long lapping times for removing marks from grinding.Also, in this case, roughness decreases with increasing lapping times no matter what pressure is used.The best surface quality was obtained for the highest pressure and the lowest grain size for a 240 min lapping time.The measured roughness in this case was 0.5 µm. Step lapping Step lapping tests in which surface ground workpieces were first lapped with a 5 µm grain size abrasive and then with a 1 µm abrasive were conducted.Lapping pressure as well as the fluid abrasive concentration and the lap speed were kept constant.The process time was 240 min.The best surface quality was obtained when working at 157 kPa pressure, for which a roughness Rc of 0.1 µm was measured.Figure 6 shows the surface images of the sequence used in this test for the best quality obtained.Marks from grinding are clearly observed in figure 6 (a), but after lapping with 5 µm grain size they are partly removed in 6 (b) and, finally, a smooth surface is obtained for the final stage in 6 (c). 4.2 Rigid bearing vs. 
air bearing Roughness Surface ground workpieces were lapped using both systems: air bearing (lapping type A) and rigid bearing (lapping type B).In both cases, the total lapping time was 4 hours.During that period, roughness measurements were taken every 30 minutes, at which times the workpieces and the lap were cleaned in order to remove all the metallic particles released by abrasive action.The lap rotation speed was fixed at 300 rpm and the pressure applied to the workpiece was due to its own weight.Figure 7 presents roughness Rc data with a cutoff of 0.25 mm as a function of lapping time with both 1 µm and 5 µm grain size abrasive.A difference of 0.5 µm is observed at 30 min for both abrasives.On the other hand, this difference is smaller, between 1.0 µm and 0.2 µm for longer lapping times.As this figure shows, longer process time is required with a rigid bearing than with an air bearing to reach similar roughness levels. Flatness Workpieces were lapped during 4 hours with aluminum oxide no.12. Flatness was measured by means of a coordinate measuring machine Zeiss MC 850.For that purpose, a mesh with a 10 mm distance between nodes was traced on workpieces and the relative heights between a reference point and the others were then measured.Table 1 presents the results from the flatness tests.A smaller angular deviation in both planes is observed when vibrations are damped out. Also, a smaller distance between the highest and the lowest points is observed with the use of an air bearing system. 1. Results from flatness analysis of lapped surfaces using the air bearing system (type A) and rigid bearing system (type B). CONCLUSIONS Tests were carried out to evaluate the surface quality obtained with an air bearing supported lap and to compare with results using a rigid bearing system.It was found that a rigid bearing system generates vibrations that have a negative effect on surface quality, especially on flatness and roughness, requiring longer lapping times than the air bearing supported lap to attain very high quality surfaces. Step lapping appears to be a good alternative to improve the surface quality when single-face lapping is required, but its application depends strongly on the material to be lapped as well as on the surface finish specifications. Figure 1 . Figure 1.Rotating lapping machine with a pneumatic bearing. Figure 2 .Figure 3 . Figure 2. Stock removal as a function of lapping time for aluminum oxide no. 12 (machine with pneumatic bearing). Figure 4 . Figure 4. Roughness Rc as a function of lapping time for aluminum oxide no.12(machine with pneumatic bearing). Figure 5 . Figure 5. Roughness Rc as a function of lapping time for aluminum oxide no.13 (machine with pneumatic bearing).
Individual Security as an Indicator of Relationship Security among Indian Couples in an Intimate Relationship Healthy adult attachment is significant for relationship continuity and satisfaction. While security has been studied as a concept in literature, limited research exists on the experience of it in the context of relationships in India. The purpose of the research was to understand and conceptualize the experience of security in the process of tool development. A qualitative approach from a phenomenological paradigm was used to obtain rich experiences, understand it in depth and maintain the uniqueness of the data. In two separate studies, 29 participants between the ages of 18 and 32, married and those in pre-marital relationships participated either in a focus group discussion or an individual interview to share their experiences. The present paper explores how one emergent theme pertaining to individual security influences significantly relationship security. Analysis revealed how past experiences, family upbringing can influence sense of being „valued‟ in the relationship and the factors they seek for in a partner to feel secure. In a rapidly westernizing culture, this information could help understand the complexities of relationships in modern India, aid in building better psychoeducational and counselling services in addition to theory building on the concept of relationship security. INTRODUCTION Divorce and broken relationships is a rising concern in today"s society. A study by Dommaraju and Jones [1] showed that the number of divorced and separated people have more than doubled over the past twenty years in India. Individual security leads to healthy interpersonal relationships, triggering feelings of warmth and safeness, sets up a confidence expectancy that others will be warm, accepting, trustworthy, supportive and so on [2]. Romantic relationships are valued for being able to fulfil attachment needs and providing us reassurance of worth, social integration, and opportunity for nurturance explains Weiss [3] in his Theory of Relational provisions. Theoretical background Attachment theories state that individuals in a relationship are strongly driven to seek safety and security in the context of it and that secure relationships tend to last longer and associated with higher levels of satisfaction as compared to insecure relationships [4,5]. These theories which have been the foundation of most relationship studies states that early attachment influences the development of emotional stability, mental health and satisfying close relationships [6]. Blatz [7], who was one among the first to emphasize security as a feature in relationships, looked at it, not as a single entity but as a combination of "mature dependent security" and "mature independent security", wherein an individual depends on another for certain needs in addition to themselves [8,9]. Research states that repeated negative experiences or patterns in the past due to caregiver/ partner rejection can lead to insecurity by causing the individual to anticipate similar outcomes in the future [10,2,7,11,12,2]. According to Blatz [7], the influence on adult relationships is mediated by the personality changes that take place due to disruptions of relationships with primary caregivers [13]. Majority of relationship studies derive their meaning from Western values and culture [14]. Theories and paradigms that work in one culture may not necessarily hold true in another. 
Mirecki & Chou [15], in their study among immigrant families, emphasized the importance of culture and society on attachment. Considering the clear shortage of studies that examine relationship security in depth and factors or dimensions that constitute it from an experiential perspective, the current study takes into consideration the changing cultural context of India while understanding how individual security influences relationship security. Majority of literature in India has looked at how early attachment serves a mediating role in influencing adolescent and adult relationships. Factors such as developing a healthy personal identity, healthy views on sexuality [16], social competence [17] and even factors that influence choice of romantic partner [18] have been studied, with very little focus on the concept of relationship security itself, and its constituents in adult relationships. It is important at this juncture, to differentiate between individual security and relationship security. Individual security is a state in which a person perceives his/her environment to be safe and free from harm and threat [19]. Individuals who are psychologically secure usually have high confidence, trust in them and significant others to meet their basic need [20]. Psychiatrists have found extensive elements of insecurity in maladjusted individuals which endanger mental health [21]. Relationship security on the other hand is interdependent and dyadic in nature. Significance and Implication Taking into consideration the current context of relationships in India, relationship security, which is sought after in most relationships can be synonymous with a number of factors. This paper emphasizes individual security as a constituent of relationship security and therefore, firstly, by understanding the importance of this, we can identify areas of focus in counselling. Secondly, the results will help trace back to the early influential factors such as parenting, peer relationships and even experiences in school, which can influence how secure an individual feels in an adult relationship. Awareness or insight as we know is the first stage of successful counselling. By focusing on individual factors such as self-esteem, one can indirectly influence relationship variables such as satisfaction and stability. Thirdly, data from the current research can help build Relationship Education Programs focusing on individuals having realistic expectations about their own relationships (Fincham, Stanley & Rhodes, n.d), equipping them with coping skills and resources to function effectively. This can also help parents; teachers and peers understand their role in shaping a secure individual. Approach Qualitative approach, being one of the emerging and legitimate methodologies in social enquiry [22], was used to obtain rich experiences of the participants. The epistemological paradigm of phenomenology was adopted since the author wished to operationalize relationship security taking into account the lived experiences and expectations of Indian couples. Braun & Clarke [23], outline a series of steps that help code data, tapping into the experiences of participants and the meanings they attach to them. The definition of relationship security was first operationally constructed for the purpose of this research based on focus group discussions and individual interviews. 
Participants were asked to come up with phrases that define relationship security which were then given to 5 experts and revised to help construct the semi-structured interview for the studies. The definition was compared to the emergent themes to ensure that it encompassed all relevant aspects of it. The resultant definition was "A perceived sense of safety, trust and stability in the relationship where expectations are met or will be met adequately within the boundaries of the intimate relationship". The paper will look at how individual variables influence relationship security. Participants Individuals were selected through purposive sampling and were between the ages of 18 and 32. This sampling strategy was used to select participants who could provide rich information to study relationship security in-depth [24]. The inclusion criteria outlined that participants were either currently in a relationship, or had to have been in a relationship within the past one year. A total of 29 English speaking participants, from Urban Bangalore, a city in the South of India, were chosen to share their experiences. Procedure Data to help conceptualize relationship security was analysed from three phases of interaction with participants. In the first phase, nine participants were selected (five female, four male) who were individually interviewed about their experience of security in their relationship. The inclusion criteria was that individuals had to currently be in a relationship with the same partner for a minimum period of 2 months [ 1 ], and between the ages of 18 and 25 (premarital group). The second phase followed a similar method, and was conducted with twelve participants (six male, six female) in a focus group discussion, where they asked what security in a relationship meant, their expectations and experience of the same. The participants in this group consisted of individuals between the ages of 18 to 32 (pre-marital group), who were either currently in a relationship or recently separated from their partner. Focus groups have over the years been proven to be an effective method of data collection [25]. The third phase involved individual interviews with ten individuals (six female, four male) who constituted the marital group. A demographic detail sheet was used to ensure that participants met the inclusion criteria of the study. Information was recorded following informed consent by the participants. Analysis and Validation Key concepts and themes were elicited from the data using an iterative step-by-step thematic analysis process, which is described to be an apt method for analysis of large qualitative data [23] and for examining the perspectives of different research participants. The following steps as outlined by Braun & Clarke [23] were incorporated 1. Familiarising with the data-This was achieved through active engagement with, and repeated reading of the 29 transcribed interviews and the reflexive journal. 2. Generation of initial codes-The data was organized using a data-driven approach as there was no single theory explaining the concept of relationship security. Coding was done manually as the researcher wished to engage more closely with the data set, being a relatively new area of research in India. Interesting aspects and repeated patterns were highlighted. Microsoft Excel 2008 was used for this purpose. 3. Search for Themes-The codes generated were organized into basic level themes that are outlined below. 
In this case, a broad (Global Theme) was "past influenced present individual security". Based on common meanings emphasized by participants, subthemes related to the sense of self, sense of fear and sense of control was identified. Member check was used to increase the trustworthiness of data obtained, by asking the participants to verify accuracy or report discrepancies in the researcher"s interpretation of their experiences [26]. 4. Review of themes-Since the sub-themes were named in the previous stage and justified by literature, no data was discarded. Internal homogeneity (similar meaning within themes) and external heterogeneity (marked differences between themes) as emphasized by Patton [24] was adhered to. Peer debriefing was done at this stage, where a colleague trained in qualitative methodology reviewed the research process, method of analysis and results obtained. Interesting perspectives to discussing sub-themes relating to Indian culture were outlined [27]. 5. Defining and Naming Themes-Themes were named once again focussing on a data-driven approach as there were a large set of participants through whose interviews, data saturation had been obtained. A reflexive journal was maintained during the interview and analysis process which helped increase self-awareness about the researchers" belief system, values and even opinions [27] on relationships in an Indian context. RESULTS AND DISCUSSION Five significant themes (Figure 1) emerged from the data, and this paper discusses one theme, namely, "Past influenced present relationship security", which outlines the different ways in which an individual functions in his/her present relationship, based on experiences with significant others in the past. In this context, the past included, upbringing, experiences with peers, parents, and even previous relationships. Fig-1: Thematic Network for the Relationship Security Construct Our critical inner voices that are created due to experiences with influential early care-givers, teachers, peers can be the root of our insecurity as adults [28]. For example, an absent parent can generate the thought "I am not loveable", or exaggerated praise, pampered upbringing can result in very high self-esteem, or a high sense of self-worth that the individual will seek confirmation/ negation for in future relationships as well. Discussed below are three sub-themes relating to the individual security theme The self and the 'self-affirming' relationship This sub-theme describes how individuals' selves are malleable in romantic relationships [29] which offer a platform for them to explore their insecurities. Marigold, Holmes & Ross [30], emphasize the self-affirming nature of romantic relationships. Evaluative judgements about oneself created by our life experiences and influenced by what others think of them, are incorporated in the self [31,32]. Being a generation that compares, evaluates and judges ourselves with great scrutiny, the social construction of the self is significantly observed. People assume that people, themselves included, have a stable essence or core that predicts their behaviour, that that they are matters for what they do, and that what they do reflects who they are [33]. Participant CK [34], said, It hurts me when he uses my weaknesses when we fight.. It is always, when you fall ill, who else will take care of you? ... 
I have accepted you even if you are like this… I mean it is true, but I already feel like as if I am not good enough and he keeps bringing this up or, this one time he mentioned about my dry skin…it was casual statement, but he knows it hurts me, so how can I feel ok in a relationship where he keeps making me feel bad about myself. For participant CK, her personal inferiorities led her to the belief that she could never be loved again. She said, "After he left me to start dating my friend... I realized it must be because of how I looked or that I was not sexually appealing". These statements support self-verification theories that state that individuals often seek confirmation of their previously held self-beliefs, that they have a strong desire to stabilize views about themselves [35]. Positive relationship outcomes (satisfaction and commitment) and relationship functioning is facilitated by positive self-image and self-concept [34]. People with low self-esteem, often doubt their value to their romantic partners and resist positive feedback from them [36], distancing themselves from their partner, fearing criticism or rejection [37]. One interesting trend that emerged from the data was the strong need among women to feel "valued and respected" in the relationship by their partner. The economic independence seen in women of the 21st century appears to be influencing their sense of value in a relationship and towards themselves, their increased power, freedom of choice etc. This is contrast to the "house-wife" or domestic/ submissive role that the woman in a conservative/ collectivistic culture has been known to play. The self and the 'self-enhancing' relationship Physical appearance related insecurities either due to peer comparison, media standards or past relationships play an important role in relationship satisfaction. Participants look towards their partner to make them feel beautiful. Participant SH (personal communication, February 10, 2014), said, "He is very appreciative... about my ability also he keeps going on and on about how attractive he finds me and I guess in that way he kind of reinforces or eliminates any insecurity that I have". Most of them felt that with their partner, they could be their real self and hence explore, come to terms with, and even work on their insecurities. According to Firestone [28] it is about not losing oneself in the process of adjusting to a relationship and accommodating to a new partner, but of becoming a better self. Participants who were independently secure preferred partners who were secure and with whom they could relate to on a more mature level. It was seen that often in the attempt to seek clarification for or negate self-beliefs in the context of the relationship, individuals experience a change in the self and their attitudes towards their self. Sense of fear Relationships, in particular, can stir up past hurts and experiences by awakening long buried insecurities, bringing up emotions we don"t expect. In a study done among divorced couples in Netherlands in 2000, "fear of intimacy" was indicated as one of the most common reasons [28,9] for the decision. Early experiences with closeness are important in the creation of healthy internal representations about intimacy. Being close to someone else can trigger certain emotions and critical inner voices and listening to this, can cause us to feel desperate toward our partner or pull back when things start to get serious. 
It can exaggerate feelings of jealousy or possessiveness, or leave us feeling rejected and unworthy. This pattern is more commonly seen among individuals with an insecure attachment style. When vulnerable behaviour in a romantic relationship is punished, intimacy might be feared, leading to less satisfactory relationships [38]. Ruvola, Fabin & Ruvolo [39] showed that people experiencing a break-up became less willing to trust others, less comfortable being close to others, and less secure about initiating future relationships. As adults, insecure individuals feel that others are reluctant to get as close as they would like, and they are preoccupied with the possibility of rejection [12]. This is a strategy to avoid hurt and any changes to the "sense of self". "Fear of abandonment" also strongly fuels insecurity and is considered to be one of the underlying dimensions of anxious attachment patterns [1,20]. The fear of losing love, or of being rejected or abandoned, is a central issue for many clients that can cause hypervigilance [39], excessive questioning and constant checking behaviours [40] in relationships. Rejections from early caregivers feed insecure working models. The child believes that the world is unpredictable, even hostile [8], and a change in the sense of self [29] is experienced when a romantic relationship ends, affirming this belief. Described as rejection sensitivity, this in turn feeds the anxiety to expect, readily perceive, and overreact to real or imagined rejection. A third fear that emerged is the "fear of engulfment", where participants felt that they might lose themselves in the relationship if they spent too much time on it and did not get time for themselves. Most individuals who were well educated, working and independent emphasized the need for "privacy" and "space". Sense of control The desire to feel secure can sometimes cause people to take actions that would reduce the actual damage to their sense of self. According to Blatz [7], this can occur if a person reasons a situation to be sufficiently familiar, or if a learned pattern of behaviour gives him/her the reassurance to deal with any threat to the sense of stability, or the confidence that some factor will prevent him/her from suffering unacceptable consequences. Participant BG (personal communication, April 2018) said, "She doesn't like it when I meet my friends, even if they are guys; it's like I have to spend all my free time with her. She wants to control my social life completely, know who I am talking to, or who I am messaging, to the point that she was all I had, and it was in that phase that I went into depression… and when we broke up, I told her she has to get over her insecurities or she will not have a proper relationship at all." Relationship satisfaction, according to Spielman, McDonald & Tackett [41], involved active avoidance or even planned avoidance of perceived social threats. This may mean expecting the partner to give up his/her party life, monitoring social networking sites, or even cutting off friendships with same-sex individuals perceived as a threat. Individual insecurities can reflect personal inferiorities and fears. Participants also shared the precautionary behaviours, emotions and attitudes they engaged in to stabilize the relationship. Safety and relationship-monitoring behaviours indicated a need to control aspects of the relationship. Jealousy, often related to insecurity, equips the individual to protect their relationship if they deem it worthy [42].
Participant EN (personal communication, March 17, 2014) said, "So I keep checking his Facebook... See he can chat with girls and all... but I want to see them… so we keep checking each other's accounts...". Lindner [43], in a study on Facebook stalking, reported it as a sign of jealousy, insecurity and decreased relationship satisfaction. The participant further added that she occasionally deletes messages from girls on his phone before he sees them, thus preventing contact with a perceived threat. Increased alertness through monitoring and warding off potential rivals are behaviours associated with jealousy [42]. SUMMARY In the context of relationships in India, where the partner and the family have been known to play an influential role, security is also determined by the individual's sense of self, which is influenced by his/her life experiences tracing all the way back to experiences with early caregivers. In this study, it was observed that participants often use the relationship as a platform to work on their developed insecurities, either seeking to affirm or to enhance their sense of self. Individuals have a natural tendency to crave social rewards in relationships and report more satisfaction if there is a "feel good" factor in the same. While one cannot change the past, in addition to working on relationship-dependent variables, it is also possible and more realistic to focus on methods to increase independent variables such as self-esteem. Partner appreciation, for example, is important to those who feel the need to dispel past-influenced insecurities about the self [41]. Counselors and psychologists can help individuals have a positive view of themselves and help them set anchors in their independent achievements as well. Those with higher emotional intelligence, for example, found their partner to be less critical and more supportive of them [44]. Working on emotional intelligence can therefore help create more resilience, or help a person deal with variables that threaten relationship security in a more mature manner. To help improve relationship security, therefore, even in the absence of the partner or under constraints in working with relationship variables, one can focus on individual variables such as goal orientation [44]. Emphasising parenting techniques that build a healthier inner critic, increasing awareness about mature conflict-resolution strategies, and building healthy relationships in schools and preventing bullying and other experiences that can damage independent self-esteem, can all help strengthen the individual in the relationship. Further studies can focus on identifying independent factors that can influence the sense of self and methods to enhance the same. Cross-cultural studies can compare the nature of these variables between individualistic and collectivistic societies, taking into account differences in parenting styles, schooling systems and the nature of premarital and marital relationships.
DFT Study of the Structure, Reactivity, Natural Bond Orbital and Hyperpolarizability of Thiazole Azo Dyes The structure, reactivity, natural bond orbital (NBO), linear and nonlinear optical (NLO) properties of three thiazole azo dyes (A, B and C) were monitored by applying B3LYP, CAM-B3LYP and ωB97XD functionals with 6-311++G** and aug-cc-pvdz basis sets. The geometrical parameters, dipole moments, HOMO-LUMO (highest occupied molecular orbital, lowest unoccupied molecular orbital) energy gaps, absorption wavelengths and total hyperpolarizabilities were investigated in carbon tetrachloride (CCl4) chloroform (CHCl3), dichloromethane (CH2Cl2) and dimethlysulphoxide (DMSO). The donor methoxyphenyl group deviates from planarity with the thiazole azo moiety by ca. 38°; while the acceptor dicyanovinyl, indandione and dicyanovinylindanone groups diverge by ca. 6°. The HOMOs for the three dyes are identical. They spread over the methoxyphenyl donor moiety, the thiazole and benzene rings as π-bonding orbitals. The LUMOs are shaped up by the nature of the acceptor moieties. The LUMOs of the A, B and C dyes extend over the indandione, malononitrile and dicyanovinylindanone acceptor moieties, respectively, as π-antibonding orbitals. The HOMO-LUMO splittings showed that Dye C is much more reactive than dyes A and B. Compared to dyes A and B, Dye C yielded a longer maximum absorption wavelength because of the stabilization of its LUMOs relative to those of the other two. The three dyes show solvatochromism accompanied by significant increases in hyperpolarizability. The enhancement of the total hyperpolarizability of C compared to those of A and B is due to the cumulative action of the long π-conjugation of the indanone ring and the stronger electron-withdrawing ability of the dicyanovinyl moiety that form the dicyanovinylindanone acceptor group. These findings are facilitated by a natural bond orbital (NBO) technique. The very high total hyperpolarizabilities of the three dyes define their potent nonlinear optical (NLO) behaviour. Introduction In recent decades, researchers have become interested in the fabrication of metal surfaces that are functionalized with organic chromophores for tailoring their electrical, magnetic, optical and electroptical properties [1][2][3][4][5]. The adjustment of their key surface properties could be achieved by proper molecular design and/or by fine control of their film structure at the molecular level [6]. The proper chromophores suitable for these criteria could occur as organic second-order nonlinear optical (NLO) compounds having an electron donor (D) and electron acceptor (A) separated by a π-conjugated spacer (D-π-A) [7]. These structural rearrangements facilitate asymmetrical ground-state charge transfer emanating from the donor group (D) through the π-linker to the acceptor moiety (A) under the influence of an electric field [8]. The last few decades have witnessed the fabrication of thermally and photochemically stable NLO benzenoid chromophores with potent hyperpolarizabilities [9][10][11]. Toward the end of last century, a series of single-substituted thiazole ring and donor-acceptor thiazole-containing chromophores were synthesized, characterized and their superior hyperpolarizabilities compared to oxazoles, imadazoles and thiophenes were obtained [12,13]. In 2004, semi-empirical and ab initio calculations were performed on a series of push-pull π-conjugated styryl benzothiazoles dyes [14]. 
At the end of last decade, benzothiazolium salts having dimethylamino and diphenylamino electron-donating and nitro or cyano electron-withdrawing groups were synthesized and studied for their NLO properties both theoretically and experimentally by using Hyper-Rayleigh scattering [15]. In 2011, the linear and nonlinear properties of a series of triphenylamine-derived benzothiazoles were tuned by incorporating some electron-withdrawing groups [16]. Extremely polarizable NLO chromophores were manufactured also by incorporating the five-membered heteroaromatic thiazole, pyrrole or thiophene rings [17][18][19] and/or thiazole-annulated heteroaromatics, such as benzobisthiazole [20]. It has been established theoretically and experimentally that the location of the heteroatoms within heteroaromatic rings and the relative position of electron-donor and electron-acceptor groups could play vital roles in dictating the NLO activity of these chromophores [14][15][16][20][21][22][23]. El-Shishtawy et al. [31] have synthesized and investigated, experimentally, the structure and NLO properties of three thiazole azo dyes with a lateral methoxyphenyl donor group coupled with indandione, malononitrile and dicyanovinylindanone acceptor moieties. They showed that the three dyes are thermally stable and have strong absorption wavelengths whose maxima are dictated by the nature of the acceptor groups. In addition, they have also demonstrated that the dye with the dicyanovinylindanone acceptor group is an extremely promising NLO devise with a nonlinearity µβ coefficient amounting to three times those of the other two dyes. In this paper, we endeavor to further complement their findings theoretically and computationally. Density functional theory (DFT) and Time-dependent density functional theory (TD-DFT) with traditional hybrid and long-range corrected (LC) functionals and moderate basis sets will be used to monitor their photochromic and NLO properties. The intramolecular charge transfer (ICT) of the three dyes, from the methoxyphenyl donor toward the three-acceptor groups, will be studied by natural bond orbital (NBO) technique. Geometrical Analysis The structure of the three dyes A, B and C were optimized to global minima using B3LYP, CAM-B3LYP and ωB97XD functionals with aug-cc-pvdz and 6-311++G** basis sets (See Figure 1). Some selected bond lengths and dihedral angles of these optimization procedures together with those extracted from the crystal structures of dyes B and C [31] were listed in Table 1. In excellent agreement with the available crystallographic data [31], all the geometry optimizations of the dyes indicated that the donor methoxyphenyl group deviates from planarity with the thiazole azo moiety by ca. 38 • ; while the acceptor dicyanovinyl and dicyanovinylindanone groups diverge by ca. 6 • . As Table 2 shows, an overall excellent agreement between the measured [31] and calculated values is met. This is indicated by the small absolute errors. These findings agree satisfactorily with similar theoretical computations [32]. The N6-N7 crystallographic [31] and calculated bond lengths of dyes B and C are in overall good agreement with each other, having average errors maxima of 0.031 and 0.038 Å, respectively. Dye B crystallographic S4-C1 bond length of 1.744 Å was exactly reproduced by ωB97XD/6-311++G** level of theory but with an error of 0.012 Å in dye C. 
Generally, the bond lengths computed by the different DFT functionals with the aug-cc-pvdz basis set are slightly longer and much closer to the crystallographic values [31] compared to those estimated by the 6-311++G** basis set. The multiple bond character of the C1-C33 bond, with an average value of 1.426 Å, is indicated by it being shorter than that of 2-methylthiophene [33] by ca. 0.079 Å. That is, C1 and C33 are nearly sp² hybridized. The sp² hybridization environment around the bridge between the acceptor groups and the thiazole azo moiety is also facilitated by the value of the angle C33-C1-C2 being ca. 124°. An unexpected agreement between the crystallographic [31] and the computed dihedral angles occurred satisfactorily, although the two measurements were carried out in different phases [34]. Frontier Molecular Orbitals (FMOs) The frontier molecular orbitals (FMOs) are formed mainly by the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO). The FMOs participate strongly in investigating the electrical and chemical properties of substrates [35]. They affect these properties through forming their polarities, together with their abilities for absorbing light. This means that they act as donor and acceptor orbitals [36], respectively. The HOMO and LUMO orbitals, the chemical hardness (η), the electronic chemical potential (µ) and the global electrophilicity index (ω) using the elected DFT functionals and basis sets are listed in Table 3. Figure 2 depicts the HOMO and LUMO orbitals for the studied dyes (A, B and C). On the one hand, the HOMOs for the three dyes are identical. They spread over the methoxyphenyl donor moiety, the thiazole and the benzene rings as π-bonding orbitals. On the other hand, the nature of the acceptor moieties dictated the energy and the shape of the LUMOs. The LUMOs of the A, B and C dyes extend over the indandione, malononitrile and dicyanovinylindanone acceptor moieties, respectively, as π-antibonding orbitals. The strengths of these acceptor groups are reflected in the stabilization of the LUMOs [31], in excellent agreement with our results shown in Table 3. The kinetic stability of the three dyes can be monitored by the HOMO-LUMO energy gaps [37]. This means that smaller HOMO-LUMO splittings lead to lower kinetic stability and higher chemical reactivity. The reverse is equally true. In addition, it is energetically favorable to move electrons easily between high-lying HOMOs and low-lying LUMOs [38]. All our results showed that Dye C is much more reactive than dyes A and B. However, dyes A and B are of comparable reactivity. With the exception of the ωB97XD/6-311++G** model chemistry, all other elected levels of theory revealed that Dye A is slightly more reactive than Dye B.
Table 1. Some selected bond lengths (Å) and dihedral angles (degrees) of the optimized structures of the A (top 6 lines), B (lines 7 to 13) and C (bottom 7 lines) dyes, which were estimated by using different DFT functionals with the 6-311++G** and aug-cc-pvdz basis sets. The crystal structures for the B and C dyes are listed for comparison purposes. Table 2. Absolute and average absolute errors of some selected bond lengths (in Å) and torsional angles (in deg.) for the thiazole azo dyes B (top 7 lines) and C (bottom 7 lines), which were estimated by using different DFT functionals with the 6-311++G** and aug-cc-pvdz basis sets, as compared to the experimental crystal data. The crystal structure of A is not available. Table 3. The HOMO (highest occupied molecular orbital) and LUMO (lowest unoccupied molecular orbital) orbital energies (eV), the energy gaps (E.G./eV), the dipole moments (D.M./Debye), the chemical hardness (η/eV), the electronic chemical potential (µ/eV), the global electrophilicity index (ω/eV) and the total hyperpolarizabilities (βtot/au) for the gas-phase dyes A, B and C, which were calculated by applying the B3LYP, CAM-B3LYP and ωB97XD functionals with the 6-311++G** and aug-cc-pvdz basis sets. For comparison, the values for gas-phase p-nitroaniline (pNA) are given. Likewise, the chemical hardness (η) is useful in studying the stability and reactivity of compounds. It is formulated in terms of the energies of the HOMOs and LUMOs [40]: η = (E_LUMO − E_HOMO)/2 (1). This formula indicates that soft compounds have a small chemical hardness, while hard ones have large HOMO-LUMO splittings.
In other words, soft compounds have small excitation energies, that is, their electron densities are easily altered, while hard ones have large excitation energies, i.e. their electronic densities are difficult to modify [40]. Referring back to Table 3, it can be seen that Dye C is the softest, while Dye B is the hardest, among the three dyes. Moreover, the electronic chemical potential (µ) shows the escaping tendency of electrons in compounds [41] and is given by Equation (2) [42,43]: µ = (E_HOMO + E_LUMO)/2 (2). As Table 3 shows, the order of the electronic chemical potential for the three dyes is as follows: B > C > A. The global electrophilicity index (ω) estimates the stabilizing energy when a surrounding environment supplies a chemical entity with an additional electronic charge. The index (ω) relates to the electronic chemical potential (µ) and the chemical hardness (η) through Equation (3) [41]: ω = µ²/(2η) (3). The values of the global electrophilicity indexes shown in Table 3 for the three dyes indicate that Dye A is the strongest nucleophile, while Dye C is the strongest electrophile, among the three substrates.
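The three global descriptors above follow directly from the frontier-orbital energies, so they can be reproduced from any quantum-chemistry output; the short Python sketch below implements Equations (1)-(3). The HOMO/LUMO values in the example are placeholders for illustration only, not the results reported in Table 3.

def reactivity_descriptors(e_homo, e_lumo):
    """Energy gap, chemical hardness (Eq. 1), chemical potential (Eq. 2)
    and global electrophilicity index (Eq. 3), all in eV, from frontier
    orbital energies given in eV."""
    gap = e_lumo - e_homo
    eta = gap / 2.0                      # chemical hardness
    mu = (e_homo + e_lumo) / 2.0         # electronic chemical potential
    omega = mu ** 2 / (2.0 * eta)        # global electrophilicity index
    return {"gap": gap, "hardness": eta, "chemical_potential": mu,
            "electrophilicity": omega}

if __name__ == "__main__":
    # hypothetical frontier energies (eV) of a push-pull chromophore
    print(reactivity_descriptors(e_homo=-5.9, e_lumo=-3.4))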
UV-Visible Spectral Analysis It has become general knowledge that the π→π* and n→π* electronic transitions in π-conjugated organic compounds lead to UV-Vis. spectra [44]. They are due to electron motions between the FMOs, like the promotion of an electron from the HOMO to the LUMO. Dyes A, B and C have many π-bonds in the thiazole azo, the two benzene and the indane rings, together with the lone pairs of the nitrogen, oxygen and sulphur atoms. The experimental and theoretical maximum absorption wavelengths (λmax) for dyes A, B and C in chloroform are depicted in Table 4. The estimated values are computed by the time-dependent density functional theory (TD-DFT) [45] procedure using the polarizable continuum model (PCM) method [46] with the TD-B3LYP/6-311++G** model chemistry. The experimental maximum wavelength bands for the A, B and C dyes in chloroform showed up at 623, 619 and 687 nm, respectively [31]. The longest wavelength, due to Dye C, is ca. 10% longer than the average wavelength of the other two dyes. This difference in trend was nearly reproduced theoretically using the elected levels of theory. That is, our calculated maximum wavelengths, in chloroform, of Dye C are longer than those of dyes A and B by 5%-12%. In excellent agreement with El-Shishtawy et al. [31], the longer maximum wavelength of Dye C resulted from its more stabilized LUMO relative to the LUMOs of dyes A and B. It can be concluded that the extra stability of the LUMO of Dye C (see Table 3) originates from the stronger electron-withdrawing ability of the dicyanovinylindanone group compared to those of the indandione and dicyanovinyl moieties [31]. The UV-Vis. spectra of the three dyes (A, B and C) are simulated in CCl4, CHCl3, CH2Cl2 and DMSO solvents applying the TD-CAM-B3LYP/6-311++G** level of theory. The results are shown in Table 5. The interactions between these solvents and the dyes render them fairly solvatochromic, with red shifts of 0.063, 0.065 and 0.039 eV in the π→π* bands of A, B and C, respectively, on moving from the less polar CCl4 to the highly polar DMSO. As Table 5 shows, the HOMOs and LUMOs of the dyes are further stabilized relative to the polarity of the solvents, with the latter being greatly affected. It seems, then, that the solvatochromic behaviour could have resulted mainly from the relatively strong interactions between these solvents and the indandione, malononitrile and dicyanovinylindanone acceptor moieties [47]. Table 4. The dipole moments (D.M./Debye), the energy gaps (E.G./eV), the total hyperpolarizabilities (βtot × 10−28/esu) and the maximum absorption wavelengths (λmax/nm) for the chloroform-solvated dyes (A, B and C), which were estimated by utilizing the B3LYP, CAM-B3LYP and ωB97XD functionals with the 6-311++G** and aug-cc-pvdz basis sets. Some related experimental values of the three dyes in chloroform are also listed for comparison purposes. Nonlinear Optical (NLO) Properties Experimentalists and theoreticians have adopted many different methods and conventions for the determination of hyperpolarizability. This situation has brought about some form of ambiguity when theoretical data are compared with measured ones [48]. In this contribution, we compute the total hyperpolarizability, βtot, by the relation βtot = (βx² + βy² + βz²)^(1/2) (4), where βi = βiii + βijj + βikk, with i, j, k running over x, y, z (5). The total hyperpolarizabilities in atomic units (a.u.) are related to the electrostatic units (esu) by the relation: 1 a.u. = 8.6393 × 10−33 esu. The computed dipole moments, energy gaps and total hyperpolarizabilities for the gas-phase dyes A, B and C, together with those of gas-phase p-nitroaniline (pNA) using the B3LYP/6-311++G** model chemistry, are listed in Table 3. pNA is selected for comparison purposes because it is a typical example of a donor-acceptor charge-transfer species with high hyperpolarizability that has been exposed to both experimental [39] and theoretical [49] investigations.
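Equations (4) and (5), together with the conversion factor quoted above, can be evaluated directly once the components of the static first-hyperpolarizability tensor have been extracted from the output of the calculation. The Python sketch below assumes Kleinman symmetry and uses purely illustrative tensor values, not the values computed for dyes A, B and C.

import math

AU_TO_ESU = 8.6393e-33  # 1 a.u. of first hyperpolarizability in esu

def beta_total(beta, to_esu=False):
    """Total hyperpolarizability from the tensor components (a.u.),
    following Eqs. (4)-(5); beta is a dict keyed by 'xxx', 'xyy', ..."""
    bx = beta["xxx"] + beta["xyy"] + beta["xzz"]
    by = beta["yyy"] + beta["yxx"] + beta["yzz"]
    bz = beta["zzz"] + beta["zxx"] + beta["zyy"]
    btot = math.sqrt(bx ** 2 + by ** 2 + bz ** 2)
    return btot * AU_TO_ESU if to_esu else btot

if __name__ == "__main__":
    # hypothetical tensor elements in a.u.
    example = {"xxx": 1200.0, "xyy": 150.0, "xzz": 30.0,
               "yyy": -80.0, "yxx": 40.0, "yzz": 10.0,
               "zzz": 5.0, "zxx": 2.0, "zyy": 1.0}
    print(beta_total(example), beta_total(example, to_esu=True))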
In Table 3, it can be easily seen that dyes A and B have nearly equal total hyperpolarizabilities, which amount to 14- and 12-fold that of pNA, respectively, using the same level of theory. In contrast, the total hyperpolarizabilities of Dye C amount to twice those of dyes A and B and up to ca. 25-fold that of pNA. In Table 4 are listed the estimated dipole moments, energy gaps and total hyperpolarizabilities of the chloroform-solvated dyes (A, B and C) using the elected levels of theory, together with their experimental counterparts in the same solvent. Apparently, the B3LYP functional overestimated the total hyperpolarizabilities of both the gas-phase and the solution substrates compared to those obtained from the CAM-B3LYP and ωB97XD counterparts, while the latter two functionals yielded comparable values [49]. In addition, the total hyperpolarizabilities of the solvated dyes are ca. three-fold greater than their gas-phase peers [50]. This phenomenon could be related to the increase of the dipole-moment change between the ground and excited states [15] (cf. Tables 3 and 4). Our calculated total hyperpolarizabilities of the three dyes in chloroform are in good agreement with the measured ones in the same solvent [31]. In addition, the estimated total hyperpolarizabilities of the three dyes in the CCl4, CHCl3, CH2Cl2 and DMSO solvents using the CAM-B3LYP/6-311++G** level of theory, listed in Table 5, are dictated by the polarity of these solvents; that is, the solvatochromic behaviours of the dyes have brought about pronounced NLO characteristics [47]. As Tables 3-5 show, the values of the total hyperpolarizabilities of the three dyes (A, B and C) are inversely proportional to the magnitude of the energy gaps [51-53]. This is because narrow energy gaps enhance intramolecular charge transfer and hence produce larger total hyperpolarizabilities. In addition, the total hyperpolarizability of Dye A is slightly larger than that of Dye B, in spite of the fact that the dipole moment of the latter is ca. twice that of the former. The enhanced total hyperpolarizabilities of dyes A and C could be due to the longer extension of the π-conjugation of the indandione and dicyanovinylindanone moieties compared to that of the dicyanovinyl group, while Dye B competes with them through the stronger electron-withdrawing ability of the dicyanovinyl moiety [31]. Natural Bond Orbital (NBO) Analysis The second-order perturbation energies (E(2)) are commonly used as a quantitative tool for investigating bonding and antibonding interactions through the natural bond orbital (NBO) technique [54-57]. The second-order perturbation energies are given by Equation (6): E(2) = qi F²(i, j)/(εj − εi) (6), where the term F²(i, j) represents the off-diagonal matrix elements, qi gives the donor orbital occupancy, and εi and εj estimate the donor and acceptor orbital energies, respectively. The quantities E(2) evaluate the magnitude of the interaction between the donor and acceptor orbitals. They therefore show the extent of delocalization throughout the chemical species. Table 6 lists the most influential interactions between the bonding or lone-pair Lewis-type occupied NBOs and the antibonding non-Lewis unoccupied NBOs of dyes A, B and C. They were estimated by using the HF/6-31+G*//B3LYP/6-311++G** level of theory. The n→π* and π→π* interactions define the charge transfer from the donor methoxyphenyl group towards the acceptor moieties and contribute effectively to the stabilization of the three dyes.
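For reference, Equation (6) can be evaluated as in the short Python sketch below; the result is converted from hartree to kcal/mol, the units used in Table 6. The donor occupancy, Fock matrix element and orbital energies in the example are hypothetical and are not taken from the NBO output of the three dyes.

HARTREE_TO_KCAL = 627.509

def second_order_energy(q_i, f_ij, eps_i, eps_j):
    """Second-order perturbation (hyperconjugative) energy of Eq. (6):
    E(2) = q_i * F(i,j)^2 / (eps_j - eps_i), inputs in atomic units,
    result returned in kcal/mol."""
    return q_i * f_ij ** 2 / (eps_j - eps_i) * HARTREE_TO_KCAL

if __name__ == "__main__":
    # hypothetical donor/acceptor NBO data
    print(second_order_energy(q_i=1.98, f_ij=0.12, eps_i=-0.35, eps_j=0.02))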
Table 6. Some selected, most efficacious second-order perturbation (E(2)) assessments of the hyperconjugative energies (kcal/mol) that trace the charge transfer from the methoxyphenyl donor group to the indandione, dicyanovinyl and dicyanovinylindanone acceptor moieties of the thiazole azo dyes A, B and C, respectively. They were computed by applying the HF/6-31+G*//B3LYP/6-311++G** level of theory. The huge charge transfer that occurred in Dye C explains its larger total hyperpolarizability compared to those of A and B. On the one hand, this is because Dye C has both the long π-conjugation extension of the indanone moiety and the strong electron-withdrawing ability of the dicyanovinyl group. On the other hand, dyes A and B have either the long π-conjugation or the strong electron-withdrawing potency. The large dipole moment and the strong electron-withdrawing ability of Dye B yielded total hyperpolarizabilities comparable to those of Dye A, which has a smaller dipole moment. The competitiveness of Dye A originates from the long π-conjugation extension associated with the indandione ring [31]. The intramolecular charge transfer associated with the πC1-C2→π*C35-C50 interaction is effectively shown by the C1-C33 bonds being multiply bonded (1.419-1.431 Å). All these findings are in excellent agreement with those reported by El-Shishtawy et al. [31]. Computational Details The Gaussian09 suite of programs [58] was used to perform the quantum mechanical molecular orbital calculations of the three dyes, A, B and C. The GaussView [59] and Chemcraft [60] software packages were applied to monitor the structure and properties of the studied dyes. A number of density functional theory (DFT) [61] functionals with the triple-zeta basis set carrying polarization functions at the hydrogen and carbon atoms (6-311++G**) [62-64] and the augmented correlation-consistent polarized double-zeta basis set (aug-cc-pvdz) [65] were tested. They were used to optimize the geometrical structures of those dyes for investigating their reactivity, hyperpolarizability, and linear and nonlinear optical (NLO) properties. The DFT functionals comprise the Becke three-parameter Lee-Yang-Parr exchange-correlation hybrid functional (B3LYP) [66,67], the Coulomb-attenuating method that includes the hybrid features of B3LYP and the long-range correction (CAM-B3LYP) [68], and the long-range corrected (LC) hybrid density functional with empirical atom-atom dispersion corrections (ωB97XD) [69]. The Gaussian NBO software [56,70] was applied to perform a natural bond orbital (NBO) analysis for the three dyes. The time-dependent density functional theory (TD-DFT) [71] and the polarizable continuum model (PCM) [72] technique were applied to compute the UV-Vis. spectra of the CCl4-, CHCl3-, CH2Cl2- and DMSO-solvated dyes using the elected levels of theory. The frontier molecular orbitals (FMOs) and the global chemical reactivity descriptors of the three dyes were examined by using all tested model chemistries. Conclusions The geometry, reactivity, and linear and nonlinear optical behaviour of three donor-acceptor thiazole azo dyes were monitored by DFT calculations. The donor methoxyphenyl group and the acceptor indandione, malononitrile and dicyanovinylindanone moieties, incorporated into dyes A, B and C, respectively, were not coplanar with the thiazole azo spacer group. The HOMO-LUMO analysis showed that Dye C is more reactive than both A and B. This property ties up nicely with the longer absorption wavelength of Dye C.
This linear behaviour is assigned to both the long π-conjugation extension of the indanone ring and the strong electron-withdrawing ability of the dicyanovinyl moiety. The three dyes showed solvatochromism on moving from CCl4 to DMSO solvents. The solvatochromic behaviours were reflected in pronounced NLO properties. The calculated total hyperpolarizabilities of Dye C were more than two-fold those obtained for A and B and ca. 25-fold that of pNA. An NBO investigation supported these results. The enhancement of the linear and nonlinear behaviour of Dye C originates, in part, from the πC33-C50→π*C51-C63 transition, which stabilized this dye by 37.99 kcal/mol. The delocalization energies of A and B are nearly comparable, with Dye A having a slightly higher amount. This is because Dye B lacks the πC33-C50→π*C51-O64 and πC33-C50→π*C52-O63 transitions, which stabilized Dye A by 67.35 kcal/mol, while Dye A lacks the πC33-C50→π*C51-N53 and πC33-C50→π*C52-N54 transitions, which provided 59.17 kcal/mol of stabilization for Dye B. All our theoretical findings are in excellent agreement with experiment [31] and affirm the use of the three dyes as potential NLO devices.
A combined finite volume-finite element scheme for a low-Mach system involving a Joule term In this paper, we propose a combined finite volume-finite element scheme for the resolution of a specific low-Mach model expressed in the velocity, pressure and temperature variables. The dynamic viscosity of the fluid is given by an explicit function of the temperature, leading to the presence of a so-called Joule term in the mass conservation equation. First, we prove a discrete maximum principle for the temperature. Second, the numerical fluxes defined for the finite volume computation of the temperature are efficiently derived from the discrete finite element velocity field obtained by the resolution of the momentum equation. Several numerical tests are presented to illustrate our theoretical results and to underline the efficiency of the scheme in terms of convergence rates. Introduction Variable-density and low-Mach-number flows have been widely investigated over the last decades. Indeed, they arise in plenty of physical phenomena in which the sound wave speed is much faster than the convective characteristics of the fluid: flows in high-temperature gas reactors, meteorological flows, combustion processes and many others. In this work, we are interested in a specific model derived from the usual low-Mach one for a calorically perfect gas, coming from an asymptotic expansion of the variables with respect to the Mach number M in the compressible Navier-Stokes equations (see [34]). For the usual low-Mach model, the local-in-time existence of classical solutions in Sobolev spaces is established in [23]. As observed in [33], a small perturbation of a constant initial density provides global existence of weak solutions in the two-dimensional case. The originality of the model considered here lies in the dynamic viscosity of the fluid being explicitly given as a function of the temperature, as introduced in [4] and generalized in [5]. In this recent work, the authors establish the global existence of weak solutions in the three-dimensional case with no smallness assumption on the initial velocity. Thanks to a change of variables, we obtain a system in which the velocity field is divergence-free, leading in return to the presence of a nonlinear, so-called "Joule term" in the mass conservation equation, expressed in terms of the temperature. In [8], some theoretical results are obtained on the local-in-time existence of strong solutions in the three-dimensional case. We also mention [19], where the authors study the local and global existence in critical Besov spaces, assuming that the initial density is close to a constant and that the initial velocity is small enough. The formulation of this model is close to the so-called ghost effect system, considered in [30,32], where a thermal stress term is added to the right-hand side of the momentum equation. The local well-posedness of classical solutions is established in [32] for 2D or 3D unbounded domains. A local well-posedness result for strong solutions is proved in [30], where the authors also give the existence and uniqueness of a global strong solution in the two-dimensional case. From a numerical point of view, many authors compute flows in the low-Mach-number regime. We refer only to the so-called pressure-based methods, widely used to compute incompressible flows (see e.g.
[1,2,20,29,31]), but there exist also the so-called density-based methods, widely used to compute supersonic or transonic flows, and recently adapted to the low-Mach regime (see [27,28]). In previous contributions on incompressible variable-density flows [6,9,10] or on low-Mach flows with large variations of temperature [7], a combined finite volume-finite element scheme was proposed. Based on a time splitting, this combined method allows one to solve the mass conservation equation by a finite volume method, whereas the momentum equation associated with the divergence-free constraint and the temperature equation are solved by a finite element method. It allows, in particular, to preserve the constant density states and to ensure the discrete maximum principle. In the present work, following the same idea, the nonlinear temperature equation is solved by a finite volume method, whereas the velocity equation associated with the divergence-free constraint is solved by a finite element one. The main contribution of this paper is twofold. First, we prove a discrete maximum principle on the temperature (see Theorem 3.1), similarly to the solution behavior at the continuous level. Second, we establish a footbridge between the finite volume fluxes and the finite element velocity field (see subsection 3.4), to ensure the good consistency of the method. The outline of the paper is the following. Section 2 briefly introduces the model derivation. Section 3 details the proposed combined finite volume-finite element scheme. The maximum principle is established (Theorem 3.1), some variants of the original scheme are proposed (subsection 3.3), and the link between the finite volume fluxes and the finite element velocity field is carefully explained (subsection 3.4). Finally, section 4 proposes several numerical tests to illustrate the obtained theoretical results, and to investigate the behavior of the scheme on a physical benchmark corresponding to the convection of the temperature in a cavity. Model derivation As already mentioned in the introduction, a low-Mach model is obtained by inserting the asymptotic expansions of the variables with respect to the Mach number M in the compressible Navier-Stokes equations (see for example [2,20,34]). One of the characteristics of the process is to consider the asymptotic expansion of the pressure π with respect to M. Denoting by x ∈ R^d the space variable and by t ∈ R+* the time variable, we write π(x, t) = P(t) + M² q(x, t), where P is called the thermodynamic pressure and q the dynamic pressure. Here, we assume that P(t) = P0 > 0 is constant for all t ≥ 0. The other variables considered here are the velocity u(x, t), the density ρ(x, t) and the temperature θ(x, t). Let Ω ⊂ R^d be an open polygonal domain with boundary Γ and T a positive real. The continuity, momentum, temperature and state equations in QT = Ω × [0; T] for a calorically perfect gas are given by the system (2.1a)-(2.1d), where µ is the viscosity of the flow, κ is the heat conductivity, which is assumed constant, R is the gas law constant and γ is the gas specific heat ratio. Here, Du = (∇u + ∇^T u)/2 denotes the deformation tensor, I the identity matrix and g(x, t) the gravity field. In order to reduce the study to a system whose velocity is solenoidal, we introduce a modified, divergence-free velocity v. In addition, the density is eliminated from the equations thanks to the state equation (2.1d).
Following the idea introduced in [5], where a particular relation between the density and the viscosity in the combustion model was introduced, we assume moreover a specific form for µ(θ), so that µ(θ) is strictly positive if and only if θ ∈ (0; 1). If we denote by p the modified pressure, we consequently obtain the system (2.2) (see [8] for all details). The system (2.2) needs to be completed with suitable initial and boundary conditions. We set Γ = Γ_D ∪ Γ_N, with Γ_D ∩ Γ_N = ∅. The initial conditions for the system (2.2) prescribe the velocity and the temperature at t = 0, and the boundary conditions on the temperature and the velocity are imposed on Γ_D and Γ_N. The local existence of a regular solution to the problem (2.2) has been shown in [8] in the case of dimension d = 3, as well as the maximum principle for the temperature. Furthermore, the existence of a unique global strong solution can be proved in the case of dimension d = 2, following the proof of Theorem 1.5 in [30] and assuming that, for any fixed θ̄ > 0, there exists a small constant δ(θ̄) > 0 such that the smallness condition (2.3) holds. In this paper, we will prove that there exists at least one solution to the discretized temperature equation, and that this solution verifies the maximum principle. The questions of uniqueness and convergence of the numerical scheme will be addressed in a future work. 3 A combined finite volume-finite element scheme 3.1 Spatial and temporal discretizations The time splitting The numerical scheme is based on a time splitting, allowing one to solve the temperature equation with finite volumes, whereas the velocity equation is solved with finite elements, using the same strategy as in previous works [7,9]. Let ∆t be the time step and t^n = n ∆t. Functions approximated at time t^n will be identified with the superscript n. Assuming that θ^n and v^n are known: 1. We first compute the new temperature θ^{n+1} by solving the temperature equation (2.2a) using an implicit Euler scheme (3.1). 2. We then compute the new velocity and pressure by solving the momentum equation associated with the divergence-free constraint (3.2). Here, and as usual, the nonlinear term in the momentum equation (3.2) is linearized. Description of the mesh and notations From now on, we fix d = 2, but our study can be easily extended to the three-dimensional case. The discretization in space is based on a triangulation T of the domain Ω ⊂ R², whose elements are called control volumes, a family E of edges and a set P = (x_K)_{K∈T} of points of Ω defining an admissible mesh in the sense of Definition 3.1 in [25]. We recall that the admissibility of T implies that the straight line between two neighboring cell centers x_K and x_L is orthogonal to the edge σ ∈ E such that σ = K ∩ L (denoted σ = K|L) at a point x_σ (see Figure 1). This condition is satisfied if all the angles of the triangles are less than π/2. The set of interior (resp. boundary) edges is denoted by E_int = {σ ∈ E ; σ ⊄ Γ} (resp. E_ext = {σ ∈ E ; σ ⊂ Γ}). Among the outer edges, we distinguish those included in Γ_D from those included in Γ_N. The measure of K ∈ T is denoted by m_K and the length of σ by m_σ. For σ ∈ E_int such that σ = K|L, d_σ denotes the distance between x_K and x_L, and d_{K,σ} the distance between x_K and σ. For a boundary edge σ of a cell K, we denote by d_σ the distance between x_K and σ. For σ ∈ E, the transmissibility coefficient is given by τ_σ = m_σ/d_σ. Finally, for σ ∈ E_K, we denote by n_{K,σ} the exterior unit normal vector to σ. The size of the mesh is given by h = max_{K∈T} diam(K).
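The two edge-based geometric quantities that the finite volume scheme relies on, the transmissibility τ_σ = m_σ/d_σ and an orientation sign for each cell-edge pair, can be computed as in the Python sketch below. This is only an illustration of the notation introduced above (with the orientation sign defined through the outward normal), not the authors' implementation.

import numpy as np

def transmissibility(x_K, x_L, m_sigma):
    """Two-point transmissibility tau_sigma = m_sigma / d_sigma for an
    interior edge sigma = K|L of an admissible mesh, where x_K and x_L
    are the cell centres and m_sigma is the edge length."""
    d_sigma = np.linalg.norm(np.asarray(x_L, float) - np.asarray(x_K, float))
    return m_sigma / d_sigma

def orientation_sign(n_K_sigma, n_sigma):
    """+1 if the outward normal of K on sigma coincides with the fixed
    edge normal n_sigma, -1 otherwise."""
    return 1.0 if np.dot(n_K_sigma, n_sigma) > 0.0 else -1.0

if __name__ == "__main__":
    # hypothetical edge of length 0.05 shared by two cells
    print(transmissibility((0.10, 0.20), (0.14, 0.23), m_sigma=0.05))
    print(orientation_sign((0.0, 1.0), (0.0, -1.0)))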
Spatial discretization The piecewise constant temperature θ_h is computed with a cell-centered finite volume method described in section 3.2, so that θ_h ∈ X(T), the space of functions that are constant on each control volume K ∈ T. The velocity v_h is discretized with P2-Lagrange finite elements, and the pressure p_h with P1-Lagrange finite elements, so that they satisfy the usual LBB stability condition. The degrees of freedom of each variable are shown in Figure 1. The computation of the velocity and pressure by finite elements is usual, and we refer to our previous work for details [7,9]. One of the original points of the present work is the computation of the temperature by finite volumes. Indeed, we aim to develop a numerical scheme that ensures the discrete maximum principle property; see section 3.2. Another point of interest is the link between the two numerical methods; see section 3.4. Indeed, from the velocity field that has been computed by finite elements, we have to determine fluxes through the interfaces of the control volumes, which will be used for the computation of the temperature. The finite volume scheme Let v be a given velocity field and θ^n the previous approximate temperature; we aim at solving the temperature equation (3.3). Since we want to ensure maximum principle properties, we favor a finite volume method. The main difficulty comes from the term |∇θ^{n+1}|², called the Joule term in the context of electrical conductivity; see for example A. Bradji and R. Herbin [3] or the works of C. Chainais and her collaborators [13,14]. Indeed, the definition of a discrete gradient is not straightforward, since the finite volume solution is piecewise constant. We can thus refer to the work of R. Eymard and his collaborators [24,26] for some definitions of discrete gradients on an admissible mesh. Alternatively, K. Domelevo and P. Omnes [21], and C. Chainais [13], used a discrete gradient reconstruction following the idea of the paper of Y. Coudière, J.-P. Vila and P. Villedieu [18]. The principle of these schemes, valid on very general meshes, consists in the double resolution of the equations, on primal and dual meshes. Moreover, in [22], J. Droniou and R. Eymard propose a scheme whose unknowns are the function, its gradient and the fluxes. Therefore, the definition of a discrete gradient per cell is intrinsic to the scheme. The respect of the bounds, and in particular of the lower bound, is another difficulty related to the Joule term. Indeed, if we consider "close" models, like the porous media equation, see for example the work of C. Cancès and C. Guichard [12], or the convection-diffusion equation involved in Khazhikhov-Smagulov-type models, see for example C. Calgaro, M. Ezzoug and E. Zahrouni [11], the maximum principle is obtained quite easily for first-order schemes. Nevertheless, adding the positive term |∇θ^{n+1}|² in the temperature equation prevents the scheme from directly obtaining the lower bound. Consequently, we will have to pay particular attention to the discretization of this term. We are looking for θ_h^{n+1} = (θ_K^{n+1})_{K∈T} ∈ X(T), an approximate solution of θ(t^{n+1}, ·).
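Before detailing the finite volume discretization, the organization of one time step of the splitting of section 3.1 can be sketched as follows in Python. All function names are hypothetical placeholders; the sketch only illustrates the order of the three stages (implicit finite volume temperature step, finite element velocity-pressure step, and the flux correction discussed in subsection 3.4), not the authors' code.

def advance_one_step(theta, edge_fluxes, dt,
                     fv_temperature_step, fe_velocity_step, project_fluxes):
    """One step of the time splitting (sketch with placeholder callables)."""
    # 1. implicit finite volume step for the temperature; the nonlinear
    #    Joule term is handled by a few Newton iterations inside this call
    theta_new = fv_temperature_step(theta, edge_fluxes, dt)
    # 2. finite element step for the solenoidal velocity and the pressure
    v_h, p_h = fe_velocity_step(theta_new, dt)
    # 3. least-squares correction of the edge fluxes so that they are
    #    divergence-free on every control volume (subsection 3.4)
    edge_fluxes_new = project_fluxes(v_h)
    return theta_new, edge_fluxes_new, v_h, p_h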
The finite volume scheme is classically obtained by integrating equation (3.3) on a control volume K, that is: Here, θ n+1 σ,+ is defined for σ ∈ E K by: K,σ is an approximation of the exact flux of the diffusive term through the edge σ and is given by where we define θ n+1 σ , an approximation of θ n+1 at x σ , by: Concerning the Joule term, J K (θ n+1 h ) is an approximation of 1 m K K |∇θ(t n+1 , x)| 2 dx. We notice that |∇θ| 2 = ∇ · (θ ∇θ) − θ ∆θ, and we propose the following definition: where θ n+1 σ is another approximation of θ n+1 at x σ (this choice will be justified later) defined this time by: Finally we obtain: The rest of the section is devoted to the proof of the following result: 11) and: Then the scheme (3.4) admits at least one solution that satisfies: Proof. We start by studying the following intermediate scheme: with the modified numerical fluxF We underline that the definition (3.16) ensures thatθ n+1 σ ≥ 0. With this choice in the scheme (3.14), the diffusion term has been a little "inflated" in order to compensate the Joule term and to increase the stability, as it will be seen later. Note moreover that if θ n+1 h is solution of (3.14) and satisfies (3.13), it is also solution of (3.4). The schedule of proof is as follows. We first show in Lemma 3.2 that a solution of (3.14) satisfies the discrete maximum principle. Then, by an argument of topological degree, we prove the existence of a solution to (3.14) (see Lemma 3.3). As this solution satisfies the maximum principle, it is also a solution of (3.4) and this concludes the proof of Theorem 3.1. Proof. For n = 0, the property (3.13) clearly holds because of (3.11). Suppose now that (3.13) holds at step n and assume that there exists K M ∈ T such that: We write (3.14) for the control volume K M and obtain: The right hand side of (3.17) is negative, whereas the left hand side is non-negative (indeed, the treatment of the first term is classical, see for instance [25], and the sign of other terms is obvious). We thus obtain a contradiction. Assume now that there exists K m ∈ T such that: We write (3.14) on K m : The right hand side of (3.19) is positive. Looking now at the left hand side, we notice that the first (see [25]) and third terms are non positive, whereas the second is non negative. Thus, to obtain a contradiction, we must reach a balance between the Joule term and the diffusive one. By noticing that: a sufficient condition to obtain the contradiction is to show that for all σ ∈ E K , θ n+1 K m − θ n+1 K m ,σ +θ n+1 σ ≥ 0. Indeed it holds, because of (3.18) and Definition (3.16) ofθ n+1 σ , we have: Lemma 3.3. We assume that (3.11) and (3.12) are satisfied. Then the scheme (3.14) admits at least one solution. Then, H ∈ C([0, 1] × K, R #T ) and thanks to (3.24), admits no solution on ∂K. Consequently, its topological degree is independent of µ. As (3.21) admits a solution for µ = 0 its topological degree is different from zero. We can now conclude that (3.21) has a solution for µ = 1, and therefore (3.14) has at least one solution. Lemma 3.2 and Lemma 3.3 conclude the proof of Theorem 3.1 since as already mentioned, if θ n+1 h is solution of (3.14) and satisfies (3.13), it is also solution of (3.4). Variants of the scheme In this subsection, several variants of the scheme (3.4), denoted in the following (SD max J cen ), are presented. 
The scheme (SD moy J up ) (subsection 3.3.1) will also fulfill the discrete maximum principle without any condition, whereas the scheme (SD moy J cen ) (subsection 3.3.2) as well as (SD max J EGH ) and (SD moy J EGH ) (subsection 3.3.3) will fulfill the discrete maximum principle under some restrictions on the temperature bounds m and M. (SD moy J up ) We propose here a first variant of (SD max J cen ), which also leads to an unconditionally maximum principle. The idea is to consider the scheme (3.4), but with two differences compared to (SD max J cen ). On the one hand, the diffusion term is centered, by defining now θ n+1 σ used in the definition of the diffusive flux F n+1 K,σ in (3.7) by: instead of (3.8). On the other hand, an upwind in the Joule term gives us instead of (3.10), where we use the notation a + = max(0, a). In that case, the maximum principle occurs. Indeed, the proof of Theorem 3.1 can easily be adapted by noticing that (θ n+1 Let us note that the combination of (3.8) for the diffusion flux definition associated to (3.26) for the Joule term flux definition would also lead to a scheme with an unconditional maximum principle. Nevertheless, this choice would generate a loss of accuracy compared with (SD max J cen ) and (SD moy J up ) which are both already unconditionally maximumprinciple preserving. Since it is useless to upwind both the diffusion term and the Joule one, we did not considered it. Remark 3.4. Note that we could also have considered the following definition in the diffusion term: (3.27) From the theoretical and numerical points of view, the cases (3.25) and (3.27) give similar results. (SD moy J cen ) A quite natural question, in order to increase the accuracy of the approximation, is to investigate the behaviour of the scheme when both flux are centered. Namely, it would consist in considering (3.25) for the definition of the diffusive flux F n+1 K,σ in (3.7), while using (3.10) for the Joule term flux definition. In that case, the upper bound can be obtained in the same way as in Lemma 3.2. Nevertheless, the obtention of the lower bound needs an additional assumption, so that the maximum principle can be ensured by a specific balance between the Joule term and the diffusion one. The definition ofθ n+1 σ in (3.16) has to be a little modified and given by : so that the positivity of θ n+1 K m − θ n+1 K m ,σ +θ n+1 σ (see (3.20)) is ensured provided that M ≤ 3m. As it will be illustrated in the numerical test 4.1 below, such a condition is necessary. This can seem strange from the physical point of view. As we see in the proof, it is necessary for technical reasons, but could also be linked to the fact that the global solution in time of the continuous model is ensured if the initial datum is not too far from a constant state (see (2.3)). Anyway, even with this restriction in mind, (SD moy J cen ) could be used for cases where temperature variations are low, to expect a better accuracy of the solution compared to the one obtained by (SD max J cen ) or (SD moy J up ). (SD max J EGH ) and (SD moy J EGH ) Finally, two other schemes are investigated, considering another way to define the piecewise discrete gradient in each control volume given by (see [26]) : while considering either (3.8) or (3.25) for the computation of θ n+1 σ arising in the definition of F n+1 K,σ given by (3.7), and leading respectively to the schemes denoted (SD max J EGH ) and (SD moy J EGH ). 
Once again and in both cases, the maximum principle is ensured under a condition M ≤ C(T ) m with C(T ) ∈ (1, 2), depending on geometrical characteristics of the mesh (see [17]). In Section 4, we implement these different schemes and compare them on two main criteria: verification of the maximum principle and convergence rates. Table 1 summarizes the five considered schemes. Coupling between finite elements and finite volumes The resolution by a finite element method of (3.2) gives us the velocity field v n+1 h which is P 2 on each triangle K ∈ T . Let us denote by (v n+1 σ ) σ∈E the value of this velocity field at the center of edges. Thus, the local divergence constraint reads: The sequence of reals (α K ) K∈T is different from zero in general. Consequently, the velocity field (v n+1 σ ) σ∈E is not divergence-free in the Finite Volume sense, and can not be used for the resolution of the temperature equation. Here we can not follow the idea of [9], adapted in [7] for the projection method, which consists in defining a constant velocity per triangle. Indeed, the unknowns for the temperature are located at the center of the cells, and no more at the vertices of the mesh. For σ ∈ E int , we define arbitrarly K + σ and K − σ the two triangles such that σ = K + σ |K − σ . For σ ∈ E ext , we denote by K + σ the triangle such that σ ⊂ ∂K + σ . Let n σ the unit normal vector to σ getting out of K + σ , see Figure 2. We also define ∀K ∈ T and ∀σ ∈ E K : By denoting ( f σ ) ∈ R #E the global numerical flux defined by: equation (3.29) can be written as: We now approximate ( f σ ) σ∈E by (f σ ) σ∈E in order to obtain the local divergence-free constraint Then the fluxes (f σ ) σ∈E are used in the cell-centered Finite Volume scheme (3.4) for the computation of the temperature field, by setting: v n+1 K,σ = ε K,σfσ . Practically, (f σ ) σ∈E is computed as an approximation of ( f σ ) σ∈E in the discrete least-mean-squares sense, which fulfills the divergence-free constraint on each finite volume control. It consists in solving the global optimization problem given by: where (w σ ) σ∈E is a sequence of strictly positive weights, which can be defined for example by w σ = 1 or w σ = m σ d σ , ∀σ ∈ E. Numerically, we observed similar behaviors for both possibilities. The solution of the problem (3.32) is given by the following theorem: and A = (α K ) K∈T ∈ R #T . Let Λ = (λ K ) K∈T ∈ R #T be the solution of: Thus the solution of (3.32) can be defined ∀σ ∈ E by: 34) where for all σ ∈ E ext such that σ ⊂ ∂K, we set λ K − σ = 0. Proof. With the following change of variables: problem (3.32) writes: We define the lagrangian associated with (3.36) for (h σ ) ∈ R #E and (µ K ) ∈ R #T by: We will show the existence of a saddle point of L. Its first argument will therefore be the solution of the problem (3.36), while the second one corresponds to the Lagrange multiplier associated with the constraints, see for example [16]. We recall that if ((g σ ), (λ K )) is a saddle point of L, then: We start with computing H ((µ K )) = inf (h σ ) L ((h σ ), (µ K )). We easily verify that (g σ ) (which depends on (µ K )) defined by: is solution of ∂L ∂h σ ((g σ ), (µ K )) = 0. Numerical simulations 4.1 Verification of the maximum principle We previously proved that the schemes (SD max J cen ) and (SD moy J up ) satisfy the maximum principle, without any restriction on m and M. This first experiment illustrates that if M is too large compared to m, the other schemes do not respect the maximum principle. 
In this perspective, we consider only the temperature equation (2.2a) and set: The initial temperature is defined on Ω = [0; 1] 2 by: On all the boundaries, we impose Dirichlet conditions, see Figure 3. We denote by Γ H = {1} × (0, 3; 0, 7) and set ∀t ∈ [0; T ]: Table 2. We find numerically the results shown previously. On the one hand, the schemes M = Scheme (SD moy J EGH ) (SD max J EGH ) (SD moy J cen ) (SD max J cen ) (SD moy J up ) (SD max J cen ) and (SD moy J up ) allow us to obtain the maximum principle whatever the value of M. On the other hand, the schemes (SD moy J cen ), (SD moy J EGH ) and (SD max J EGH ) do not give a solution that satisfies the maximum principle if M is too large. Analytical benchmark In this benchmark, we want to investigate the accuracy of the scheme described in section 3.2, depending on the discretization of the diffusive and Joule terms. We consider the domain Ω = [0; 1] 2 . The simulations will be performed until the time T = 0.1. The exact solution is defined for (x, y, t) ∈ Ω × [0; T ] by: θ ex (x, y, t) = 1 + t 10 λ π 2 (1, 5 + cos πx) (1, 5 + cos πy) , with λ = 2. In (3.3), a source term is added accordingly. For the velocity, two cases are considered: sin(π y) 1, 5 + cos(π y) sin(π x) 1, 5 + cos(π x) (4.1) Here, Dirichlet conditions are imposed on all the boundaries, so that Γ D = Γ. The temperature error in L ∞ (0, T ; L 2 (Ω)) is plotted in Figure 4 as a function of the mesh size h, in log-log scale, for each scheme. We notice that without the convective term (case a)), the centered schemes (SD moy J cen ) and (SD moy J EGH ) converge at order two, whereas the others are only at order one. Thus, the upwind choice in the diffusion term (schemes (SD max J cen ) and (SD max J EGH )) or in the Joule term (scheme (SD moy J up ) degrades the rate of convergence to order one, but is necessary to obtain the maximum principle. Adding the convective term (case b)), the centered schemes (SD moy J cen ) and (SD moy J EGH ) converge at order 3/2. This decrease of convergence rate can be explained by the upwind treatment of the convective term in order to ensure the stability. The natural convection in a cavity We will now validate the scheme (SD max J cen ) by coupling the temperature equation with the velocity one on the benchmark introduced in [1,15,29,31] and used in [7] (slightly adapted because of the choice of the model (2.2)). We consider a square cavity Ω = [0, 1] 2 containing a calorifically perfect gas, see Figure 5. The gas is initially at rest with uniform temperature: u 0 = 0 and θ 0 = 0.5. Note that the temperature has been scaled to be between 0 and 1. A temperature of θ h = θ 0 (1 + ε) (respectively θ c = θ 0 (1 − ε)) is imposed on the left (respectively right) wall with ε = 0.01 as in [29]. For this small temperature amplitude, the thermodynamic pressure can be approximated by a constant, where the author verifies numerically that P(T ) = P 0 with an accuracy of 10 −5 ). The horizontal walls are insulated. Denoting by Γ N = [0, 1] × {0, 1}, we thus have: On all walls, the no-slip condition is imposed for the physical velocity u, which gives for the solenoidal one: v(x, t) = 0, ∀x ∈ Γ N , ∀t ∈ [0; T ], v(x, t) = λ∇θ(x, t), ∀x ∈ Γ D , ∀t ∈ [0; T ]. The time iterations are performed until the numerical steady state is reached, i.e. the relative errors for the solenoidal velocity and the temperature are less than 10 −10 . 
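A minimal sketch of this stopping test, assuming the discrete unknowns are stored as flat NumPy arrays (the names, layout, and sample values are illustrative assumptions), could read:

```python
import numpy as np

def reached_steady_state(v_old, v_new, theta_old, theta_new, tol=1e-10):
    """Relative-change criterion used to stop the time loop.

    All arguments are flattened arrays of the discrete unknowns at two
    consecutive time steps; True is returned when both relative errors
    drop below `tol` (the 1e-10 threshold quoted in the text).
    """
    def rel_err(old, new):
        denom = np.linalg.norm(new)
        diff = np.linalg.norm(new - old)
        return diff / denom if denom > 0 else diff

    return rel_err(v_old, v_new) < tol and rel_err(theta_old, theta_new) < tol

# Illustrative call with fabricated arrays:
v0, v1 = np.ones(10), np.ones(10) * (1 + 1e-12)
t0 = np.full(10, 0.5)
print(reached_steady_state(v0, v1, t0, t0.copy()))  # True for this toy case
```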
The steady state is reached at approximately T = 100, using for instance a mesh T composed of 3584 triangles (corresponding to h = 1.8 · 10^{-2}) and a time step ∆t of the order of 10^{-2}. Figure 6 shows the streamlines of the velocity field and the contour lines of the temperature at steady state. The solutions obtained are close to those given in Figures 4 and 5 of [29]. We also verify that the maximum principle for the temperature is always respected. Note that even though a global optimization problem must be solved to obtain the fluxes used in the finite volume scheme, its cost is negligible: the matrix of the linear system (3.33) is assembled a single time before the time loop, whereas some matrices in the finite element step must be assembled at each time step. The cost of Newton's iterations for the temperature equation is also negligible, since only two or three iterations are needed to obtain the solution with an accuracy of 10^{-5}. With this choice, only about 10% of the overall computation time is devoted to the resolution of the temperature equation, and this ratio remains roughly the same on coarser or finer meshes.
Conclusion
In this work we propose several finite volume schemes for the resolution of an unsteady convection-diffusion equation involving a Joule term. Several variants are investigated, depending on the discretization of the diffusion term and of the Joule term. A first main point is that two of these schemes verify the discrete maximum principle without any restriction on the data. Such schemes allow us to define a combined finite volume - finite element scheme for a low-Mach system, where the temperature is computed by the finite volume scheme whereas the velocity and pressure are approximated by a finite element method. The second point is that the finite volume velocity field, used for the convective term in the temperature equation, is computed from the finite element one by solving a discrete least-mean-squares problem that enforces the local divergence-free constraint. The numerical results illustrate the properties of the different schemes and confirm the relevance of the approach. The question of convergence remains open and will be addressed in future work.
2019-11-14T17:09:57.167Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "159a17e2bed962ca871833251d3ed8fe1c1079de", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3934/math.2020021", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d31c5b26b03393774183211043c6eb9591f72603", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
270795073
pes2o/s2orc
v3-fos-license
Reference-free structural variant detection in microbiomes via long-read co-assembly graphs Abstract Motivation: The study of bacterial genome dynamics is vital for understanding the mechanisms underlying microbial adaptation, growth, and their impact on host phenotype. Structural variants (SVs), genomic alterations of 50 base pairs or more, play a pivotal role in driving evolutionary processes and maintaining genomic heterogeneity within bacterial populations. While SV detection in isolate genomes is relatively straightforward, metagenomes present broader challenges due to the absence of clear reference genomes and the presence of mixed strains. In response, our proposed method rhea, forgoes reference genomes and metagenome-assembled genomes (MAGs) by encompassing all metagenomic samples in a series (time or other metric) into a single co-assembly graph. The log fold change in graph coverage between successive samples is then calculated to call SVs that are thriving or declining. Results: We show rhea to outperform existing methods for SV and horizontal gene transfer (HGT) detection in two simulated mock metagenomes, particularly as the simulated reads diverge from reference genomes and an increase in strain diversity is incorporated. We additionally demonstrate use cases for rhea on series metagenomic data of environmental and fermented food microbiomes to detect specific sequence alterations between successive time and temperature samples, suggesting host advantage. Our approach leverages previous work in assembly graph structural and coverage patterns to provide versatility in studying SVs across diverse and poorly characterized microbial communities for more comprehensive insights into microbial gene flux. Availability and implementation: rhea is open source and available at: https://github.com/treangenlab/rhea. Introduction Structural variants (SVs), loosely defined as genomic alterations that are 50 base pairs (bps) or longer (Mahmoud et al. 2019), play an important role in driving both evolutionary adaptation and heterogeneity in bacterial genomes (Rocha 2018).Bacterial genome dynamics not only influence the ability for the bacteria to grow and adapt to changing environments (Rocha 2004) but can also impact the function of the microbial community as a whole and the phenotype of the host (Durrant and Bhatt 2019).In isolate genomics, the goal of SV detection is relatively straightforward: detect long genomic differences between a sequence and reference genome that can be classified as an insertion, deletion, inversion, duplication, translocation, or any combination of the prior (West et al. 2022).However, in metagenomics, when reference genomes may not be well-defined and a mixed population of similar strains may exist in the community, detection of SVs becomes more complex (West et al. 2022). SV detection methods can be broadly categorized into three groups: mapping-driven, assembly-driven, and patterndriven.In mapping-driven approaches, reads are directly aligned to established reference genomes or pangenome of sequences, then, mapping patterns signifying inconsistent coverage identify SVs.In assembly-driven approaches, reads are first assembled into longer sequences (contigs), then aligned to another contig or reference to detect long scale differences.In pattern-driven approaches, SV patterns are pre-defined then searched for in sequencing reads.Zeevi et al. 
developed a mapping-driven SV detection approach for metagenomic short reads to survey SVs associated with host disease risk factors in the human gut microbiome (Zeevi et al. 2019).The authors built a comprehensive database specifically for known microbes in the human gut microbiome and developed an "iterative coverage-based read assignment" (ICRA) algorithm to repeatedly adjust read assignments and establish alignments.Their SGV-Finder algorithm then scans the coverage of each reference genome for presence of regions with unexpectedly low (deletions) or high (duplications) coverage.While this method has been effective as a comprehensive search for SVs in the human gut microbiome correlating to expressed phenotypes (Liu et al. 2023), relying on a confident database of reference genomes is challenging for communities that have not been extensively characterized.This pipeline is additionally restricted to only deletions and duplications relative to reference genomes in the supplied database. To expand upon the types of SVs detected and leverage the advantages of long read technologies, MetaSVs, an assemblydriven approach, was designed (Li et al. 2023).In this pipeline, long and short reads combined help to confidently create and classify metagenome-assembled genomes (MAGs).Each MAG is then evaluated independently through whole-genome alignment to a reference MAG or genome with the SV detection tool MUM & Co (O'Donnell and Fischer 2020).Chen et al. utilized MetaSVs to expand characterized SVs in the human gut (notably insertions and inversions) and demonstrates the value in incorporating long reads for SV detection (Chen et al. 2022).However, this assemblydriven method is still highly dependent on a reference database, as it is the taxonomic reference-driven classifications that determine which MAGs are compared to which references.Additionally, unique MAGs are often not created for subtle SV differences (Kerkvliet et al. 2024), especially in communities containing similar strains (Ghurye et al. 2016). MetaCHIP is another MAG-based approach for the slightly different goal of detecting recent horizontal gene transfer (HGT) events (Song et al. 2019).In an HGT event, genetic material is exchanged between organisms (Ochman et al. 2000), resulting in an insertion SV for the recipient.MetaCHIP effectively evaluates each MAG in the community for a gene sequence that has more BLASTN (Altschul et al. 1990) hits to genes in a different MAG than its own.This algorithm, however, can only detect inserted genes that are highly similar to another MAG, which resulted in simulation results declining at 25% mutation rate between donor and recipient. To entirely avoid reference genomes and MAGs, two pattern-driven methods have been developed.PhaseFinder (Jiang et al. 2019) was created for detection of inversions in bacterial genomes from genomic or metagenomic data, by detecting regions flanked by inverted repeats where sequencing reads support both orientations.DIVE (Abante et al. 2023) was developed to identify sequences surrounding genetic diversification such as transposable elements, within mobile genetic element (MGE) variability hotspots, or CRISPR repeats, by detecting repeated k-mers with diverse flanking sequences to define MGE bounding sequences and transposon arms.Both these methods show how detection of specific patterns directly from reads can be used to eliminate reference genomes and MAGs. 
Rhea takes a different approach to detect SV patterns within a microbial community.It constructs a co-assembly graph from all metagenomes in a series that are expected to have similar communities (i.e.longitudinal time series or cross-sectional studies where a significant portion of the strains are shared across samples) (Quince et al. 2021).Regions of the graph indicative of SVs are then highlighted, as previously explored for characterization of genome variants (Iqbal et al. 2012, Nijkamp et al. 2013, Narzisi et al. 2018, Ghurye et al. 2019).The log fold change in graph coverage between consecutive steps in the series is then used to reduce false SV calls made from assembly error, account for shifting levels of microbe relative abundance, and ultimately permit SV detection in understudied and complex microbiomes. Rhea method Rhea takes as input a series of long-read metagenomic sequences, expected to be taken from the same source at different time points or some other step-wise metadata separation.A single metagenome assembly graph is constructed by combining all provided samples, then each sample is separately aligned back to the graph.Change in graph coverage between consecutive pairs of samples and the graph structure are used to call SVs (Fig. 1).If desired, quality filtering or read removal should be completed prior to rhea's graph construction. SV definitions Four types of SVs are detected in rhea: insertions, deletions, tandem duplications (West et al. 2022), and complex indels (Roerink et al. 2014, Ye et al. 2016).An insertion here is a sequence that has been integrated in increasing abundance between successive steps in the sequential series.A deletion is the opposite, a subsequence whose abundance is declining.A tandem duplication is a gene sequence that has been repeated, directly one after another, in increasing presence.A complex indel as a sequence that has drastically changed between successive steps, showing the signature of a deletion and insertion at the same location.In this pipeline, SV detection equates to an increase in abundance of the SV, rather than simply a novel appearance, therefore, suggesting an advantage for the host microbe or community. Graph construction and coverage calculations A single co-assembly repeat graph for the series with N samples is constructed by combining all reads from all samples into one metaFlye run (Kolmogorov et al. 2020), withkeep-haplotypes parameter set to true to maintain strain variations.After the graph is constructed, each sample is separately aligned back to the graph with minigraph (Li et al. 2020), where the majority of the reads are expected to align to the graph since all reads were included in graph construction.An undirected graph is then built mimicking the structure of the metaFlye assembly graph where a single node is drawn for each complementary pair, as seen in the assembly graph visualization software Bandage "single" option (Wick et al. 2015).This graph is defined as G ¼ ðV; EÞ with a set of k nodes V ¼ fv 1 ; v 2 ::; v k g and a set of edges E. 
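The coverage normalization, log fold change vectors, and triangle outlier rule described in this and the following paragraphs can be prototyped compactly. The sketch below is only an illustration of the bookkeeping, not rhea's actual implementation: it assumes per-sample node coverages have already been extracted from the minigraph alignments, uses hypothetical toy numbers, and relies on a recent NetworkX release in which simple_cycles accepts undirected graphs and a length_bound argument.

```python
import math
import statistics
import networkx as nx

def normalize_coverage(node_cov_per_sample, sample_bp):
    """Floor node coverages at 1, then scale sample n by bp_n / m,
    with m the median of total base pairs across samples."""
    m = statistics.median(sample_bp)
    out = []
    for cov, bp in zip(node_cov_per_sample, sample_bp):
        mult = bp / m
        out.append({node: max(c, 1.0) * mult for node, c in cov.items()})
    return out

def log_fold_change(cov_prev, cov_next):
    """log(vc_{i,t_n} / vc_{i,t_{n-1}}) for nodes shared by both samples
    (natural log; the base is not specified in the text)."""
    return {n: math.log(cov_next[n] / cov_prev[n]) for n in cov_prev if n in cov_next}

def classify_triangle(lfc, triangle):
    """Outlier rule on a 3-cycle: a node more than one standard deviation
    from the median is an outlier; exactly one outlier => INS or DEL.
    (The exact standard-deviation convention is an assumption here.)"""
    vals = [lfc[n] for n in triangle]
    med, sd = statistics.median(vals), statistics.pstdev(vals)
    outliers = [n for n in triangle if abs(lfc[n] - med) > sd]
    if len(outliers) == 1:
        node = outliers[0]
        return ("INS" if lfc[node] > med else "DEL"), node
    return None

# Toy example: node "c" gains coverage between two equally sized samples.
g = nx.Graph([("a", "b"), ("b", "c"), ("a", "c")])
cov = normalize_coverage(
    [{"a": 10, "b": 11, "c": 2}, {"a": 10, "b": 10, "c": 30}],
    sample_bp=[1_000_000, 1_000_000])
lfc = log_fold_change(cov[0], cov[1])
for tri in nx.simple_cycles(g, length_bound=3):  # needs a recent NetworkX
    if len(tri) == 3:
        print(classify_triangle(lfc, tri))       # expected: ('INS', 'c')
```

The same median-and-standard-deviation rule, applied to 4-cycles instead of triangles, gives the complex indel calls described below.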
Each edge e_(i,j) is then given a weight equal to the number of edges that appear between nodes i and j in the metaFlye assembly graph, provided at least one edge exists between i and j in the assembly graph. Each edge e_(i,j) thus denotes the existence of overlapping reads that extend directly from v_i to v_j (or from v_j to v_i) without gaps, in either direction (forward or reverse) for the sequences in i and j. Minigraph alignments are then used to calculate node and edge coverage for each step in the series. Node coverage is calculated as the average coverage per base pair within the node, obtained by summing the coverage of each base pair and dividing by the total number of base pairs in the node. To account for error, all nodes with coverage less than 1 are set to a coverage of 1. Node coverage is then normalized for the entire series by first calculating the median total base pairs m across samples in the series, then establishing a multiplier for each sample n = 0...N as bp_n / m, where bp_n is the number of base pairs in sample n. This multiplier is applied to all node coverages for each n = 0...N. Edge coverage for each edge e_(i,j) at each step n in the series is counted as the number of occurrences in which a read path passes directly from i to j or from j to i in the read-graph alignment for step n. Each node in our undirected assembly graph then holds a vector of log fold changes in coverage between successive steps in the series, calculated for each node i as log(vc_(i,t_n) / vc_(i,t_(n-1))), where vc_(i,t_n) is the coverage of node i at step n in the series, for all steps n = 1...N. A log fold change vector is also assigned to each edge (i, j), defined as log(ec_((i,j),t_n) / ec_((i,j),t_(n-1))), where ec_((i,j),t_n) is the coverage of edge e_(i,j) at step n in the series, for all steps n = 1...N. The log fold change vectors are then used in the next step to detect SVs and to account for assembly error and changes in genome relative abundance between successive samples.
Detected SV graph patterns
Rhea utilizes the graph structure, edge weights, and the log fold change coverage vectors to call SVs between each pair of consecutive samples in the series. All triangles and squares, cycles of lengths 3 and 4, respectively, are detected in the co-assembly graph using the NetworkX simple_cycles (length_bound = 4) function (Hagberg et al.
2008, Gupta andSuzumura 2021).This function yields complexity Oððc þ nÞðk − 1Þd 4 Þ, where n, e, and c, are the number of nodes, edges, and simple circuits, respectively, and d is the average degree of nodes.For insertions and deletions, each triangle is searched for the pattern of two similar log fold change values and one that is significantly different for each step.This is completed by: calculating the median and standard deviation between the three log fold changes, then, labeling any node with a value that is more than one standard deviation away from the median as an outlier.If the triangle contains exactly one outlier, then an insertion or deletion is called, depending on if the outlier value is lower (deletion) or higher (insertion) than the median.Median is used here rather than mean to provide robustness against extreme outliers.For example, in the case of an extreme outlier due to a deletion from a thriving member in the community, the mean would be skewed and thus could call all three nodes an outlier; whereas the median would take the value of one of the non-deletion nodes, and thus, given the two non-deleted nodes carry a similar value, only the deletion would be an outlier.A similar process is conducted to search for complex indels.Here, each square in the graph is searched for outliers. If the square either has a single outlier or two outliers that do not have an edge between them (opposites in the square) and one is greater than the median while the other is smaller, a complex indel is called.A tandem duplicate can be called under two different scenarios.The first, a self-duplicate, shown by an edge log fold change of any self-loop edge greater than 1 for any successive steps in the series.The second is the situation where the duplicate produces a second node containing a nearly duplicate sequence and loops between two nodes.This is detected by searching all edges with weight w ≥ 2 for a log fold change edge weight greater than 1.If these criteria are met, the node with the greater log fold change coverage between the two is then called a tandem duplication, if it has not been called for another SV at the specified step. Simulated HGT events Rhea was compared to the metagenome HGT detection tool MetaCHIP by simulating long reads from the simulated HGT events completed in the HgtSIM manuscript (Song et al. 2017).For this community, 10 strains within class Alphaproteobacteria and 10 strains within class Betaproteobacteria were selected.1 gene was selected from each Alphaproteobacteria, mutated with rate m, and inserted randomly into each Betaproteobacteria.This resulted in a total of 100 HGT events for the community (Fig. 2a).Three long read metagenomic datasets of 500,000 reads were simulated from these reference genomes with NanoSim (Yang et al. 2017) v3.1.0with default parameters: a pre-transfer community (T0) of the 20 reference genomes in equal abundance, and two separate post-transfer communities with mutation rate m ¼ 0 and m ¼ 30 (T1 m0 , T1 m30 ), which include the 10 original Alphaproteobacteria and the 10 HGTinserted Betaproteobacteria references in varying abundances (Fig. 2a).These varying abundances were established by randomly selecting a relative quantity between 1 and 5 for each of the species as input into the NanoSim abundance text file.MetaCHIP v.1.10.12 was run with GTDB-Tk (Chaumeil et al. 
2022) v2.2.6 with taxonomy release 207 and -r set to class (c).Rhea v1.0 was run with default parameters, metaFlye v2.9.3, and minigraph v0.20.Simulated HGT insertions were mapped against reported HGT sequences for both methods using minimap2 (Li 2016) v2.24 with default parameters; each HGT insertion sequence was marked as detected if the sequence had a hit to a reported HGT insertion. Simulated SVs To evaluate the accuracy of rhea for detection of SV types insertion, deletion, complex indel, and tandem duplication in comparison with a MAG-based workflow, two experiments mimicking the 10 microbes in ZymoBIOMICS Microbial Community Standard D6300 (even distribution) and D6310 Then, a custom script introduced 10 random complex indels of the same length range into each of the variant strains.The custom script randomly selected a location along the genome, then, performed a deletion and a random insertion, each within the prescribed length range.For our even distribution, two long read metagenomic datasets of roughly 500,000 reads were simulated from these reference genomes with NanoSim: a pre-transfer community (T0) of the original references in their provided relative abundances and a post-transfer community (T1), which includes only the variant strain for half of the species and equal abundance of variant and original strains for the other half (Fig. 3a).This was completed again for our log distribution, where only the original references were present in T0 and only the strains containing the added SVs in T1.The expected genome coverage for each species s was calculated for each distribution as ns�avgr lenðsÞ , where n s is the number of read for species s, avg r is the average read length for the entire simulation, and lenðsÞ is the length of the reference genome for species s.For our MAG workflow, reads were assembled with metaFlye (Kolmogorov et .8with the known reference genome length for parameter -g.Simulated SV sequences were mapped against reported SV sequences for both methods using minimap2.Each simulated SV was marked as detected if the sequence had a hit to a reported SV sequence with the correct SV type.Since MUM & Co does not call complex indels, we considered these correct if both the deletion sequence and the insertion sequence were returned. Cheese rind ripening To evaluate rhea on a real microbiome, PacBio HiFi metagenomic reads from cheese rinds throughout ripening were taken from a previous study (Saak et al. 2023).One rhea run for "Cheese C" was completed with the 5 corresponding samples in temporal order and parameter-type set to pacbio-hifi.The selected assembly graph connected component was classified with GTDB-Tk (Chaumeil et al. 2022) "classify-wf" with default parameters, and is referred to as the Halomonas subgraph per this taxonomic classification.Mobile genetic element (MGE) contigs and putative hosts were established in the original publication utilizing Hi-C sequencing technology, overlap read coverage, and the viralAssociatePipeline (Bickhart et al. 2019).To determine which of these contigs showed signatures in our Halomonas subgraph, BLAST (Altschul et al. 
1990) was run for all MGE contigs with a putative host, against the extracted Halomonas subgraph sequences as reference with default parameters.MGE contigs were considered to have their signatures present in the graph if a hit with query coverage >5% was reported.One subsection of the Halomonas subgraph was selected for further investigate as it showed a change in dominating graph path over time.Nodes within this path were characterized with SeqScreen-Nano (Balaji et al. 2023) v4.1 with default parameters and provided SeqScreen databases v21.4. Hot spring microbial mat sequencing Microbial mat plugs were extracted from Mushroom Spring, Yellowstone National Park, USA on July 30, 2009 across a series of temperatures: 50 � C, 55 � C, 60 � C, 65 � C. DNA was quantified using the Qubit 3.0 Fluorometric Quantitation dsDNA High Sensitivity kit (ThermoFisher Scientific, Waltham, MA, USA) and stored for future use at −80 � C. DNA extractions were analyzed using the Genomic DNA ScreenTape Analysis kit on the 4150 TapeStation System (both from Agilent, Santa Clara, CA, USA).Size selection using AMPure XP beads (Beckman Coulter, San Jose, CA, USA) increased DNA fragment length from a mean of 2 kb up to 6 kb with high recovery of DNA.Size selected DNA was prepped for sequencing using the Oxford Nanopore Technologies (ONT) 1D Genomic DNA by Ligation library preparation kit (SQK-LSK109, Oxford Nanopore Technologies, Oxford, UK).Libraries were then sequenced using the ONT MinION sequencer using one FLO-MIN106D R9 Version Rev D flow cell per temperature sample.Sequencing was run on a MacBook Pro (model A1502, Apple) using ONT's MinKNOW software.Automatic basecalling through this software was turned off.Sequencing runs lasted between 24 and 44 hours.Basecalling was completed using the ONT software Guppy (https://github.com/nanoporetech/pyguppyclient.git) with default parameters. Hot spring microbial mat analysis Rhea was run on Oxford Nanopore Technologies (ONT) reads from a hot spring microbial mat for 4 unique temperatures (see above) to asses an environmental microbiome with a high-level of complex microbial interactions (Bhaya et al. 2007, Nelson et al. 2011).Basecalled sequences were listed in order of increasing temperature with the-collapse parameter set to true.MAGs were also curated for reads from the 60 � C sample by metaFlye assembly with-keep-haplo types set to true and contigs binned with MetaBat 2 (Kang et al. 2019).Each read was then aligned back to the set of MAGs with minimap2 with default parameters.Reads with an alignment to a MAG contig of >80% of length were considered to be included in MAGs, mimicking the pipeline of a previous manuscript (Benoit et al. 2024) Simulated HGT insertions Two simulation experiments were conducted with a community of strains within Alphaproteobacteria and Betaproteobacteria classes to evaluate HGT detection accuracy: one with mutation rates m ¼ 0 and the other with m ¼ 30.For the HGT insertions with m ¼ 0, rhea delivered comparable recall to MetaCHIP (0.73 to 0.74) and improved precision (1.0 to 0.77) (Fig. 
2b).The only non-insertion SV that rhea called was a single complex indel, which was due to two insertions sequences in close genomic proximity.Given the two inserted sequences were still detected as sequences of increasing abundance, this was still considered this an accurate call.Although results for MetaCHIP and rhea for m ¼ 0 were relatively similar, a large discrepancy was observed for mutation rate m ¼ 30.Here, the accuracy for rhea stays consistent to that of no mutations (0.76 recall and 1.0 precision), yet MetaCHIP is not able to detect any of the HGT insertions.This caveat is also highlighted in the MetaCHIP manuscript; the inserted sequence is required to be present in another MAG (putative donor) in the community for MetaCHIP to be able to detect the HGT insertion. Additionally, MetaCHIP returned a total of 13 false positive insertions, while rhea did not report any false positives. Simulated structural variants Two simulated experiments were conducted to evaluate rhea in comparison to a MAG-based workflow for a variety of SVs.Each experiment contained two mock time points (T0 and T1), where T0 contains only the references in the ZymoBIOMICS Microbial Community Standard.For our even abundance distribution, T1 contains a mix of original references and simulated variant strains, while T1 contained only the simulated variant strains.For the even distribution, rhea greatly outperformed the MAG workflow in terms of recall (Fig. 3b).While rhea detected 71, 68, 63, and 72 of the simulated insertions, deletions, complex indels, and tandem duplications, respectively, the MAG workflow only identified 19, 23, 0, and 25, respectively.This discrepancy was largely due to the inability to curate independent MAGs for low abundant species and SV distinctions.MAGs were classified for 5 of the 10 species at both T0 and T1, limiting the MAG-based workflow to only attempt to call SVs for these species.Of the five species, two (B.subtilis, S. aureus) were from species where the SV-containing strain dominated in sample T1, while three (E.coli, P. aeruginosa, S. enterica) contained both the original and the SVcontaining strains in T1.Accuracy results between the rhea and MAG pipeline proved comparable for insertions, deletions, and tandem duplicates when only the SV-strain was present in post-transfer sample T1.However, when both the original and SV-strains were present, only one MAG was curated for the species, leaving many of the SV graph nodes unbinned and thus impossible to detect (Fig. 3c).To get a sense of the coverage needed for SV detection in each workflow, recall for each species was reported for our log distribution experiment 3c.Since only one MAG was created for this community, the MAG workflow was only able to detect SVs in the most abundant microbe.While rhea also decreases its detection ability with a decrease in coverage, it was able to detect 30% of SVs in a microbe with only 4x coverage. Of the 125 SVs that were not detected by rhea in the even distribution, roughly 50% were not detected in the assembly graph, roughly 40% were in the graph but resolved into longer nodes rather than partaking in SV graph patterns, and the remaining 10% were called as the wrong SV type. Cheese ripening temporal series To demonstrate rhea's ability to explore interesting microbial evolutionary patterns within a microbiome over time, PacBio HiFi metagenomic sequences taken from a cheese rind over the course of ripening were used as input (Saak et al. 
2023).A total of five samples were included from sampling weeks 2, 3, 4, 9, and 13, creating four pairs of change (C1-4).Evaluating the assembly graph coverage visuals produced by rhea and Bandage (Wick et al. 2015), one connected component stood out for displaying significant graph complexity and diversity in coverage, implying a disproportionately large number of SVs.Rhea SV results indicated roughly 20% of SVs in the community to be contained in this subgraph (Fig. 4a).This connected component was then classified by GTDB-Tk under genus Halomonas and further exploration was pursued. First, the ability for viral and plasmid mobile genetic elements (MGEs) to show signatures in the Halomonas subgraph was evaluated.In the original publication for the cheese samples, MGE contigs and putative hosts were established via Hi-C sequencing technology and overlap read coverage with the viralAssociatePipeline (Bickhart et al. 2019) for sampling weeks 2, 4, and 13.Their results showed Halomonas to be host for 0, 6, and 17 MGE contigs, respectively.A BLAST (Altschul et al. 1990) comparison of all MGE contigs against the Halomonas subgraph, showed all putative Halomonas MGE contigs to display signatures in our Halomonas subgraph (hit with more than 5% query coverage), despite previous host connections being defined via Hi-C sequencing and our graph being constructed solely on long-read sequences.An additional 4, 2, and 3 MGE contigs showed signature in the Halomonas subgraph without having a previous description of a Halomonas host for the time point for each of the three included sampling weeks, respectively (Fig. 4b), which may be false positives or novel host discovery.Finally, one noteworthy section of the Halomonas subgraph was selected for gene function analysis (Fig. 4c).Here, a newly emerged path (displayed lower option) shows an increase in coverage over time up until stabilizing by week 9, suggesting an evolutionary advantage over the alternative path (top option).Gene function predictions returned by SeqScreen (Balaji et al. 2023) showed the newly dominating path to contain a type I restriction-modification system that was not expressed in the alternative sequence.This suggests an evolutionary advantage due to phage protection in the Halomonas strains, which is unsurprising given the increasing number of phage interactions detected throughout ripening for Halomonas.Exploratory analysis here demonstrates an additional feature of rhea, which permitted the extraction of genomic subsequences that suggest an evolutionary advantage, gained insight into MGE hosts, and helped infer microbial interactions. Hot spring microbial mat temperature series To assess an environmental sample with complex interactions, rhea was run on a temperature series of samples taken from the Mushroom Spring microbial mat in Yellowstone National Park, USA.Samples were collected from four different portions of the mat with temperatures 50 � C, 55 � C, 60 � C, and 65 � C. 
Figure 5a displays the number of SVs reported and the number of unique SV sequence-type pairs observed between successive temperature increments (C1: 50 � C to 55 � C, C2: 55 � C to 60 � C, and C3: 60 � C to 65 � C).For insertions and deletions, the number of SVs detected is roughly three orders of magnitude greater than the number of unique SV sequence-type pairs.This implies that either the same SV sequence and type tend to occurs in many different genomic locations throughout the community or SVs are falsely inflated by rhea due to graph complexity and high-degree nodes rhea i63 (i.e.nodes that are either repeated in different locations or conserved across divergent strains).This pattern is also observed for complex indels, where SVs are counted to be roughly four orders of magnitude greater than unique SV sequence-type pairs.As for SV sequence length, the majority of reported SVs were between 500 and 1000 bp (Fig. 5b). Previous research closely analyzed two Synechococcus isolates from these mats and showed a large number of diverse insertion sequence (IS) activity occurring within the two strains (Nelson et al. 2011).Our findings suggest very high levels of transposon activity, gene exchange, and uncharacterized strains that occur in microbial mats.Further research is needed to confirm these findings and characterize the gene functions relevant to the SVs to provide additional insight into extremophile evolution and adaptation.One sample (60 � C) was selected to assess the read inclusion rate of alternative workflows for this community rife in unknown microbes.To evaluate a reference-based taxonomic classification method, reads were classified by Kraken2 with default database, where 42% of the reads were left unclassified.To evaluate a MAG creation workflow, MAGs were created with MetaFlye contigs and MetaBat2 binning, where roughly 30% of reads did not map to a binned contig.With rhea, 13:5% of reads did not align back to the constructed co-assembly graph. Read to co-assembly graph mapping rates To evaluate the ability for the constructed co-assembly graph to incorporate all sequenced reads, the percent of reads that did not align to the co-assembly graph were recorded.For the two ONT simulations, 8.1% and 8.4% of reads did not align to their co-assembly graph.However, when restricting only to those reads that mapped back to their reference (based on NanoSim reported error), only 0.2% and 0.4% did not align to their co-assembly graph.For the real datasets, the PacBio HiFi cheese reads showed few reads to not align (0.6%) while error-prone ONT hot spring reads had far more (13.5%). Computational usage Table 1 reports the CPU and RAM usage for rhea experimental results.All software analysis was completed on a Ubuntu 22.04 LTS system with 15 threads.The /usr/bin/time command was used to gather time and memory statistics.Reported CPU (central processing unit) time was calculated by summing the user and the system time; RAM (random access memory) requirements were determined using the maximum resident set size. Discussion Rhea is a graph-based method for detecting structural variants (SVs) between consecutive samples in long-read metagenome series data.Rhea avoids reference databases and MAGs by analyzing structural motifs and change in alignment coverage on a combined co-assembly graph for SV detection of intraspecies variations, lower abundance genomes, and novel organisms. Long reads have been shown to improve the ability to detect SVs in isolate genomes (Ahsan et al. 
2023).This led us to develop rhea for long reads, yet the fundamental idea could likely be expanded to short reads with further experimentation.Specifically, the type of co-assembly graph constructed should be evaluated since repeat assembly graphs are optimized for long reads (Kolmogorov et al. 2020).While our results did not show a strong correlation between the SV length and rhea's ability to detect SVs (recall of 63% for SVs < 1000 bps in length, 62% for SVs 1000-1500 bps, and 83% for SVs >1500 bps), further evaluation is needed to determine if this holds true for a broader range of SV lengths. One benefit of rhea is the inclusion of more reads into SV analysis than MAG-or reference-based approaches.When using low-error PacBio HiFi reads, we found less than 1% of the reads to get discarded due to an inability to align to the graph.In our simulated ONT reads, all reads that contained too many errors to be mapped back to the reference were discarded, while only <0.5% of remaining reads were discarded.We thus posit that the majority of unaligned reads are likely to be high-error reads, while the remainder may be from contamination or extremely low abundant organisms and SVs. Currently, rhea is only able to detect insertion, deletion, tandem duplication, and complex indel SV types between two metagenomes of similar microbes.Since these are detected through simple triangles and squares on the coassembly graph, further development is required to permit detection of SVs over more complex regions of the graph and to reduce false positives of recurring SVs in graph regions with high-degree nodes.Detection could theoretically be expanded to inversions and translocations; however, we anticipate the need to maintain node directionality (whether the sequence is read forward or reverse) in the co-assembly graph.Rhea also decreases in its ability to detect SVs as the genome coverage decreases, and was unable to detect any SVs for genomes with less than 1x coverage.Further algorithm developments could help improve rhea for more sensitive detection in low abundance genomes. While rhea has so far only been evaluated for SV detection over the course of microbiome series data, the idea of constructing a co-assembly graph and comparing the coverage between samples could be expanded beyond series data and used for different types of studies, such as cohort comparison analyses.However, caution should be taken with regards to the similarity of microbes across samples.Rhea detects SVs when reads from different samples align to similar areas within a co-assembly graph.As the communities diverge, graph alignment overlap between samples is expected to decrease.Further testing is needed to determine which divergence levels are too extreme for rhea's algorithm.An additional consideration of cohort studies is the increased number of reads likely to be included in the co-assembly graph.As the graph may become too complex computationally, methods of downsampling sequences or alternate graph construction methods could be considered. An additional benefit of rhea is that its results contain input data for the interactive visual software package Bandage (Wick et al. 2015) for exploration of changes in graph coverage throughout a metagenome series.This tool provides researchers with an efficient method to investigate sequencelevel fluctuations while maintaining genome context, to ultimately extract sequences of interest as shown in Fig. 4c. 
In lieu of metagenome-specific methods, metagenomes are often analyzed with methods and models developed for genomic analyses.Yet this simplification overlooks inherent complexities of dynamic and interdependent microbial ecosystems (Brito 2021).By viewing these communities holistically and acknowledging their intricate co-evolution with rhea, we can pinpoint microbial heterogeneity and evolution of these diverse and interconnected ecosystems. Figure 1 . Figure 1.(a) Rhea takes a series of long-read metagenomic reads as input.Then, a co-assembly graph of all reads is created with metaFlye.Reads from each sample are then separately aligned to the co-assembly graph with minigraph.Rhea evaluates log fold change in coverage between series steps for SV-specific patterns in the assembly graph to detect SVs between steps.(b) Assembly graph patterns detected in rhea, which indicate insertions (INS), deletions (DEL), complex indels (CI), and tandem duplicates (TD).INS and DEL are detected by observing a triangle where one node has a significantly higher (INS) or lower (DEL) log fold change.CIs are noted by a square with one or two outliers; in the case of two outliers, the two outliers must be of opposing sides of the median and not have an edge between them.TDs are detected by a log fold change of a self-loop edge coverage greater than 1. al. 2020) with-keep-hap lotypes set to true, contigs were binned with MetaBat (Kang et al. 2019) v2.15 with default parameters, and bins were classified with GTDB-Tk.Bins with the same classification in both simulated samples were analyzed for SVs with MUM & Co (O'Donnell and Fischer 2020) v3 Figure 2 . Figure 2. (a) Simulated relative abundances for time points T0 and T1.T0 is a simulation of the 20 reference genomes in equal abundance; T1 is simulated from the 10 original Alphaproteobacteria species and the 10 mutated Betaproteobacteria species in varying abundances (b) Precision, recall, and F1-score for MetaCHIP (Song et al. 2019) and rhea detected insertions for the mock community with mutation rates 0 and 30.Time point T1 is used for MetaCHIP results; change from T0 to T1 is used for rhea. Figure 3 . Figure 3. (a) Relative abundance of long reads for two simulated time points (T0, T1) for each of our ZymoBIOMICS communities, one with even distribution (D6300) and the other with log distribution (D6310).Each of the 10 microbes were randomly given 20 indels, 10 tandem duplications, and 10 long complex indels to create a variant strain (Jeffares et al. 
2017).T0 contains only the original references (R); T1 introduces the variants (V), where, in our even distribution, half the species have variants in equal abundance to their original reference [Escherichia coli (EC), Lactobacillus fermentum (LF), Pseudomonas aeruginosa (PA), Salmonella enterica (SE), Cryptococcus neoformans (CN)], and half the species are dominated by their variants [Bacillus subtilis (BS), Enterococcus faecalis (EF), Listeria monocytogenes (LM), Staphylococcus aureus (SA), Saccharomyces cerevisiae (SC)].In our log distribution, only the variant strains are present.Expected genome coverage here is the expected read coverage across the entire length of the genome (total number of simulated from the reference � average read length/length of the reference genome).(b) Recall, precision, and F1-score for each of the SV types (Ins: insertion, Del: deletion, CI: complex indel, TD: tandem duplication) for both workflows in our even distribution.For the MAG workflow, MAGs were curated for T0 and T1 separately.Then, Mum & Co called SVs between T0 and T1 MAGs of matching taxonomic classification.(c) Combined recall for all SV types, separated by each species.For our even distribution, species are separated into two groups, signifying the presence of only the variant strain (single strain) or both the original and variant references (multistrain) at time point T1.For our log distribution, species are ordered by decreasing coverage. Figure 4 . Figure 4. (a) SV counts detected by rhea for pairs of consecutive samples throughout cheese ripening (C1-4) for the entire community (Full microbiome) and exclusively the extracted Halomonas subgraph (Halomonas).(b) Plot where each stacked horizontal bar represents one of the labeled mobile genetic element (MGE) contigs, per the original cheese evaluation manuscript, for three sampling time points (week 2, 4, and 13).Each bar is colored to signify if viralAssociationPipeline (vAP) determined Halomonas as a host for that contig (green for yes; red for no).A grey box is drawn around a select stack for bars for each sample, signifying the MGE contigs that had a BLAST hit of >5% query coverage to our Halomonas subgraph.(c) Rhea and Bandage generated visual for the log fold change in coverage for the Halomonas subgraph.Left shows the complete Halomonas subgraph between weeks 4 and 9 (C3), selected for showing a general decrease in abundance yet an increase in abundance for several subsequences.In this graph visualization, each rectangle represents a sequence node.A line between two nodes a and b represents the presence of read overlap from either node a to node b, or vice versa.Each node is colored to show the change in coverage from week 4 to week 9, where a darker red represents an increase and darker blue for a decrease.Right zooms in on a small portion of the subgraph, selected due to one path showing favoritism over other paths over time, where the log fold change in coverage graph is shown for each pair of consecutive time points (C1-4).The graph node marked with a � indicates a sequence node containing the predicted type I restriction-modification system. Figure 5 . Figure 5. (a) SV counts detected by rhea for pairs of consecutive temperature gradient samples in increasing order (C1-3).The left "Unique SV" plot counts each unique SV as one, where the right "Unique node-type pair" plot counts each unique node-type pair as one (i.e. 
the same SV sequence labeled as an insertion between multiple different pairs of nodes is counted as one). The "Unique SV" plot contains a broken y-axis to improve visibility. In this way of counting, complex indel insertions (CI-i) and complex indel deletions (CI-d) have the same values. The values for both tandem duplication categories, inserted duplicates (TD-i) and deleted duplicates (TD-d), are all under 1000 for both plots. (b) Histogram of the unique SV node-type pair lengths, colored by type, with the overflow bin set at 6 kb.
Table 1. CPU and RAM usage for rhea experiments.
2024-06-29T06:17:20.389Z
2024-06-28T00:00:00.000
{ "year": 2024, "sha1": "b566983b18b63805d932c10573eaf9f0c66642f3", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "687ed5f2a65874af9a70ba8657969ed175ddd5cf", "s2fieldsofstudy": [ "Biology", "Computer Science", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
249333443
pes2o/s2orc
v3-fos-license
Production, intake, and feeding behavior of dairy goats fed alfalfa via grazing and cassava - This study examined the replacement of maize and soybean meal with cassava chips and alfalfa grazing, respectively. Twelve lactating Anglo-Nubian goats were kept on a Panicum maximum cv. Tobiatã pasture. The experiment was laid out in a Latin square design in which the following diets were tested: ground maize + soybean meal, cassava chips + soybean meal, ground maize + alfalfa grazing, and cassava chips + alfalfa grazing. The evaluated variables were feed intake, daily weight gain, milk yield and composition, and feeding behavior of the goats as well as production costs. Cassava chips and grazed alfalfa influenced the intakes of dry matter, crude protein, neutral detergent fiber, and total digestible nutrients. However, milk yield, body weight, and body score did not change. There was no diet effect on the proportions of protein, solids-not-fat, somatic cell count, or urea nitrogen in the milk. Treatments influenced the levels of fat, lactose, and total solids in milk, with the highest fat levels achieved with diets containing alfalfa. Grazing, rumination, and idle times and time spent interacting with other goats were not influenced by diets. The evaluated feedstuffs improved feed efficiency and reduced production costs. Therefore, cassava chips and alfalfa can replace certain ingredients without impairing the production performance of goats, but rather improving the profit of the producer. Introduction Commercial production of goat milk and derivatives has grown in the southeastern region of Brazil. This expansion is due mainly to the establishment of dairy industries in these states and the growing marketing of such products, which have augmented competition and forced producers to become more efficient in the activity by increasing their revenue and/or reducing production costs. The use of pastures is an option to reduce feed expenses. Maize and soybean meal are traditionally used as concentrate feeds-an essential element for the nutritional care of dairy animals-, representing energy and protein components, respectively. For dairy goats, the fluctuation in the price of these products on the international market, in certain years, makes their use unfeasible. Therefore, including Production, intake, and feeding behavior of dairy goats fed alfalfa via grazing and cassava Marques et al. 2 cassava chips as an energy substitute and alfalfa grazing as protein source is a possible strategy to be adopted by small goat-milk producers. Alfalfa is a forage plant that contains high protein levels as well as the main minerals required by dairy herds. Additionally, it has the potential to increase the fat content of milk thanks to its long fibers that allow ruminal bacteria to produce large amounts of volatile fatty acids, precursors of milk fat. This roughage has high nutritional value, palatability, and digestibility, characteristics that provide an increase in intake and, consequently, in milk yield . Cassava is considered a feed of good nutritional value, which is similar to maize in terms of energy content. In addition to having a lower purchase price, this tuber is also widely produced in Brazil (Fernandes et al., 2016). When a farming system uses pasture as a roughage source, it also takes into account the feeding behavior of animals and assessment of pastures. 
This practice enables the adoption of appropriate management strategies aimed at improving dry matter intake and animal comfort and ultimately increase the efficiency of the dairy activity (Adami et al., 2013). In view of the lack of information on dairy goat farming on pasture involving the use of alternative feeds, the present study was developed to examine the influence of replacing maize and soybean meal in the concentrate with cassava chips and alfalfa grazing, respectively, on the milk yield and composition, intake, and feeding behavior of Anglo-Nubian goats in a rotational grazing system on Tobiatã grass pasture, as well as their production costs. According to the Köppen classification, the region has a Cwa climate type (mesothermal hot). The average annual precipitation is 1,479 mm (Cunha and Martins, 2009), and the period considered dry runs from May to September (30% of the annual precipitation). Twelve multiparous Anglo-Nubian goats at post-lactation peak, with an average body weight of 60 kg, were distributed into three balanced Latin squares (4 × 4), according to milk yield, to evaluate the replacement of maize and soybean meal in the concentrate with cassava chips and alfalfa grazing, respectively. The following diets were thus established: ground maize + soybean meal, cassava chips + soybean meal, ground maize + alfalfa (Medicago sativa cv. Crioula), and cassava chips + alfalfa. Tobiatã grass (Panicum maximum cv. Tobiatã) was offered as a roughage feed for all diet groups, and so was alfalfa for the maize + alfalfa and cassava + alfalfa groups. Goats fed the maize + soybean meal and cassava + soybean meal diets were kept on Tobiatã grass pasture from 07.00 to 18.00 h. Those fed the maize + alfalfa and cassava + alfalfa diets, in turn, remained on alfalfa pasture from 07.00 to 07.30 h and from 07.00 to 08.00 h, respectively (times calculated based on bite rate and pre-determined amount of alfalfa in the diet). Animals fed the cassava + alfalfa diet remained twice as long on alfalfa to compensate for the lower protein content of the cassava relative to maize in the maize + alfalfa diet. After grazing on alfalfa, animals were moved to the Tobiatã pasture, where they remained together with goats from the other diet groups until 18.00 h. Rotational grazing was implemented with a fixed stocking rate. After grazing, animals were gathered into individual stalls with suspended slatted floors. In the stalls, animals received the experimental concentrate and had water and mineral salt freely available. The area established with Tobiatã grass pasture was approximately 0.6 ha, whose average dry matter (DM) yield is 8,650 kg/ha/cut and which was divided into 10 paddocks of approximately 500 m 2 . The paddock occupation and rest periods were three and 27 days, respectively. Each paddock had an automatic drinker and a free-access rest area with artificial shade provided by a black mesh cloth, located in the corridor to the paddocks, which retained 75% of solar radiation. Production, intake, and feeding behavior of dairy goats fed alfalfa via grazing and cassava Marques et al. 3 To form the alfalfa pasture, seeds of Medicago sativa variety Crioula were used. The alfalfa pasture area was approximately 400 m 2 , with an average DM yield of 5,500 kg/ha/cut, which was divided into 10 paddocks of approximately 40 m 2 and used in the months of October and November. The paddock occupation and rest periods were three and 27 days, respectively. 
A bucket of water with a capacity of 20 L was provided daily in the occupied paddock. Cassava chips were made on the experimental farm using freshly harvested cassava roots (Manihot esculenta, Crantz) of the mixed varieties IAC 13 and IAC 15. The roots were washed, peeled, chopped to particles of approximately 2 to 3 cm, and dried in the sun until they reached 89% DM. Then, they were bagged and stored to be subsequently prepared for the experimental diets. The experimental diets were formulated according to the NRC (2007) to meet the nutritional requirements of lactating goats with a live weight of 60 kg and the potential to produce 2-3 kg of milk with 4% fat per day. The chemical composition of the diet ingredients ( The experiment had a duration of 60 days, which were divided into four periods of 15 days, consisting of 10 days of adaptation to the diet and five days of data collection. Body weight and body condition score were determined at the beginning of each period and at the end of the experiment, always after the afternoon milking. Body condition score was assessed using a scale of 0 to 5, by palpating the lumbar region (Ribeiro, 1997). Voluntary intake of concentrate was calculated by the difference between the feed supplied and orts, during the five days of data collection. Forage intake was determined based on the estimates of fecal output, associated with the internal marker iNDF (indigestible neutral detergent fiber). To estimate the fecal output, the external marker chromium oxide (Cr 2 O 3 ) was administered orally, using gelatin capsules. The capsules contained 2.5 g of Cr 2 O 3 and were administered at 18.30 h over 10 days, the first five of which were used to stabilize the concentration of Cr 2 O 3 in feces and the last five for fecal collection. Feces were collected in plastic pots at the time of defecation, at 06.00 and 18.30 h. The concentration of marker in feces samples was determined in the laboratory using a colorimeter (Thermo Scientific ® , model Evolution 60S -UV -Visible Spectrophotometer), after digesting the samples in nitric and perchloric acids, following a methodology adapted from Bremer Neto et al. (2005). Production, intake, and feeding behavior of dairy goats fed alfalfa via grazing and cassava Marques et al. 4 Fecal dry matter output (FDMO) was estimated as the ratio between the amount of marker supplied (AM supplied) and its concentration in feces (AM feces), according to the formula below: To estimate the voluntary intake of DM, iNDF was used as an internal marker by in situ incubation of feed and feces samples in the rumen of fistulated goats, for 144 h (Berchielli et al., 2005). Samples were placed in nylon bags with 50 μm porosity, following the standardized technique mentioned by Vanzant et al. (1998). Each fistulated goat was adapted to one of the diets for 14 days, and bags of the respective treatment were incubated in each animal. After incubation, the bags were washed in a tank system with a propeller agitator, where water was renewed until it became colorless. Next, the material was dried in a forced-air oven at 55 °C for 72 h. Then, NDF analyses were performed according to the methodology proposed by Van Soest et al. (1991). Voluntary DM intake was calculated using the equation proposed by Detmann et al. 
(2001):

DMI (kg/day) = [(FDMO × iNDFFe − iNDFIC) / iNDFFo] + CDMI (2)

in which DMI = dry matter intake; FDMO = fecal dry matter output (kg/day); iNDFFe = concentration of iNDF in feces (kg/kg); iNDFIC = iNDF intake from concentrate (kg/day); iNDFFo = concentration of iNDF in the forage (kg/kg); and CDMI = concentrate dry matter intake (kg/day). Goats were milked twice daily, at 06.00 and 18.00 h, using a mechanical milking machine in the milking parlor. Milk yield was determined on the last five days of each experimental period, by weighing the milk on a digital scale with a capacity of 15 kg and divisions of 5 g. Milk yield was estimated as the average production of the five test days with correction for 3.5% fat (3.5%FMY), using the formula of Gaines (1928) as suggested by the NRC (2001):

3.5%FMY (kg/day) = 0.4324 × milk yield (kg/day) + 16.216 × fat yield (kg/day) (3)

To determine the components of milk, individual samples proportional to the milk production of the morning (1/2) and afternoon (1/2) milkings of the first test day were collected and packed in 30-mL plastic tubes containing the preservative bronopol (2-bromo-2-nitropropane-1,3-diol). In these samples, contents of fat, protein, lactose, total solids, solids-not-fat (SNF), and urea nitrogen (MUN), as well as the somatic cell count (SCC), were determined using a Bentley® 2000 instrument. For the assessment of feeding behavior, animals were observed every 10 min by visual observation, for three consecutive days of paddock occupation, in each experimental period. For this procedure, goats were identified with colored collars. Times spent grazing, ruminating, idling, and interacting with other goats were recorded. Diet production costs were computed using the following formulas:

Total feed cost (BRL) = [Intake (kg DM) × Cost (kg DM)] × Lactation period (days) (4)
Cost per liter of milk (BRL) = Total feed cost (BRL) / Total milk production (kg) (5)
Gross revenue (BRL) = Total milk production (kg) × Price received (BRL) (6)
Net revenue per liter of milk (BRL) = Price received (BRL) − Cost per liter of milk (BRL) (7)
Total net revenue per goat (BRL) = Total milk production (kg) × Net revenue per liter of milk (BRL) (8)

In the calculation of the cost of a kilogram of forage DM, the costs of labor and inputs for the formation and maintenance of the pastures were taken into account. The Latin square design was applied for the traits intake, body weight and body condition score, milk yield and composition, and feeding behavior, according to the model:

Y_ijkl = u + S_i + p_j(i) + c_k(i) + T_l + (T × S)_li + e_ijkl

in which Y_ijkl = trait observed in goat k in period j, treatment l, and square i; u = average of the trait; S_i = effect of square i (i = 1, 2, and 3); p_j(i) = effect of period j within square i (j = 1, 2, 3, and 4); c_k(i) = effect of goat k within square i (k = 1, 2, 3, and 4); T_l = effect of treatment l (l = 1, 2, 3, and 4); (T × S)_li = interaction effect between treatment l and square i; and e_ijkl = random error associated with observation Y_ijkl. Data were subjected to analysis of variance using SAS statistical software (Statistical Analysis System, version 9.0). In all analyses, the significance level adopted was α = 0.05.
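To make the preceding calculations concrete, the short Python sketch below chains Equations 1 to 8 together. All numerical inputs are hypothetical values chosen only for illustration; they are not data from this experiment.

# Hypothetical illustration of the intake and cost calculations (Eq. 1-8).
# Every numeric input below is invented for demonstration only.

# --- Marker-based intake estimation (Eq. 1 and 2) ---
marker_supplied = 2.5 / 1000        # kg of Cr2O3 given per day (2.5 g capsule)
marker_conc_feces = 0.004           # kg of Cr2O3 per kg of fecal DM (hypothetical)
fdmo = marker_supplied / marker_conc_feces      # fecal DM output, kg/day (Eq. 1)

indf_feces = 0.45       # iNDF concentration in feces, kg/kg (hypothetical)
indf_concentrate = 0.05 # iNDF intake from concentrate, kg/day (hypothetical)
indf_forage = 0.25      # iNDF concentration in the forage, kg/kg (hypothetical)
cdmi = 0.9              # concentrate DM intake, kg/day (hypothetical)
dmi = (fdmo * indf_feces - indf_concentrate) / indf_forage + cdmi   # Eq. 2

# --- Fat-corrected milk yield (Eq. 3) ---
milk_kg = 2.4                 # daily milk yield, kg (hypothetical)
fat_kg = milk_kg * 0.041      # daily fat yield, kg (4.1% fat, hypothetical)
fmy_35 = 0.4324 * milk_kg + 16.216 * fat_kg

# --- Feed cost and revenue (Eq. 4-8) ---
cost_per_kg_dm = 0.85         # BRL per kg of dietary DM (hypothetical)
lactation_days = 210          # hypothetical lactation length
price_per_kg_milk = 2.50      # BRL received per kg of milk (hypothetical)

total_milk = milk_kg * lactation_days
total_feed_cost = dmi * cost_per_kg_dm * lactation_days     # Eq. 4
cost_per_liter = total_feed_cost / total_milk               # Eq. 5
gross_revenue = total_milk * price_per_kg_milk              # Eq. 6
net_per_liter = price_per_kg_milk - cost_per_liter          # Eq. 7
total_net_per_goat = total_milk * net_per_liter             # Eq. 8

print(f"FDMO = {fdmo:.2f} kg/day, DMI = {dmi:.2f} kg/day, 3.5%FMY = {fmy_35:.2f} kg/day")
print(f"Feed cost/L = {cost_per_liter:.2f} BRL, net revenue/goat = {total_net_per_goat:.0f} BRL")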
Results

There was an effect of diets on DM intake (Table 3). When maize was used as the energy feedstuff (maize + soybean meal and maize + alfalfa), the presence of alfalfa reduced DM intake, which led to lower intakes of CP, NDF, and TDN. When cassava was used (cassava + soybean meal and cassava + alfalfa), the same behavior occurred, except for TDN intake, which was similar. With soybean meal as the protein feedstuff (maize + soybean meal and cassava + soybean meal), the group fed the diet containing cassava showed a reduction in DM intake and, consequently, in the intake of the other nutrients. The use of alfalfa as a protein ingredient (maize + alfalfa and cassava + alfalfa) did not change the intakes of DM, NDF, or TDN, but the group fed cassava + alfalfa exhibited a lower CP intake. The goats on the cassava + alfalfa and maize + alfalfa diets showed the lowest bite rates on Tobiatã grass. The other feeding behaviors evaluated were not influenced by the diets (Table 4). The differences in DM intake between diets were not sufficient to influence weight gain or the body condition score of the animals (Table 5). When alfalfa was present in the diets, feed efficiency (FE) and FE corrected for 3.5%FMY were better than those of the control diet. Goats fed the cassava + alfalfa diet produced milk with a higher total solids content than those fed cassava + soybean meal, due to the higher fat content in the milk of the former. Goats on the cassava + alfalfa diet also had higher fat and lactose contents in their milk than those fed maize + soybean meal. The fat content was higher in the milk from the goats receiving the cassava + alfalfa treatment than in the milk from those receiving the maize + soybean meal and cassava + soybean meal treatments, which did not eat alfalfa; it did not differ from that of the group fed the maize + alfalfa treatment. Lactose was higher in the milk from goats fed cassava + alfalfa than in the milk from the animals fed maize + soybean meal.

Discussion

The intake results indicate that both the inclusion of cassava and the inclusion of alfalfa reduced DM intake. This reduction in intake is more accentuated when the cassava + alfalfa diet, in which cassava and alfalfa were included simultaneously, is compared with the maize + soybean meal and cassava + soybean meal diets (Table 3). Cassava is a feedstuff known to reduce DM intake due to its lower palatability and its powderiness, a light and fine dust that bothers the animals during consumption (Silva et al., 2012). In the present study, there was no difference in concentrate DM intake between the groups fed diets with the same protein base (maize + soybean meal vs. cassava + soybean meal; and maize + alfalfa vs. cassava + alfalfa), but there were differences in total DM intake (Table 3). The high intake of DM from Tobiatã grass in the maize + soybean meal treatment group (Table 3) may be related to the higher intake of concentrate, which stimulated forage intake as an additive effect (Adami et al., 2013). The lower total DM intake of the diets with alfalfa with the same energy base (maize + soybean meal vs. maize + alfalfa; and cassava + soybean meal vs. cassava + alfalfa) (Table 3) can be attributed to the management strategy adopted in the experiment. After receiving 100 g of concentrate during the morning milking, goats were sent to the paddocks of Tobiatã grass (maize + soybean meal and cassava + soybean meal) or alfalfa (maize + alfalfa and cassava + alfalfa). Because the goats were conditioned to graze on alfalfa for some time, coupled with its good acceptability (Comeron et al., 2015), they ingested this forage quickly and, later, when put to graze on Tobiatã grass, they reduced their intake due to momentary rumen fill. This, combined with the fact that alfalfa is richer in nutrients than Tobiatã grass, allowed the animals to meet their nutritional requirements with less DM intake.
This behavior was confirmed by the lower bite rate observed on Tobiatã grass in goats fed the maize + alfalfa and cassava + alfalfa diets (Table 4), demonstrating that the previous grazing on alfalfa might have reduced the avidity for grazing on Tobiatã grass. The fact that goats in the maize + alfalfa and cassava + alfalfa groups first grazed alfalfa during the early hours of the day did not influence the time spent grazing, ruminating, idling, or interacting with other goats (Table 4). Nevertheless, the animals that grazed on alfalfa (maize + alfalfa and cassava + alfalfa) had a lower bite rate than those that did not. Although the diets did not influence weight gain or body condition score, the animals in all diets lost weight and body condition (Table 5). This is because goats usually lose weight and body condition in the first months of lactation due to a DM intake that has not yet reached its maximum. During this period, animals mobilize body reserves to meet their needs (Barbosa et al., 2016). The results found in this study indicate that the inclusion of cassava and alfalfa does not interfere with the yield, protein, SNF, MUN, or SCC of milk from grazing goats (Table 6). These findings agree with Comeron et al. (2015), who did not observe differences in the milk yield of cows receiving alfalfa when compared with their feedlot counterparts. Mouro et al. (2002) and Lourençon et al. (2013) also obtained similar results after replacing maize with cassava sweep meal for Saanen goats and with cassava chips for Alpine goats, respectively. However, alfalfa inclusion in the diets provided a better FE than the control diet (Table 6), demonstrating that these diets induced similar milk production with less feed intake. Milk protein levels were similar between the diet groups (Table 6), suggesting a similar supply of amino acids for the mammary gland to synthesize milk protein. This result is in line with those reported by Peres Netto et al. (2011) in the milk of cows receiving alfalfa when compared with feedlot cows, and by Mouro et al. (2002) and Lourençon et al. (2013), who did not observe an effect of replacing maize with cassava sweep meal and cassava chips, respectively. The higher fat content found in the milk of the goats fed the cassava + alfalfa diet, in comparison with those that did not consume alfalfa, can be explained by the better synchronism between the energy from the cassava starch, which ferments immediately due mainly to its high amylopectin content (Wanapat and Kang, 2015), and the protein of alfalfa, which is rapidly degraded in the rumen (Comeron et al., 2015). This is coupled with the good-quality fiber of alfalfa, which provided a ruminal environment favorable to a large production of acetic acid, a direct precursor of 50% of milk fat (Santos et al., 2001). Peres Netto et al. (2011) did not observe differences in the fat content of milk from cows fed diets with alfalfa. Mouro et al. (2002) and Lourençon et al. (2013) described similar findings in goats receiving diets in which maize was replaced with cassava sweep meal and cassava chips, respectively.
The lactose content was higher in the milk from the animals fed cassava + alfalfa than in the milk from those fed maize + soybean meal (Table 6), although it is considered the most stable component of milk and responsible for regulating osmotic pressure in the mammary gland (Madureira et al., 2017). Lactose is formed from glucose, which is synthesized from propionic acid, derived from the rumen fermentation of dietary carbohydrates (Madureira et al., 2017). Thus, the higher production of propionic acid in the milk from the goats fed the cassava + alfalfa diet, which resulted in the highest lactose content of milk, can be explained by the greater degradability of cassava relative to maize, which is due to the lack of pericarp, horny and peripheral endosperm, and protein matrix (Silva et al., 2015). Milk urea nitrogen averaged 17.65 mg/dL, with no difference occurring between the diets (Table 6). This value is within the range of 11.9 to 67.5 mg/dL deemed adequate for the goat species (Rapetti et al., 2014) for maximum use of the dietary nitrogen. Higher MUN values, as found by Peres Netto et al. (2008) in cows that consumed alfalfa, were attributed to ruminal degradation of the protein from fresh forages, especially legumes, which is high and normally exceeds the microbial requirements of ammonia. These excesses are eliminated in the form of urea by the kidneys and mammary gland. Somatic cell count is highly influenced by biological and environmental factors. Thus, in comparing results, one must be very careful by always taking into account the animal breed, physiological stage, and, mainly, its species (Madureira et al., 2017). Goats show apocrine-type milk secretion, in which part of the alveolar epithelial cells is eliminated together with milk. Because these cells are anucleated, they are counted as leukocytes in the standardized SCC test for cattle, generating high SCC values in goat milk (Madureira et al., 2017). The average 2.93 cells/mL obtained in this study (Table 6) is close to the 2.80 and 3.07 cells/mL reported by Lopez et al. (2020) and Marques et al. (2016) in goats, respectively. The lower production cost per liter of milk produced and consequent higher net revenues in the treatments with the inclusion of alfalfa are due mainly to the fact that these diets have the lowest costs (Table 7) and provide the best FE (Table 6). These findings agree with Comeron et al. (2015), who evaluated milk production costs of cows fed alfalfa in comparison with those reared in a feedlot. These results showed that the diets with alfalfa (maize + alfalfa and cassava + alfalfa) provided the best effects on milk quality and FE and the highest net revenue per liter of milk, generating more profit for the producer. Conclusions Cassava chips and alfalfa grazing can replace maize and soybean meal in the concentrate of the diet of lactating adult goats without changing their feeding behavior or milk yield. Rather, this replacement improves feed efficiency and increases milk fat and lactose levels. Diets containing alfalfa have the lowest production costs per liter of milk.
Topological rigidity in totally disconnected locally compact groups In \cite{Kramer11} Kramer proves for a large class of semisimple Lie groups that they admit just one locally compact $\sigma$-compact Hausdorff topology compatible with the group operations. We present two different methods of generalising this to the group of rational points of an absolutely quasi-simple algebraic group over a non-archimedean local field (the second method only achieves this under the additional hypothesis that the group is isotropic). The first method of argument involves demonstrating that, given any topological group $G$ which is totally disconnected, locally compact, $\sigma$-compact, locally topologically finitely generated, and has the property that no compact open subgroup has an infinite abelian continuous quotient, the group $G$ is topologically rigid in the previously described sense. Then the desired conclusion for the group of rational points of an absolutely quasi-simple algebraic group over a non-archimedean local field may be inferred as a special case. The other method of argument involves proving that any group of automorphisms of a regular locally finite building, which is closed in the compact-open topology and acts Weyl transitively on the building, has the topological rigidity property in question. This again yields the desired result in the case that the group is isotropic. In [2] Kramer explores the question of when a semisimple Lie group G admits just one locally compact σ-compact Hausdorff group topology, or, to put it another way, when it is the case that any abstract isomorphism ϕ : Γ → G whose domain Γ is a locally compact σ-compact Hausdorff topological group is necessarily a homeomorphism. He proves that this is the case for a connected centreless real semisimple Lie group for which all the simple ideals in the Lie algebra are absolutely simple (in the sense that taking the tensor product of the real Lie algebra with C yields a simple complex Lie algebra). In this paper we wish to discuss two different ways by which this result can be generalised to the group of rational points of an absolutely quasi-simple algebraic group over a non-archimedean local field. The following two results play a key role in Linus Kramer's argument regarding the Lie group case: a Borel measurable homomorphism between locally compact σ-compact Hausdorff topological groups is continuous, and a continuous bijective homomorphism between locally compact σ-compact Hausdorff topological groups is open. These two results yield an important technique for verifying the topological rigidity result for a specific locally compact σ-compact Hausdorff topological group G. Suppose that we have some base of open neighbourhoods of the identity in G, and we are able to prove for each member of the base that it is unconditionally Borel, that is, that it is a Borel set with respect to any locally compact σ-compact Hausdorff topology compatible with the group operations on G. Then we may conclude that G is topologically rigid in the previously described sense. For suppose that we are given some abstract isomorphism ϕ : Γ → G where the domain Γ is a locally compact σ-compact Hausdorff topological group. Then for each member K of the base of open neighbourhoods of the identity in question, we have that ϕ^{-1}(K) is Borel. From this it follows that ϕ is Borel measurable. We can now conclude from the two results given above that it is continuous and open, and therefore a homeomorphism. The conclusion that an abstract isomorphism ϕ of the kind described is always a homeomorphism is another way to state the topological rigidity result, so this completes the argument. We now come to the proof of our first main theorem.
Theorem 0.4. Let G be a totally disconnected locally compact σ-compact topological group that is locally topologically finitely generated and has the property that no compact open subgroup has an infinite abelian continuous quotient. Then G admits just one locally compact σ-compact Hausdorff topology compatible with the group operations. Proof. Call the given topology on G the "standard topology". Suppose that there were an exotic topology on G, different to the standard topology, that was locally compact, σcompact, Hausdorff, and compatible with the group operations. This exotic topology would have to be also totally disconnected. For otherwise some compact open subgroup K in the standard topology would have to have a subgroup which was a countable-index subgroup of a pro-Lie group H. But then by considering the image under the exponential map of a one-dimensional subalgebra of the pro-Lie algebra h of H, we get that H has a subgroup abstractly isomorphic to a one-dimensional Lie group, and so K has a subgroup abstractly isomorphic to a countable-index subgroup of a one-dimensional Lie group. In particular it has a subgroup isomorphic to Q, but Q has no proper finite-index subgroups so this is a contradiction. So by means of this argument we may assume that the exotic topology is totally disconnected, and therefore the compact open subgroups form a base of open neighbourhoods of the identity, by von Dantzig's theorem. Let K be a compact open subgroup of G in the exotic topology and let H be a compact open subgroup of G in the standard topology. H is topologically finitely generated, locally topologically finitely generated, and has no infinite virtually abelian continuous quotients, and so, in particular, by a result of Nikolov and Segal [4], it has no countably infinite abstract quotient, and therefore no subgroup of countably infinite index. Given a choice set for the left cosets of K in G and the left cosets of H in G, the Cartesian product of these two choice sets may be used to determine a choice set for the left cosets of K ∩ H in G. It follows that K ∩ H has countable index in both K and H, and therefore has finite index in H. By the results of [3] a finite-index subgroup of H is open in the standard topology, and also strongly complete. Since K ∩ H is strongly complete it follows that it is a closed subgroup of K in the exotic topology, so the exotic topology and the standard topology on K ∩ H are both locally compact σ-compact Hausdorff topologies compatible with the group operations. However, since K ∩H is topologically finitely generated, the verbal subgroups of K ∩H form a base for K ∩ H in the standard topology, again by [3], and they are unconditionally σ-compact and therefore unconditionally Borel. Hence we may apply the remark we made before the start of the proof to conclude that K ∩ H is topologically rigid. So the exotic topology agrees with the standard topology on K ∩ H. Since K ∩ H is closed in K in the exotic topology and has countable index in K it must have finite index in K. Now the exotic topology on K agrees with the standard topology, and K is a compact open subgroup in the exotic topology, so the exotic topology must agree with the standard topology on all of G. Next we examine the consequences for the group of rational points G(k) of an absolutely quasi-simple algebraic group G over a non-archimedean local field k. 
It follows from the classification of the semisimple algebraic groups over local fields [6] that we may choose a global field K contained in k such that G is defined over K and k is the completion of K at a valuation v. Let S be a finite set of places of K containing all the Archimedean ones, containing a non-archimedean place different from v but not containing v, and such that G(O_S) has rank at least two, where by the rank of G(O_S) we mean the sum of the ranks of G(K_{v′}) as v′ ranges over a set of valuations representing each place in S. By strong approximation, G(O_S) is dense in the compact open subgroup G(O_k). If U ⊆ G(k) is any compact open subgroup of G(k), then the intersection Γ := G(O_S) ∩ U has finite index in G(O_S). Furthermore, since G(O_S) has rank at least two, it is finitely generated. Thus Γ is finitely generated and dense in U, so that U is topologically finitely generated. Thus we have shown that the groups G(k) of the kind described are locally topologically finitely generated. Also, by the Margulis normal subgroup theorem, G(O_S) does not have an infinite abelian quotient, and therefore the group U does not have an infinite abelian continuous quotient. I am grateful to Tyakal Nanjundiah Venkataramana for suggesting this argument. Thus we obtain, as a corollary to the previous theorem,

Theorem 0.5. Let G be an absolutely quasi-simple algebraic group over a non-archimedean local field k. Then the group of rational points G(k) admits exactly one locally compact σ-compact Hausdorff topology compatible with the group operations.

Next we present an argument showing topological rigidity for closed Weyl transitive groups of automorphisms of a regular locally finite building.

Definition 0.6. A pair (W, S) such that W is an abstract group and S is a set of generators of W of order two is said to be a Coxeter system if W admits the presentation ⟨S | (st)^{m(s,t)} = 1⟩, where m(s, t) is the order of st and there is one relation for each pair s, t with m(s, t) < ∞.

Definition 0.7. Suppose that (W, S) is a Coxeter system. A building of type (W, S) is a pair (C, δ) consisting of a nonempty set C, whose elements are called chambers, together with a map δ : C × C → W called the Weyl distance function, such that for all C, D ∈ C with δ(C, D) = w, the following three conditions hold: (1) w = 1 if and only if C = D; (2) if C′ ∈ C satisfies δ(C′, C) = s ∈ S, then δ(C′, D) = sw or w, and if in addition ℓ(sw) = ℓ(w) + 1 then δ(C′, D) = sw, where ℓ denotes the word length on W with respect to S; (3) for any s ∈ S there is a chamber C′ ∈ C such that δ(C′, C) = s and δ(C′, D) = sw.

Definition 0.8. Suppose that a group G acts by isometries (that is, bijections preserving the Weyl distance function) on a building ∆ of type (W, S). The action is said to be Weyl transitive if, for each fixed w ∈ W, it is transitive on the set of ordered pairs of chambers (C, D) such that δ(C, D) = w.

Definition 0.9. Two chambers C, D in a building of type (W, S) are said to be adjacent if and only if δ(C, D) = s for some s ∈ S. A building is said to be locally finite if the number of chambers adjacent to any given chamber is finite. A building is said to be regular if, for each s ∈ S, the number of chambers D such that δ(C, D) = s is the same for all chambers C.

Definition 0.10. Suppose that ∆ is a regular locally finite building of type (W, S). The compact-open topology on the full automorphism group G of ∆ is the topology for which the class of all pointwise stabilisers of finite sets of chambers is a base of open neighbourhoods of the identity.

Theorem 0.11. Let ∆ be a regular locally finite building of type (W, S), and let H be a group of automorphisms of ∆ that is closed in the compact-open topology and acts Weyl transitively on ∆. Then H admits just one locally compact σ-compact Hausdorff topology compatible with the group operations, namely the compact-open topology.

Proof. Suppose for a contradiction that the group H admitted an exotic topology, other than the compact-open topology, that was locally compact, σ-compact, Hausdorff, and compatible with the group operations.
First we shall prove that the pointwise stabiliser of any finite set of chambers must be dense in the exotic topology. Let C be a fixed chamber and let H_C denote the stabiliser of C in H. From the Weyl transitivity of the action of H we can infer that H_C is either closed or dense in the exotic topology on H. We argue this point as follows. Suppose that the closure of H_C in the exotic topology, which we shall denote by cl(H_C), were strictly larger than H_C. Then we would have an element h ∈ cl(H_C) with the property that C ≠ D := h(C). Since cl(H_C) is a subgroup of H, it would then follow that H_D ⊆ cl(H_C). Hence cl(H_C) would contain the orbit of h^{-1} under conjugation by H_D. Thus it would contain elements mapping D to any chamber at the same Weyl distance from D as C. In particular it would contain an element mapping D to at least one chamber adjacent to C. Thus it follows that cl(H_C) would contain an element mapping C to some chamber adjacent to C, and therefore would contain elements mapping C to every chamber adjacent to C. Now it follows by induction that cl(H_C) is equal to all of H. We have established that any chamber stabiliser is either closed or dense in the exotic topology on H. Now we wish to show that the pointwise stabiliser of any finite set of chambers is dense in the exotic topology on H. We can see this as follows. Suppose that F is a finite set of chambers. The closure of the pointwise stabiliser of F in the exotic topology on H would have to have finite index in H. In particular, if we let C be a chamber such that C ∈ F, then the orbit of C under the closure of the pointwise stabiliser of F would have to be unbounded, and the closure of the pointwise stabiliser of F would have to contain the pointwise stabiliser of each one of a family of finite sets of chambers containing, respectively, the elements of the unbounded orbit in question. This shows that the closure must be all of H. (What we have just done is equivalent to proving that H cannot have any proper finite-index subgroups.) It can be seen that the exotic topology on H must be totally disconnected. For if not, then H would have a subgroup isomorphic to Q, but this is not possible for a group which acts faithfully on a regular locally finite building. So it follows, by van Dantzig's theorem, that the compact open subgroups of H in the exotic topology form a base of open neighbourhoods of the identity in the exotic topology. Now let K be a compact open subgroup of H in the exotic topology, and let H_C be the stabiliser in H of a chamber C. H_C ∩ K has countable index in both H_C and K. Since every pointwise stabiliser of a finite set of chambers is dense in the exotic topology, it follows that H_C ∩ K is a dense subgroup of H_C in the compact-open topology. Hence H_C ∩ K acts transitively on the chambers at any fixed Weyl distance from C. It follows that the action of K on ∆ is Weyl transitive, and, by reasoning similar to that above, that K has no proper finite-index subgroups; but this is a contradiction. We conclude that it is not possible for the exotic topology to be different from the compact-open topology. This completes the proof of the result.
“Does it Matter When I Think You Are Lying?” Improving Deception Detection by Integrating Interlocutor’s Judgements in Conversations It is well known that humans are not good at deception detection because of a natural inclination toward truth-bias. However, during a conversation, when an interlocutor (interrogator) is asked explicitly to assess whether his/her interacting partner (deceiver) is lying, this perceptual judgment depends highly on how the interrogator interprets the context of the conversation. While deceptive behaviors can be difficult to model due to their heterogeneous manifestation, we hypothesize that this contextual information, i.e., whether the interlocutor trusts or distrusts what his/her partner is saying, provides an important condition under which the deceiver's deceptive behaviors are more consistently distinct. In this work, we propose a Judgmental-Enhanced Automatic Deception Detection Network (JEADDN) that explicitly considers the interrogator's perceived truths-deceptions together with three types of speech-language features (acoustic-prosodic, linguistic, and conversational temporal dynamics features) extracted during a conversation. We evaluate our framework on a large Mandarin Chinese deception dialog database. The results show that the method significantly outperforms the current state-of-the-art approach without conditioning on the judgements of interrogators on this database. We further demonstrate that the behaviors of interrogators are important in detecting deception when the interrogators distrust the deceivers. Finally, with the late fusion of audio, text, and turn-taking dynamics (TTD) features, we obtain promising results of 87.27% and 94.18% accuracy under the conditions that the interrogators trust and distrust the deceivers, an improvement of 7.27% and 13.57%, respectively, over the model that does not consider the interlocutor's judgements. Introduction Deception behaviors frequently appear in human daily life, such as in politics (Clementson, 2018), news (Conroy et al., 2015a; Vaccari and Chadwick, 2020), and business (Grazioli and Jarvenpaa, 2003; Triandis et al., 2001). Despite their frequent occurrence, researchers have repeatedly shown that humans are not good at detecting deceptions (about 54% accuracy on average for both police officers and college students; Vrij and Graham, 1997), even highly skilled professionals such as teachers, social workers, and police officers (Hartwig et al., 2004; Vrij et al., 2006). Due to the difficulty of identifying deceptions by humans, researchers have also developed automatic deception detection (ADD) systems applied in various fields, such as cybercrime (Mbaziira and Jones, 2016), fake news detection (Conroy et al., 2015b), employment interviews (Levitan et al., 2018b,a), and even court decisions (Venkatesh et al., 2019; Pérez-Rosas, Verónica and Abouelenien, Mohamed and Mihalcea, Rada and Burzo, Mihai, 2015). Although many works have studied approaches to automatic deception detection, few works, if any, have investigated whether human judgements can provide a condition that enhances ADD recognition rates. In recent years, ADD has gained popularity and attention; however, almost all studies (if not all) on ADD focus on western cultures (countries), and there is very little literature that focuses on eastern cultures (countries). Deception behavior often varies across cultures (Aune and Waters, 1994), and every culture has its own way of deceiving others.
Additionally, Rubin (2014) suggested that researchers need to study and understand more deception behaviors in Asia. Furthermore, many researchers have utilized various behavioral cues to build ADD systems, such as facial expressions (Thannoon et al., 2019), internal physiological measures (Ambach and Gamer, 2018), and even functional brain MRI (Kozel et al., 2009a,b). While these indicators can be useful for detecting deceptions, many of them require expensive and invasive instrumentation that is not practical for real-world applications. Instead, speech and language carry substantial deceptive cues that can be modeled in ADD tasks for potential large-scale deployment (Zhou et al., 2003). Hence, the proposed method models the speech and language cues of humans with real-world data in Mandarin Chinese. Despite these important advances in understanding and automatically identifying deceptions, there has been little work investigating whether the performance of ADD models can be significantly improved by considering the behaviors and perceptions of interrogators. Several questions remain: is there a difference in the linguistic and acoustic-prosodic characteristics of utterances from both interlocutors given trusted/distrusted judgments of interrogators? How do the judgments of interrogators help the ADD model detect deceptions? To investigate these questions, we first follow previous studies to segment a dialog into Questioning-Answering (QA) pair turns and then extract acoustic-prosodic features, linguistic features (e.g., Part-Of-Speech tags (POS), Named Entity Recognition (NER), and Linguistic Inquiry and Word Count (LIWC)), and conversational temporal dynamics (CTD) features. Then, we train machine learning and deep learning classifiers using a large set of lexical and speech features to automatically identify deceptions and evaluate the results on the Daily Deceptive Dialogues corpus of Mandarin (DDDM). Also, to investigate the differences between the interlocutors' behaviors, we perform Welch's t-test (Delacre et al., 2017) on the characteristics of utterances from both interlocutors under three different scenarios: (A) human-distrusted deceptive and truthful statements, (B) human-trusted deceptive and truthful statements, and (C) successful/unsuccessful deceptive and truthful statements. In our further analyses, we found that (i) human judgments are indeed helpful in significantly improving the performance of the proposed method in detecting deceptions, (ii) the behaviors of interrogators should be incorporated into the model when the interrogator distrusts the deceivers, and (iii) additional evidence indicates that humans are poor at detecting deceptions: there are very few significant indicators that overlap between trusted truths-deceptions and successful-unsuccessful deceptions. We believe that these overlapping indicators could be useful for training humans to detect deceptions more successfully. Finally, we summarize our three main contributions as follows. • We are the first work to include the judgements of the interrogator as a condition to help improve the recognition rates of deception detection models. • We demonstrate that the features of interrogators are more effective and useful for detecting deceptions than those of the deceivers under the condition that the interrogator disbelieves the deceiver. • The proposed model has high potential for practical deception detection applications and impact on the ADD area.
Related Work Automatic deception detection in a dialogue Previous studies have trained deception detectors with various features in a dialog. Levitan et al. (2018a) extracted acoustic features of utterances to build a detection framework using a global-level label as the ground truth in employment interviews. Subsequent work indicated that the interlocutor's vocal characteristics and conversational dynamics should be jointly modeled to better perform deception detection in dialogues. Grammatical and syntactic POS features have been widely used in automatic deception detection (Pérez-Rosas, Verónica and Abouelenien, Mohamed and Mihalcea, Rada and Burzo, Mihai, 2015; Levitan et al., 2016; Abouelenien et al., 2017; Kao et al., 2020). In addition, Liu et al. (2012) and Levitan et al. (2018b) modeled language use with LIWC features. Dando and Bull (2011) and Sandham et al. (2020) found that police officers can be trained to identify criminal liars with advanced interrogation strategies (e.g., tactical use of evidence procedures) because these interview techniques maximize deceivers' cognitive load (Dando et al., 2015). In addition, Chou and Lee (2020) tried to learn from the behaviors of both interlocutors for identifying perceived deceptions, but their learning targets are the perceptions of the interrogators, not those of the deceivers. Therefore, to the best of our knowledge, we are the first to use the interrogators' behaviors for detecting deceptions automatically. The perceptions of interrogators for detecting deceptions Levitan et al. (2018b) studied the perception (judgment) of deception by identifying characteristics of statements that are perceived as truths or lies by interrogators, but they did not use the perceptions for detecting deceptions. Kleinberg and Verschuere (2021) used LIWC variables and POS frequencies as input features to train random forest classifiers, respectively, and then asked subjects to mark scores ranging from 0 (certainly truthful) to 100 (certainly deceptive) on deceptive or truthful text data. Finally, they presented the output probabilities of the two trained classifiers for each sample so that the subjects could revise their judgments. Their results showed that human perceptions impair automatic deception detection models. However, our work differs from Kleinberg and Verschuere (2021). The main difference is how the judgements are utilized; in our work, they are used to provide a condition for improving the prediction results. Figure 1: An illustration of Questioning-Answering (QA) pair turns. We only used complete QA pair turns and excluded questioning turns for which no corresponding answering turn could be found. Note that each turn can contain multiple utterances. DDDM Database We used conversational utterances from the Daily Deceptive Dialogues corpus of Mandarin (DDDM). The entire DDDM contains about 27.2 hours of audio recordings from 96 unique speakers and 283 "question-level" conversational data samples. This corpus is particularly useful for our study, and all annotations in the DDDM come from "human" raters. Most deception databases lack recordings and perceptions (judgments) of the interrogators, while the DDDM recorded the whole interrogator-deceiver conversations and the judgements of both interlocutors, allowing us to study deception detection given the judgements of the interrogators.
With the judgements of both interlocutors, we group the data samples into four classes (shown in Table 1): (1) successful deceptions, (2) trusted truths, (3) unsuccessful deceptions, and (4) distrusted truths. Each question-level sample may span several QA pair turns (Figure 1) because the interrogator tended to ask follow-up questions for judging the deceiver's statements. The definition of deception Deception is different from lying. Deception is human behavior that aims to make receivers believe true (or false) statements that the deceiver believes to be false (or true) through consciously planned acts, such as sharing a mix of truthful and deceptive experiences to change the perceptions of the interrogators when being asked questions. However, lying is just saying that something is true (or false) when in fact that something is false (or true) (Mitchell, 1986; Sarkadi, 2018). Hence, it is challenging for the interrogators to detect deceptions through the behaviors of deceivers. Humans need to engage in higher-order cognitive processing to detect these consciously planned deceptions (Street et al., 2019). The deceiver can act in a way that changes the perceptions of the potential deception detector. This then shifts a heavier burden onto the interrogator's cognitive processing. Hence, the interrogator must engage in "higher-order" cognitive processing to detect these advanced lies, because they usually cannot just detect the behavior (e.g., signs of nervousness in voice) but must interpret why this individual may be nervous, including the honest reasons why (e.g., being afraid of being disbelieved). Deception detection with judgments of human Humans rarely perform better than chance at detecting deceptions, but the interrogators make their judgements according to context information in an interrogator-deceiver conversation. People may find it hard to remember all the detailed information, but their judgements may draw on context-general information based on their own experience, which results in a truth-bias. Therefore, we build the deception detection models based on the conditional perceptions of humans (human-trusted or human-distrusted). We use human judgements as criteria to define the following conditions (we also include the condition in which we have no human judgements; most conventional studies on ADD fall under this condition): (i) truthful and deceptive statement detection: detecting deceptions without the perceptions of interrogators (human judgements); (ii) trusted truthful and deceptive statement detection: detecting deceptions when the interrogator believes the deceiver; (iii) distrusted truthful and deceptive statement detection: detecting deceptions when the interrogator disbelieves the deceiver. In the proposed method, human judgements are the criterion for choosing the classifier for a given condition to detect deceptions (they are not used as features). That is, when the interrogator believes the deceiver's statements, we use the condition (ii) classifier; when the interrogator disbelieves the deceiver, we use the condition (iii) classifier. We fuse the best feature set from each modality by late fusion with three additional dense layers. There are two main goals. One is to investigate the effectiveness and robustness of the speech and language features of both interlocutors. The other is to show whether the model performance of detecting deceptions with the judgements of interrogators could be better than that of the model without them.
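A minimal sketch of the conditioning logic just described: the interrogator's judgement is not an input feature but selects which trained classifier is applied. The classifier objects below are placeholders with a scikit-learn-style predict method; this is not the actual JEADDN implementation.

# Hypothetical sketch: route a QA-pair sample to the classifier trained for the
# matching judgement condition. clf_trusted and clf_distrusted stand for any
# models trained under conditions (ii) and (iii); they are placeholders here.

def predict_deception(qa_features, interrogator_trusts, clf_trusted, clf_distrusted):
    """Return 1 if the answer is predicted deceptive, 0 if truthful.

    qa_features         -- feature vector for the complete QA pair
    interrogator_trusts -- True if the interrogator believed the deceiver's statement
    """
    model = clf_trusted if interrogator_trusts else clf_distrusted
    return int(model.predict([qa_features])[0])

When no judgement is available, a single classifier trained under condition (i) would be used instead.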
More specifically, we split four-class sample data in Table 1 into two conditions based on judgements of interrogators (human-trusted/human-distrusted). The unit of features of interrogators/deceivers incorporates all of the utterances from the complete QA pair because interrogators would like to ask questions to seek detailed information. The closest previous study is Chou and Lee (2020). They have investigated perceived deception in the condition that the deceiver is telling either truths or deceptions, but they only focus on perceived deception recognition. Our objective is to detect the deceiver's answers corresponding to each question. In contrast, the learning targets of Chou and Lee (2020) are from the interrogator's guessed answers. Therefore, our learning targets are different from them. Moreover, their work is not useful in real life since they have to know the judgements of the deceivers, and it is impractical and impossible to be applied in the real world. In this paper, we hypothesize that (i) we can get better performance if the model takes judgements of interrogators into account, and (ii) there are differences in both interlocutors' behaviors between the trusted/distrusted truthful and deceptive dialogues. In the rest of the sections, we will describe the feature extraction in detail (notice that all types of the following feature sets are normalized to each speaker using z-score normalization) and the use of a deception detection framework. Table 2 summarized 8 various feature sets, which were extracted from the acoustic and linguistic characteristics of all speakers based on questioning turns of interrogators and answering turns of deceivers within QA pairs. In this work, we use the features extracted from audio and text recordings data to build the models, and we describe each feature set one by one as below. -Utterance-duration ratio: the reciprocal ratio between the utterances length (u) and the turn duration (d), denoted as Int ud and Int du respectively. -Silence-duration ratio: the reciprocal ratio between the silence (s) duration and the turn duration, denoted as Int sd and Int ds respectively. -Silence-utterance ratio: the reciprocal ratio between the silence duration and the utterance lengths, denoted by Int su and Int us respectively. -Silence times (st): the number of times that a subject produces a pause that is more than 200ms, denoted as Int st and Dec st . • XLSR: Due to the scarcity of deception databases in Mandarin Chinese, we use the multilingual pre-trained model, XLSR-53 (Conneau et al., 2020), to extract acoustic representation. XLSR-53 is trained for acoustic speech recognition (ASR) task with more than 56,000 hours of speech data in 53 different languages including Chinese-Taiwan (Mandarin Chinese) based on wav2vec 2.0 (Baevski et al., 2020). The dimension of the feature vector is 512 per frame, and then the feature vector per frame is applied to the 15 statistics 1 to generate the final 7680dimensional feature vectors. Text Recordings • BERT: we utilize BERT-Base in the Traditional Chinese version pre-trained model (Devlin et al., 2019) to extract turn-level 768-dimensional feature vectors. BERT was trained with a large amount of plain text data publicly available on the web using unsupervised objective functions (like masked-language modeling objective (MLM)) and works at the character level. We do not have to perform word segmentation when extracting representations. 
• RoBERTa: we also use the Chinese pre-trained RoBERTa model (Cui et al., 2020) to extract turn-level representations. • NER: to the best of our knowledge, the NER feature set has never been used to train a deception detector. We are inspired by the findings of psychological studies on crime interrogation to use the NER feature set as input features for detecting deceptions. Vrij et al. (2021) suggest that interrogators need to design and manipulate questions that ask the deceivers for detailed information, or complications, because truth-tellers often report more complications than lie-tellers in each stage of the interview. A complication refers to details associated with personal experience or knowledge learned from any personal experience. In the DDDM, most recruited subjects are university students, and the three designed questions the researchers assigned each subject to ask are mainly about general activities or experiences of an average college student. For instance, scores of department border cups, professional knowledge about instruments, and the detailed process of any events held by different clubs are regarded as personal experiences. Therefore, we extracted the NER features to capture this detailed information. • LIWC: we use the LIWC 2015 toolkit to extract 82-dimensional features (excluding all punctuation-related feature dimensions and total word counts (WP)), after word segmentation pre-processing by the CKIP Tagger. Experimental Setup We conduct our experiments to show whether the judgements and the speech and language cues of interrogators are helpful for detecting deceptions. The closest deception database is the Columbia X-Cultural Deception (CXD) Corpus (Levitan et al., 2015), but we have no access to the CXD corpus. To provide baseline results, we compare all the models that have been used on the CXD corpus to show overall performance on the DDDM corpus. These baseline models include Support Vector Machines (Chou et al., 2021). The whole framework is implemented in PyTorch (Paszke et al., 2019). The evaluation metric is macro F1-score based on dyad-independent 10-fold cross-validation. We use zero-padding to ensure that each data sample has the same number of time steps whenever its length is less than the maximum (40). Several hyper-parameters of the LSTM-DNN and BLSTM-DNN models are grid-searched: the number of nodes in the LSTM and BLSTM layers is chosen from [2, 4, 8], the batch size from [16, 32], and the learning rate from [0.01, 0.005], with an adjustment mechanism that multiplies the learning rate by 1/√(1 + epoch) at each epoch. Finally, the maximum number of epochs is 10000. These hyper-parameters are chosen with early stopping criteria in all conditions to minimize cross-entropy with balanced class weights on the validation set. Table 3: Results of deception detection on the DDDM database in macro F1-score (%). The Who's Feature column indicates whose features are used: the interrogator (Int.), the deceiver (Dec.), or both interlocutors (the features of the interlocutors concatenated at the feature level). Table 3 presents a summary of the complete results under the three conditions. There are 283, 183, and 100 question-level data samples under conditions (i), (ii), and (iii), respectively. More detailed information about the composition of the DDDM is shown in Table 1. The human performance is a 54.7% macro F1-score on the DDDM corpus. The performance of the DNN (Mendels et al., 2017) is very competitive, but modeling time-series information is important in a conversational setting.
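For illustration, a minimal PyTorch sketch of a BLSTM-DNN classifier of the kind grid-searched above is given below. The layer sizes, input dimensionality, pooling strategy, and class weights are placeholders chosen within or consistent with the reported search ranges; this is not the authors' exact architecture.

# Hypothetical BLSTM-DNN sketch (placeholder sizes, not the original model).
import torch
import torch.nn as nn

class BLSTMDNN(nn.Module):
    def __init__(self, feat_dim, hidden=8, num_classes=2):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.dnn = nn.Sequential(
            nn.Linear(2 * hidden, 16), nn.ReLU(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        # x: (batch, time, feat_dim), zero-padded to a fixed number of time steps
        out, _ = self.blstm(x)
        pooled = out.mean(dim=1)   # average over time steps
        return self.dnn(pooled)

model = BLSTMDNN(feat_dim=988)  # placeholder input dimension (e.g. an Emobase-sized vector)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
sched = torch.optim.lr_scheduler.LambdaLR(
    opt, lambda epoch: 1.0 / (1.0 + epoch) ** 0.5)  # lr multiplied by 1/sqrt(1+epoch)
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 1.0]))  # replace with inverse class frequencies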
Hence, we only present the results with the BLSTM-DNN model in the conditions (ii) and (iii). Experimental Results In Table 3, the performances of the BLSTM-DNN with judgments of interrogators are consis-tently higher than the models without the judgments of interrogators, and the findings show corroborating evidence of the ALIED theory (Street, 2015;Street et al., 2019) which claimed that the perceptions of human could be potential lie detector even though the judgments of human are error-prone. We also found that the interrogators' features seem more contributing to deception detection in condition (iii). This finding demonstrates that we could consider the interrogators' features when the interrogators distrust the deceivers for building deception detection models. However, the performances of most models trained with the feature sets of the deceivers in the condition (i) and (ii) consistently surpass the ones trained with the features from the interrogators or both interlocutors. Ablation Study To investigate the effectiveness of audio, text, and turn-taking dynamics (TTD) modalities, we take the feature set according to the best performance in Table 3. We take Emobase, BERT, and CTD to represent the audio, text, and TTD modalities respectively. In the condition (i) and (ii), Emobase and BERT are from the deceivers. On the other hand, the counterparts are from the interrogators in the condition (iii). In the fusion method, we follow Chou et al. (2021) to firstly freeze the weights of all models trained with the above-mentioned feature sets and concatenate their final dense layers' outputs as the input of the additional three-layer feed-forward neural network to perform late fusion. Table 4 summarizes the results of the ablation study, and the text modality is the most effective modality. Finally, we get the promising results 87.27 % and 94.18 % and significant improvements 7.27% and 13.57% than the model without judgements of human in the condition (ii) and (iii) respectively. Analyses Having established the presence and characteristics of each speech and language cue, we were interested in exploring the differences in both of interlocutors' speech and language cues on the different judgements of the interrogators given three different scenarios: (A) human-distrusted deceptive and truthful statements, (B) human-trusted deceptive and truthful statements, and (C) successful/unsuccessful deceptive and truthful statements. We firstly performed Welch's t-test (Delacre et al., 2017) for each speaker's turn (e.g., questioning/answering turns) within QA pairs that represented a question and answer from the 3 daily questions. The QA pairs shown in Figure 1 were marked manually, and each deceivers' answer was labeled as truth or deception using the daily life questionnaire response sheet. This resulted in 2764 QA pairs. Using this data, the significant indicators after performing Welch's t-test between each feature set on the different conditions are shown in Appendix A.1 Table A.1. Then, we calculate the ratio of significant features in each feature set divided by its dimension base because every feature set has different dimensions, i.e., in the NER feature set under the scenario (A), there are 7 significant indi-cators and its dimension base is 17, so the ratio is calculated by 7 divided by 17. 
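The per-dimension significance testing and the ratio computation just described can be sketched as follows, using SciPy's Welch's t-test (equal_var=False). The feature matrices below are random placeholders, not DDDM data.

# Hypothetical sketch: Welch's t-test per feature dimension and the ratio of
# significant dimensions within a feature set (e.g., NER: 7 of 17 significant).
import numpy as np
from scipy import stats

def significant_ratio(group_a, group_b, alpha=0.05):
    """group_a, group_b: (n_samples, n_features) arrays for the two conditions."""
    n_features = group_a.shape[1]
    n_sig = 0
    for j in range(n_features):
        _, p = stats.ttest_ind(group_a[:, j], group_b[:, j], equal_var=False)  # Welch
        if p < alpha:
            n_sig += 1
    return n_sig / n_features

rng = np.random.default_rng(0)
truthful = rng.normal(0.0, 1.0, size=(120, 17))   # placeholder feature matrices
deceptive = rng.normal(0.2, 1.0, size=(90, 17))
print(f"ratio of significant NER-like features: {significant_ratio(truthful, deceptive):.2f}")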
Additionally, although XLSR, NER, POS, BERT, and RoBERTa are all extracted with pre-trained models that are not error-free, and LIWC is computed from word counts after word segmentation by the CKIP Tagger, all of these feature sets contain significant indicators with p-values smaller than 0.05. For example, BERT and RoBERTa features from the deceivers have a high proportion of significant indicators. However, since the meanings of the XLSR, BERT, and RoBERTa representations are difficult to interpret intuitively, we focus on the other feature sets to examine the following research questions. Is there a difference in both interlocutors' behaviors between distrusted truths and deceptions (Scenario A)? According to the experimental results in Table 3, the features of interrogators are significant indicators for detecting deceptions. After performing Welch's t-test on each feature set between the distrusted truthful and deceptive questioning/answering responses (there are 898 QA pairs in scenario A), we found that the NER, POS, and LIWC feature sets have a higher ratio of statistically significant indicators. Moreover, when we inspect the corresponding data in the DDDM, we observe that the interrogators tend to ask more complex questions to elicit detailed information about the statements of deceivers. That is, the interrogators check numerical information about game scores, presentation frequency, or the length of music concerts (PERCENT, QUANTITY, Neqb, and DM); items such as musical instruments or events such as concert presentations and ball games (EVENT, PRODUCT, and WORK OF ART); and places/locations (e.g., elementary schools and universities) (Nc). This result is very interesting because psychological studies have also shown that how interrogators question deceivers about details affects the success in catching liars. Besides, there are some significant indicators in LIWC, such as words describing movements in a sport game (death: "殺"球 (殺球 means kill and spike)) and words used to ask the deceivers to provide more detailed information (focusfuture: "然後"你之後還有繼續打球/彈樂器嗎? ("then", did you keep playing ball/the musical instrument afterward?)). Is there a difference in both interlocutors' behaviors between trusted truths and deceptions (Scenario B)? In scenario B, the results of Welch's t-test reveal that NER has the highest ratio of significant indicators. When we go back to read the data in the DDDM (Appendix A.1), we also find that intensity-related features are among the significant indicators. That is, the interrogator tended to judge high-intensity utterances as truths because louder utterances might be perceived as more confident, even though these utterances could in fact be deceptive. Additionally, the significance test shows that some CTD features of interrogators are important indicators of whether the deceiver is telling the truth when the interrogator trusted the deceivers. For example, in Appendix A.2 Table A.3, we can see that the interrogator spends more time coming up with more complex questions to probe the deceiver; the interrogator eventually believes the deceiver's statements, but the proposed method can still detect the deceptions from the interrogator's temporal TTD behaviors. This finding is consistent with a previous study.
In this analysis, we present additional evidence indicating that humans are poor at detecting deceptions: there are very few indicators that overlap across all feature sets in this condition in Appendix A.1 Table A.1 (the rightmost column). However, the results repeatedly show that the way the interrogators ask questions about detailed information (MONEY, PRODUCT, and DM) and the meaningful information in the deceivers' answering statements (A, one of the POS features, denotes words that describe nouns, such as female, big, and small, to name a few) remain useful indicators. Hence, the more detailed information we have, the higher the chance of detecting deceptions. Conclusion and Future Work This paper investigates whether the judgements and the speech and language cues of interrogators in conversation are useful and helpful for detecting deceptions. We analyzed a full suite of acoustic-prosodic features, linguistic cues, and conversational temporal dynamics under different conditions. Finally, with the late fusion of audio, text, and turn-taking dynamics (TTD) modality features, JEADDN obtains promising results of 87.27% and 94.18% accuracy under the conditions that the interrogators trust and distrust the deceivers, an improvement of 7.27% and 13.57%, respectively, over the model that does not consider the interlocutor's judgements. While there is some research studying perceived deception detection, this is one of the first studies to explicitly model acoustic-prosodic characteristics, linguistic cues, and conversational temporal dynamics using judgments of interrogators in conversations for detecting deceptions. Furthermore, we provide analyses of the significance of different feature sets in three different scenarios and show additional evidence that humans are poor at detecting deceptions. In particular, the content of the questions the interrogators ask is an indicator for telling deceptions from truths when the interrogators distrust the deceivers. Verigin et al. (2019) also reveal that truthful and deceptive information interacts to influence detail richness, which provides insight into liars' strategic manipulation of information when statements contain a mixture of truths and lies. In immediate future work, we aim to extend our multimodal fusion framework to combine semantic information to enhance the model's robustness and predictive power across multiple QA pairs. That is, we observe that some interrogators finally trusted the deceivers after many follow-up questions even though the statements of the deceivers were deceptive. Kontogianni et al. (2020) also pointed out that follow-up open-ended questions prompt additional reporting; however, practitioners should be careful to corroborate the accuracy of newly reported details. Table A.1: The Welch's t-test results in three different scenarios: (A) human-distrusted deceptive and truthful statements, (B) human-trusted deceptive and truthful statements, and (C) successful/unsuccessful deceptive and truthful statements ("*" indicates that the significance threshold, the p-value, is smaller than 0.01; "**" indicates smaller than 0.001). Table A.2: The Welch's t-test results on Emobase in three different scenarios ("*" indicates that the significance threshold, the p-value, is smaller than 0.01; "**" indicates smaller than 0.001).
A COMPARISON OF PRE-PROCESSING APPROACHES FOR REMOTELY SENSED TIME SERIES CLASSIFICATION BASED ON FUNCTIONAL ANALYSIS

Satellite remote sensing has gained a key role in mapping vegetation distribution. Given the availability of multi-temporal satellite data, seasonal variations in vegetation dynamics can be exploited through time series analysis for vegetation distribution mapping. These data have very high internal variability and are affected by artifacts. Therefore, a pre-processing phase must be performed to properly detect outliers, smooth the data, and interpolate them correctly. In this work, we compare four pre-processing approaches for functional analysis on 4 years of remotely sensed images, resulting in four time series datasets. The methodologies presented result from the combination of two outlier detection methods, namely the tsclean and boxplot functions in R, and two discrete data smoothing approaches (a Generalized Additive Model "GAM" on daily and on aggregated data). The approaches proposed are: tsclean - GAM on aggregated data (M01), boxplot - GAM on aggregated data (M02), tsclean - GAM on daily data (M03), boxplot - GAM on daily data (M04). Our results show that the approach involving the tsclean function and GAM applied to daily data (M03) improves the logic of the procedure and leads to better model performance in terms of Overall Accuracy (OA), which is consistently among the highest when compared with the accuracies obtained from the other three approaches.

INTRODUCTION

In the last four decades, satellite remote sensing has gained a key role in vegetation and habitat distribution mapping (Zlinszky et al., 2015). Habitat distribution can be properly represented by vegetation maps, which, when repeated over time, are useful to assess habitat preservation (Dash and Ogutu, 2016, Viciani et al., 2016). Given the availability of multi-temporal satellite data, seasonal and inter-annual variations in vegetation dynamics (phenology) can be quantified through time series analysis and used for vegetation distribution mapping (Caparros-Santiago et al., 2021). These data have very high internal variability, because they are derived from satellite images acquired at different time periods, using different sensors, capturing constantly changing dynamics, under ever-changing weather conditions and with solar radiance inclinations that create shadows and reflections caught by the sensors (Meraner et al., 2020). Furthermore, if we add the technological issues that may occur when dealing with artificial instrumentation (e.g., sensor malfunctions), we can immediately realize that the data must be fixed before being used for different purposes. The most challenging abnormal spectral reflectance values are often caused by adverse weather conditions, undetected sub-pixel cloud cover, atmospheric dust and gaseous absorbers, but also seasonal lighting variations, soil-induced disturbances, shading, or sensor glitches (Alvera-Azcárate et al., 2012). All of them are generally referred to as artifacts (Du et al., 2003) and must be removed from the time series. These artifacts can alter the temporal pattern of reflectance values, causing a reduction in the accuracy of the phenological estimations (Clark et al., 2002). Therefore, in order to process the data correctly, it is necessary to implement a data pre-processing phase.
Given the very large variability of the data and, consequently, the occurrence of outliers, it is essential to use diversified methods for detecting those anomalies in order to obtain a representative output for the vegetation to be classified (Jackson and Chen, 2004). Knowledge of what is to be classified is fundamental, because outlier detection also relies on a deep awareness of the expected value of a given class. Without such background, which only the user's experience can provide, information that is essential to the classifier might be lost. Hence, smoothing of the discrete data becomes a key phase in achieving accurate classification models that describe detailed vegetation dynamics and reduce outliers as much as possible (Zeng et al., 2020). Considering these issues, it is necessary to use appropriate approaches (i.e., data smoothing and outlier detection methods) that mitigate the noise affecting time series from satellite imagery (Santos et al., 2021). The greater the care and attention in this step, the higher the quality of the data that will be processed and, hence, of the output that will be obtained. In statistics, smoothing consists in applying a filter function aimed at highlighting relevant patterns by mitigating noise generated by environmental, computational, or physiological artifacts (Atkinson et al., 2012). The presence of time gaps, caused by data missing due to both the satellite temporal resolution and the artifacts, also makes a data interpolation phase necessary in order to obtain a continuous function throughout the year. The interpolation process averages the data in a series with contiguous values to describe a pattern, while treating these values cyclically given their annual nature. The interpolation process can either precede or be followed by a data aggregation phase. The data, in order to provide valuable insights for the resulting classification, can be processed through functional data analysis (FDA) (Hurley et al., 2014). The latter requires representing discrete data as functions, since FDA analyzes the data as a single function rather than as a set of point values spread over time (Pesaresi et al., 2022). Therefore, to work with functional analysis, a proper outlier detection step, data smoothing phase, and data interpolation cannot be ignored.

With this paper, we aim to identify a suitable strategy to adopt when processing time series, through a comparison of the classification accuracies obtained. Thus, we compared the outputs of four data pre-processing approaches. The approaches presented result from the combination of two outlier detection options (the tsclean function in the forecast package (Hyndman and Khandakar, 2008) and the boxplot function in the graphics package (Murrell, 2005)) and two discrete data smoothing approaches (a Generalized Additive Model "GAM" (Wood, 2006) on daily and on aggregated data). The approaches proposed are: tsclean-GAM on aggregated data (M01), boxplot-GAM on aggregated data (M02), tsclean-GAM on daily data (M03), boxplot-GAM on daily data (M04).

The main contribution of this work is to compare four different satellite image time series pre-processing approaches, combining two outlier detection methodologies (tsclean and boxplot) with a data smoothing technique (GAM) applied to differently aggregated datasets, testing them in two study areas with a combination of predictors and vegetation indices.
Frasassi Gorge

The study area overlaps with the Special Area of Conservation "Gola di Frasassi IT5320003", covering an area of 728 ha within the Rossa and Frasassi Gorge Regional Natural Park, between the municipalities of Genga and Fabriano in the province of Ancona. The Frasassi gorge lies inside the anticline of Mount Valmontagnana - Mount Frasassi, in the pre-Apennine mountain belt. The peak elevation of the site is 931.2 m a.s.l. at "Monte di Valmontagnana", while the minimum elevation measured is 200 m a.s.l. at the edge of the Esino river's left bank.

Conero Mount

This study area is part of the territory between the Special Areas of Conservation "Monte Conero IT5320007" and "Portonovo e falesia calcarea a mare IT5320007". It covers an area of 650 hectares within the Conero Regional Natural Park in the province of Ancona, between the municipalities of Sirolo and Ancona. Conero mount is a limestone promontory of 582 m a.s.l.; being the only stretch of limestone coastline between Trieste and Gargano, it interrupts the continuous low and sandy shoreline typical of the Adriatic coast.

The images were acquired by the Sentinel-2A and Sentinel-2B satellite platforms, both managed by the European Space Agency (ESA) as part of the European Copernicus plan (Pesaresi et al., 2022). For each study area, 93 L2A images covering the period between April 2017 and March 2020 were collected using the sen2r package. The satellite imagery acquisition frequency permits aggregation according to defined temporal intervals: by year, month (semester, trimester, bimester), week (weeks, biweeks), or days of the year (DoY). The Sentinel-2 acquisition frequency allows for a weekly aggregation time, yielding time series composed of 52 values. The images were coregistered, cropped with the shapefiles matching the limits of the study areas, and masked by cloud cover. Seven vegetation indices were computed for each of these images, and each one was used as a prediction variable. Additionally, we used FPCA to group the information related to the curves' temporal variability into a set of main components. The coefficients quantifying the weight of each component and maintaining the chronological order of the functional variations are referred to as "scores" (Pesaresi et al., 2020). We used them, in full and in a reduced fraction, as the second and third prediction variables in this classification. Moreover, we removed the last component resulting from the FPCA, reconstructed the time series, and used them as the fourth prediction variable. The seven vegetation indices are: Normalized Difference Vegetation Index (NDVI), Modified Chlorophyll Absorption in Reflectance Index (MCARI), Green Normalized Difference Vegetation Index (GNDVI), Normalized Difference Red/Green Redness Index (RI), Normalized Difference Red-Edge (NDRE), Normalized Difference Moisture Index (NDMI), and Modified Normalized Difference Water Index (MNDWI). The proposed classification algorithm is Random Forest. To ease the readability of the manuscript, the general workflow is reported in Figure 3.
Outliers Removal

Temporal information is obtained by extracting the capture date and converting it into the corresponding day of the year (DoY). Using this information, the data are sorted chronologically, obtaining the dataset that will be subjected to the outlier detection process. Detection methods for anomalous point values are widely cited in the literature (Willsky et al., 1980, Hu et al., 2021, Venkatasubramanian et al., 2003). Among them, the most widely used are the so-called model-based techniques (Mehrang et al., 2015, Basu and Meckesheimer, 2007), followed by the density-based (Tang and He, 2017, Tian et al., 2016, Angiulli and Fassetti, 2007) and the histogramming ones (Blázquez-García et al., 2021, Muthukrishnan et al., 2004). In this paper two different methods of point outlier detection are compared: the tsclean function of the forecast package (Hyndman et al., 2020) and the boxplot function of the graphics package (Murrell and Murrell, 2020). Both belong to the model-based techniques, meaning that a point x_t at time t can be declared an outlier if its distance from its expected value x̂_t is greater than a predefined threshold τ, i.e. |x_t − x̂_t| > τ (Formula 1).

The tsclean function is used to process univariate time series, and the detection of outliers differs for seasonal and non-seasonal time series (Kandanaarachchi et al., 2020). In this study the interest is in the former type, where a significant seasonal component is identified in the variation of the phenomenon. Specifically, the function uses a time series decomposition method: Seasonal and Trend decomposition using LOESS (STL) (Cleveland et al., 1990). The STL method employs locally adapted regression models to deconstruct a time series into trend, seasonal, and residual components. The STL algorithm smooths the time series using the locally estimated scatterplot smoothing (LOESS) method in two cycles: an inner and an outer cycle. During the inner one, the seasonal and trend components are calculated. The residual is then found by subtracting these from the time series (Cleveland, 1990). For each time series, outliers are identified and replaced by interpolation.

In the context of outlier detection methods, the boxplot belongs to the techniques using basic statistics. Pixels considered outliers are those whose values lie more than X times the interquartile range from the first and third quartiles, and they are represented as isolated points in the plot. In this study, the coefficient X is equal to 1.5, the default value in the boxplot function of the R graphics package. Although groups of pixels can be processed simultaneously, the function is configured by considering the single pixel as a univariate time series (Bernard et al., 2012), making the outputs comparable to those obtained with the tsclean function. As opposed to the latter, the boxplot function permits analyzing the data over a chosen time span. Therefore, it is necessary to set a vector containing the time span that best accounts, in an independent way, for the seasonal variability of the data (a value detected as anomalous in winter is not necessarily anomalous in summer). In this study, a monthly time span has been chosen, based on the density of observations in the DoYs and their seasonal distribution.
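The two detection rules above can be summarized with a short sketch. The paper itself uses the R functions tsclean (forecast package) and boxplot (graphics package); the Python code below only mirrors the underlying logic (the threshold rule of Formula 1 and the per-month interquartile-range rule), with synthetic data and variable names chosen purely for illustration.

```python
# Illustrative sketch of the two point-outlier rules discussed above.
import numpy as np

def model_based_outliers(x: np.ndarray, expected: np.ndarray, tau: float) -> np.ndarray:
    """Formula 1: flag x_t as an outlier when |x_t - x_hat_t| > tau,
    where `expected` holds fitted values from a decomposition such as STL."""
    return np.abs(x - expected) > tau

def boxplot_outliers(x: np.ndarray, months: np.ndarray, coef: float = 1.5) -> np.ndarray:
    """Boxplot rule applied per month: values beyond coef * IQR from Q1/Q3 are outliers."""
    flags = np.zeros_like(x, dtype=bool)
    for m in np.unique(months):
        idx = months == m
        q1, q3 = np.percentile(x[idx], [25, 75])
        iqr = q3 - q1
        flags[idx] = (x[idx] < q1 - coef * iqr) | (x[idx] > q3 + coef * iqr)
    return flags

# Toy usage on a single pixel's NDVI series sorted by DoY (values are synthetic).
doy = np.arange(1, 366, 5)
months = (doy - 1) // 30 + 1
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * doy / 365) + np.random.default_rng(1).normal(0, 0.02, doy.size)
ndvi[10] = -0.1                          # injected artifact (e.g. undetected cloud)
print(np.where(boxplot_outliers(ndvi, months))[0])
```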
Data Smoothing The proposed smoothing algorithm is a Generalized Additive Model (GAM) based on Cyclic Cubic Spline Regression.GAMs are extensions of linear models in which the predictor is the sum of regular functions plus a conventional parametric component (Wood and Wood, 2015). GAMs allow the configuration of both complex non-linear relationships and inferential statistics by understanding and explaining the inherent structure of the discrete data dispersion model (Azzalini and Scarpa, 2012).The Cyclic Cubic Spline Regression summarises the multi year variability of vegetation surfaces from satellite imagery into one artificial/ideal year.The Cyclic Cubic Spline Regression is a function composed of several connected polynomials designed to interpolate a set of points into defined intervals, called knots.The number of knots in which the dataset is divided is set through a process of cross validation.Separating the dataset into subsets, localized smoothing is achieved and overfitting (given by the global influence of each point on the fit) is avoided (Faraway, 1992).The Cyclic Cubic Spline Regression provides the connection between the first and last knot to consider the temporal nature of the analyzed curves which describe a continuous trend between December and January. Interpolation process is performed by the gam function, of the mgcv package (Wood and Wood, 2015).The "GAM on Aggr" approach involves the GAM application on data previously aggregated by weeks, while the "GAM on DoY" one applies the GAM directly on cleaned daily data, not yet aggregated.In the first case, an early data grouping is carried out, in order to compensate the double collection of images in the same days but in different years, and subsequently aggregated by weeks.In the second case, instead, the compensation of double collected values is carried out directly by GAM, to obtain time-series fitted with 365 values.The latter are then aggregated by weeks, making the two approaches comparable. Predictors Once that the outliers removal and the data smoothing processes have been performed with the different approaches, each pixel value (recorded throughout the 4 years) has been fit in a yearly time series.The latter presents slight differences according to the pre-processing approach used, as can be noticed in the figure 4. 
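As a rough stand-in for the "GAM on DoY" smoothing step, the sketch below fits a low-order harmonic (Fourier) regression over the day of year, which, like the cyclic cubic spline used by the authors (mgcv::gam in R), yields a smooth curve that joins up between DoY 365 and DoY 1. The harmonic basis is a simplification and not the paper's actual model; data and names are synthetic.

```python
# Minimal cyclic-smoothing stand-in for the GAM on daily data (assumption: a truncated
# Fourier series is an acceptable proxy for the cyclic cubic spline used in the paper).
import numpy as np

def fit_cyclic_curve(doy: np.ndarray, values: np.ndarray, n_harmonics: int = 3):
    """Least-squares fit of a truncated Fourier series over the day of year (1-365)."""
    def design(d):
        d = np.asarray(d, dtype=float)
        cols = [np.ones_like(d)]
        for k in range(1, n_harmonics + 1):
            cols.append(np.sin(2 * np.pi * k * d / 365.0))
            cols.append(np.cos(2 * np.pi * k * d / 365.0))
        return np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(design(doy), values, rcond=None)
    return lambda d: design(d) @ beta

# Toy usage: fit on cleaned daily observations, then aggregate the curve by week.
doy_clean = np.arange(1, 366, 4)                        # synthetic cleaned DoYs
vals_clean = 0.5 + 0.3 * np.sin(2 * np.pi * (doy_clean - 120) / 365)
curve = fit_cyclic_curve(doy_clean, vals_clean)
weekly = curve(np.arange(1, 365)).reshape(52, 7).mean(axis=1)   # 52 weekly values
```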
The aggregated time series resulting from approaches M01, M02, M03, and M04 are the first predictors used in this study for classification. These are subsequently subjected to FPCA, a statistical method for variance analysis of functional data (Ullah and Finch, 2013). It is well suited to time series, where individual observations are not independent but rather constrained by their chronological order. It provides an estimation of dataset complexity by determining the minimum number of components needed to represent the content of the dataset without loss of information. The "scores" are the parameters quantifying the similarities among the time series: they provide information on the position, shape, and variation of each curve observed in the space (Shang, 2014). The main FPCA outputs, besides the scores, are the eigenvalues and the eigenfunctions. The eigenvalues represent the variation explained by each component for each time series; their sum equals the overall data variance. The components explaining up to 99% of the total variance are retained to extract a reduced number of scores. Eigenfunctions, on the other hand, estimate the scores' values by describing the largest functional variances for each component (Hurley et al., 2014). Through these outputs, the original data X_i(t) can be rebuilt as the sum of the products of each score A_ik and its eigenfunction ϕ_k(t), plus the mean value µ(t) (Wang et al., 2015) (Formula 2):

X_i(t) = µ(t) + Σ_k A_ik ϕ_k(t)

Through this process the four predictors used in this study were obtained: time series (TS), scores (ST), reduced scores (SR), and rebuilt time series (TSR). The classification is performed by Random Forest. The overall accuracy (OA) is defined as the proportion of pixels correctly classified by the algorithm out of the total number of pixels used to train the model. The number of decision trees set in the algorithm is 1500. The Random Forest training is performed through repeated cross validation, set by the authors with 10 folds (k-fold) and 5 repetitions. All functions used in this section belong to the caret (Classification and Regression Training) package, which is specific to the construction of classification models (Kuhn, 2015).

RESULTS

In this chapter, the combinations tested for defining our best pre-processing approach for remotely sensed time series are shown through boxplot charts. The 4 pre-processing approaches, combined with 4 predictors from 7 different vegetation indices in 2 study areas, resulted in a total of 224 models. The charts summarize the variability in the accuracies generated by the models for the 4 approaches with respect to the different variables. Therefore, in each plot, each method is compared individually to either a predictor, an index, or a study area, and all accuracies obtained from the models of each method for the predictor/index/study area being analyzed are grouped together. In particular, each approach/predictor comparison chart groups 56 (224/4) accuracies, each approach/index comparison chart groups 32 (224/7) accuracies, and each approach/area comparison chart contains 112 (224/2) accuracies.
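The score extraction and reconstruction described in the Predictors subsection above (Formula 2) can be sketched with a discrete PCA/SVD on the weekly time-series matrix, which serves here as a simple stand-in for a full functional PCA. Array shapes and data are illustrative assumptions, not the authors' pipeline.

```python
# Sketch of Formula 2: X_i(t) ~ mu(t) + sum_k A_ik * phi_k(t), via SVD of the
# pixel-by-week matrix (a discrete approximation of FPCA).
import numpy as np

def fpca_scores(ts_matrix: np.ndarray, var_explained: float = 0.99):
    """ts_matrix: pixels x 52 weekly values. Returns the mean curve, eigenfunctions,
    per-pixel scores, and the number of components retained."""
    mu = ts_matrix.mean(axis=0)
    centered = ts_matrix - mu
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(var), var_explained)) + 1
    eigenfunctions = vt[:k]                       # phi_k, one row per component
    scores = centered @ eigenfunctions.T          # A_ik
    return mu, eigenfunctions, scores, k

def reconstruct(mu, eigenfunctions, scores):
    """Rebuild the time series from scores and eigenfunctions (Formula 2)."""
    return mu + scores @ eigenfunctions

# Toy usage: 100 pixels x 52 weeks of synthetic values.
rng = np.random.default_rng(2)
ts = rng.normal(0.4, 0.1, size=(100, 52))
mu, phi, scores, k = fpca_scores(ts)
ts_rebuilt = reconstruct(mu, phi, scores)         # analogous to the TSR predictor
```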
Approaches and Predictors

The results obtained from the predictors' classification reveal a substantial difference between the accuracies achieved. Models obtained through the TS, ST, and SR predictors achieved higher accuracies than those based on TSR (Figure 5). For each method and index, in both study areas, the TSR thus shows an insufficient model accuracy and cannot be compared to the other predictor models (Table 1). Therefore, the results obtained from the TSR are not included in the subsequent charts showing the accuracies achieved by the different approaches (M01, M02, M03, M04) when compared to the vegetation indices and the study areas. The combination of tsclean and GAM on DoY (M03) proves to perform better than the other approaches for TS, ST, and SR.

Approaches and Vegetation Indices

Analyzing the 32 results obtained for each vegetation index (Table 2), it can be noted that M03 is the best performing method for 5 of the 7 indices used (MCARI, NDVI, NDMI, NDRE, RI) (Figure 6). The M04 method is the best performing for GNDVI and MNDWI.

Approaches and Study Areas

From the 112 results obtained for each study area, the chart (Figure 7) reveals that the M03 method continues to be the most accurate (Table 3). Each of the 224 combinations of approaches has generated a map of the analyzed area, along with its corresponding confusion matrix and OA. Selecting the models that achieved the highest OA for the two study areas, Figure 8 shows the classification obtained in the Frasassi gorge using the M03 approach with SCR and the vegetation index RI, and Figure 9 shows the classification of Conero mount obtained using the M03 approach, SCR, and the vegetation index NDRE. The maps were produced from the best results obtained through the combination of the different approaches. However, a few classes were misclassified with labels that were not initially represented in our classes.

DISCUSSION

According to the results obtained in this work, the method resulting in higher model performance is M03, which involves the tsclean function and the application of GAM to the daily data.
The tsclean function allows proper cleaning of the dataset from outliers with limited computation time and proves to be an important tool when processing time series of vegetation indices, as previously shown by (Pesaresi et al., 2020, Pesaresi et al., 2022). Nevertheless, the boxplot function has the merit of allowing for outlier removal only (Kerandel et al., 2020) and, moreover, although its computation time is slightly longer, it is easily handled in terms of coding, and outliers can be detected within the time series over different periods of the year (i.e. month, bimester, trimester). Within more heterogeneous environments such as the Frasassi gorge, the boxplot function performs better than the tsclean function for certain indices. In the literature, different techniques have been used to carry out the smoothing and correction phase of satellite time series data, such as curve fitting (Pickers and Manning, 2015), Fourier decomposition (Mingwei et al., 2008), the asymmetric Gaussian function (Jonsson and Eklundh, 2002), double logistic functions (Atkinson et al., 2012, Eklundh and Jönsson, 2015), the Whittaker smoother (Shao et al., 2016, Kandasamy et al., 2013), the Savitzky-Golay filter (Huang et al., 2021), the high-order spline with roughness damping (Hermance et al., 2007), the spatio-temporal tensor completion method (Chu et al., 2021), and other spatio-temporal combination methods such as the adaptive spatio-temporal weighted method (Li et al., 2017) and the hybrid Generalised Additive Model (GAM)-geostatistical space-time model (Poggio et al., 2012), which are even useful to fill temporal gaps. GAM utilization for regression model fitting is widely demonstrated in the literature (Hua et al., 2021). Applying the GAM to the daily data permits performing the data aggregation during the subsequent steps of the work.

Being able to manipulate the dataset starting from the DoY is an advantage in the procedure's logic and an efficient way to proceed. Computational times are longer, since the range of values within which the data can be interpolated (from 1 to 365) is greater. However, only a single subsequent aggregation step is necessary, which compensates for this cost and allows the aggregation time to be varied easily (i.e. weekly or biweekly). As a result, the process proves to be both flexible and adaptable as required. Between the two study areas, the differences in the model accuracy values produced are evident and substantial, and coherent with those reported by (Pesaresi et al., 2020, Pesaresi et al., 2022). These results are related to the different complexity levels of the vegetation phytocoenoses and of the land cover in general. In the Conero mount study area 4 classes are identified, while in the Frasassi gorge study area 8 classes are defined, situated in a much more complex geomorphological and topographical context. This emphasizes the need to test the described methodologies in different and diverse contexts, so as to further assess their reliability. From this standpoint, this work has succeeded in providing a suitable comparative analysis among 4 approaches for time series pre-processing. The processes involved can be replicated in other areas in order to enhance and validate the mapping accuracy.
CONCLUSION

In this paper, 4 time series pre-processing approaches were compared, combining 2 outlier detection methods (the tsclean function of the forecast package and the boxplot function of the graphics package) and 2 ways of applying the interpolation algorithm (GAM on aggregated data and GAM on daily data). This research therefore focused on the pre-processing of the data that will be subjected to FPCA, in order to identify which of the proposed methodologies performs best in terms of outputs and computational time, and the selected method proves to be fast and efficient. From the results obtained, the approach involving the tsclean function and GAM applied to daily data (M03) improves the logic of the procedure and leads to better model performance in terms of Overall Accuracy. Although the algorithms implemented with the GAM have demonstrated the ability to adequately interpolate aggregated and daily data, the application of other techniques is also desirable to improve the construction of the time series. Other solutions for the outlier detection phase will be subject to further analysis, since there are several methodologies which can be applied to clean the time series. Given the results obtained and the identification of this methodological approach for mapping, it will be possible to periodically repeat these tests to produce up-to-date maps and, thus, to comply with EU regulations.

Figure 3. Workflow for the overall accuracy evaluation, combining the dataset to be subjected to one of the 4 pre-processing approaches.
Figure 4. NDVI values for a randomly chosen pixel, which fit differently in the function under the different approaches; the black dots are the single pixel values recorded in the 4 years of satellite imagery used in this paper.
Figure 8. Best model classification obtained in Frasassi gorge, using the M03 approach with SCR and RI as vegetation index.
Figure 9. Best model classification obtained in Conero mount, using the M03 approach with SCR and NDRE as vegetation index.
Table 1. Median accuracy values from models comparing approaches and predictors.
Table 2. Median accuracy values obtained from models comparing approaches and vegetation indices.
Table 3. Median accuracy values obtained from models comparing approaches and study areas.
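For completeness, the classification setup described in the Predictors subsection (Random Forest with 1500 trees, evaluated by 10-fold cross validation repeated 5 times) can be sketched as follows. The paper uses the caret and Random Forest machinery in R; this Python version only mirrors the evaluation logic, and the data shapes and class counts are illustrative assumptions.

```python
# Rough Python analogue of the repeated cross-validation evaluation described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def evaluate_predictor(X: np.ndarray, y: np.ndarray) -> float:
    """Return the mean overall accuracy (OA) over 10-fold CV repeated 5 times."""
    model = RandomForestClassifier(n_estimators=1500, random_state=0)  # 1500 trees, as in the paper
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
    return cross_val_score(model, X, y, cv=cv, scoring="accuracy").mean()

# Toy usage: X could be the TS, ST, SR, or TSR predictor matrix for one index and area.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 52))                    # e.g. 300 labelled pixels x 52 weekly values
y = rng.integers(0, 4, size=300)                  # e.g. 4 vegetation classes (Conero mount)
print(f"OA = {evaluate_predictor(X, y):.3f}")
```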
Barefoot Running: Between Fashion and Real Way to Prevent Joint Osteo Lesions?

Abstract

Background and objectives: Running has gone from a vital necessity for humans to a recreational sport. Various rheumatic and orthopedic pathologies have appeared, and the shoe industry has responded by creating reinforced shoes that are supposed to prevent the induced lesions. Several years later, the trend has moved from reinforcement toward minimalism, that is, the absence of reinforcement and a more natural run.

Method: We observed variations in kinetics and kinematics in young, non-professional, healthy runners during a shod run and a barefoot run, which is the form of maximum minimalism. We then correlated variations in minimalism with the running variables and the joint angles.

Results: We observed a significant difference (P < 0.01) in the cycle rate, the cycle length, the step rate, and the angle of attack between running with and without shoes. A small variation of the minimalism index is associated with an increase in knee angle (r² > 0.5). Conversely, a large variation in the minimalism index is related to a decrease in the knee angle (r² > 0.5). The minimalism index has no impact on the angulation of the ankle and hip (r² < 0.3).

Conclusion: A slow transition will bring gains in terms of decreasing stride length, which limits the load on the shin. Greater flexibility can be achieved by decreasing the flexion angle of the knee, which decreases the demand on the quadriceps muscles and the risk of knee injury, with a greater risk of injury at the tibial level.

INTRODUCTION

Humans have practiced barefoot running for millennia. For the past 30,000 years, people have covered their feet to protect them from ground surfaces that can cause direct wounds. [1] Running, which was originally a natural means of locomotion, has gradually become a recreational sport that has gained popularity in recent years. This attractiveness of running has caused the emergence of new orthopedic lesions, with an estimated incidence between 19.4% and 79%. [2] To respond to this issue, the shoe industry has developed various technologies such as increased cushioning, elevated heels, and motion control and stability technologies. [3] The rationale was that the support for running had to be identical to that of walking, that is to say, a heel foot strike. [4,5] Similarly, for a long time it was thought necessary to compensate for the different types of foot, both in anatomical terms (pes planus, normal foot, and pes cavus) and in mechanical terms (pronating, normal, or supinating foot). [3] All these measures have not succeeded in reducing the rate of lesions and have had a perverse effect on natural running biomechanics, kinematics, kinetics, and muscle activation patterns. [3] A "minimalist" shoe, which promotes short strides and a midfoot/forefoot strike, has been proposed to limit these injuries. Minimalist shoes feature a lower profile, a more flexible sole, reduced heel compensation, and a lack of motion control technology. No biomechanical evidence is needed to support the ability of "barefoot" running to interact with the natural movement of the lower limb. [1] Several studies show a lower prevalence of foot lesions, [6] an improvement in running economy, [7] and a decrease in impact forces on the joints. [8] There is, however, great disparity in the literature regarding the beneficial effect of barefoot running.
One hypothesis advanced for this disparity is the lack of uniformity in defining the minimalist shoe. To overcome this problem, Esculier et al. developed, following a consensus of 42 international experts, the concept of a minimalism index expressed in percent. The characteristics taken into account in this index are weight, flexibility, heel-to-toe drop, stack height, and motion control/stability devices: 0% is a maximalist shoe and 100% is barefoot. The same authors provided recommendations on the transition from traditional footwear to minimalist footwear, based on this index and proposing to increase the minimalism index by 10% every month. [3] All authors emphasized the lack of studies concerning changes in running biomechanics correlated with the difference in the degree of "minimalism" between the shoes previously used and those tested. [8][9][10] We propose to measure the impact of an acute variation in the minimalism index on running biomechanics and to discuss the potential risks of orthopedic injury caused by these variations (Table 1).

Subjects

Participants were aged 18 years or more. All subjects had to be regular runners, with a maximum distance of 10 km per session and at least 1 running session per week. Runners with a history of pathology of the lower limb or spine (traumatic, neurological, and/or rheumatic), with an injury or surgery of the lower limb within 6 months of the start of the experiment, or with medical contraindications to the practice of sport or running were excluded.

Experimental protocol

The subjects performed their runs on a treadmill (Powerjog®), with anatomical joint landmarks marked on the participants' skin and undergarments. A slope of 1%, corresponding to flat ground, was chosen arbitrarily. Each participant completed three 5-min runs separated by 3 min of passive recovery.

• Run 1: Warm-up: The participant runs the first 5-min run at his or her own pace with his or her own shoes to determine the speed that corresponds to his or her usual jogging pace. The speed is determined during this first run using the Borg scale, or "effort perception measure"; the Borg score should be between a low effort (2/10) and a moderate effort (3/10) and could not exceed a difficult effort (5/10). The speed was chosen to correspond to the usual training speed of the runner.
• Run 2: 5-min run with the runner's own shoes at the speed determined during run 1.
• Run 3: 5-min run without shoes at the speed determined during run 1.

Variables measured

- Anthropometric data (sex, age, height, and weight) were collected before the session, together with the weekly distance run, the minimalism index of the shoe (in percent), and the weight of the shoe.
- Spatiotemporal variables: running cycle (the interval of time and space between two successive identical positions; one cycle corresponds to two steps, or successive strides; cm and sec), step length (cm), stride length (cm), step rate (steps/min), and ground contact time (sec).
- Foot-ground interaction: initial contact at the heel, midfoot, or forefoot.
- Kinematic variables: foot separation in the frontal plane (cm) and flexion-extension angles of the hip, knee, and ankle (degrees). Everything is measured during the stance phase only.
Biomechanical data were collected using GoPro® Hero 5 (GoPro Inc., USA) video recordings in the sagittal plane.

Statistical analysis

The results were collected and prepared manually in an Excel spreadsheet, and statistical analysis was carried out using Prism 6, GraphPad (GraphPad Software, La Jolla, California, USA). A Kolmogorov-Smirnov test allowed us to evaluate whether the data were parametric or non-parametric. For parametric data, we used a Student t-test to compare the two subpopulations; for non-parametric data, a Wilcoxon test was used. Participants were grouped according to the variation in minimalism index between their shoes and barefoot, and each group was stratified into quintiles. Linear and non-linear regression models were estimated to look at the trend in the kinetic and kinematic variables. Statistical significance was defined as P < 0.05.

RESULTS

We included 26 participants aged 21.7 ± 2.7 years. They were all students of a physiotherapy school in Brussels and regularly practiced running, with an average running distance of 22.4 ± 18.7 km/wk. Their anthropometric characteristics were fairly homogeneous, with a small standard deviation for age, weight, and height (Table 2). However, there was great variability in the minimalism index of their shoes (minimum, 4%; maximum, 62%; median, 20%). When we compare the measurements made during the run with shoes with those made during the run without shoes, there is a significant difference (P < 0.01) in the cycle rate, the cycle length, the step rate, and the angle of attack. During the run with shoes, we also observe a more frequent heel strike (22/26), which decreases by half (11/26) in favor of a midfoot or forefoot strike during the run without shoes. Runners who initially had a midfoot or forefoot strike kept the same strike (Table 3). The runners who had a forefoot strike with their shoes were the most trained, that is, those with the highest weekly training distance (50, 70, and 80 km/wk). A small decrease in the minimalism index is associated with a greater decrease in the length (Figure 1a) and the duration of the stride (Figure 1b) and a longer contact time (Figure 1c) than in a runner who decreases his or her minimalism index substantially (r² > 0.5). The step rate (Figure 1d) and the angle of attack (Figure 1e) are not correlated with the variation in minimalism index (r² < 0.5). Conversely, a large variation in the minimalism index results in a smaller decrease in cycle length, a greater decrease in cycle time, and a shorter contact time. When comparing the hip, knee, and ankle angles between running with and without shoes, there is a decrease in hip flexion. The knee shows greater flexion without reaching statistical significance (P = 0.056), and the ankle shows a decrease in dorsiflexion. A small decrease in the minimalism index is associated with an increase in knee angle (r² > 0.5; Figure 2a). Conversely, a large variation in the minimalism index is related to a decrease in the knee angle (r² > 0.5; Figure 2b). The minimalism index has no impact on the angulation of the ankle and hip (r² < 0.3; Figure 2c).

DISCUSSION

Several studies have shown that reducing training distances is a way of reducing the risk of injury; the risk of running injuries is related to the weekly running distance. [11][12][13][14][15] Increasing the frequency, duration, and intensity of running are risk factors for stress fracture.
[16] Unfortunately, endurance runners are unable or unwilling to reduce their training distance. Attention was then paid to the correction of anatomical variables using reinforced shoes. [1] However, the use of modified shoes did not produce an effect in terms of reducing the lesion rate. The prescription of "pronation control, elevated heel" shoes was not based on any evidence demonstrated in the literature, and it added further injuries. [17] When the prescription of running shoes adapted to the plantar shape is compared with standard shoes, there is no significant difference in terms of lesion rate. [18,19] Following this, a running style called the "barefoot style", characterized by short strides, light steps, and footwear with minimal protection and maximum flexibility, appeared among runners. [20] We observed a significant decrease in stride length and a significant increase in stride frequency, related to the constant speed of the experiment. Shock attenuation has been shown to be correlated with stride length (r = 0.70) more than with stride rate (r = 0.40) (Mercer). [21] A 10% reduction in stride length results in a corresponding decrease in the peak tibial contact force. However, the concomitant increase in frequency may be accompanied by an increase in metabolic cost and induce an earlier onset of fatigue. [22] Muscle fatigue increases bone strain and may be a major factor in the etiology of stress fracture. [23] Below 10% of variation, it has been shown that there is no change in energy demand related to the concomitant increase in frequency over the same distance. [24] Runners who want to reduce their risk of fracture can do so by reducing their stride length by 10%. [21] Concerning stride rate, a 10-20% increase substantially reduces joint loading, which is beneficial in reducing the risk of developing a running-related injury. [25][26][27][28] Beyond a 15-20% increase, a change appears in the foot strike, which may explain the unloading effect better than the frequency itself. [22] This is also why we chose to work at constant speed: on the one hand, to stay within the real conditions of each participant, and on the other, not to induce changes in the running pattern through an exaggerated increase in stride rate that does not match the runners' natural run. We observed a significant difference in the angle of attack. We also observed a more frequent heel strike (22/26) in the run with shoes, which decreased by half (11/26) in favor of a midfoot or forefoot strike during the run without shoes. It has been shown that runners in a first trial of minimalist footwear who adopted a forefoot strike decreased their loading rate by 41% (Hashish et al.). With a midfoot or forefoot strike, they are able to disperse impact forces more efficiently; this could be a result of a dense collection of plantar mechanoreceptors that "feel the ground". [1] A recent meta-analysis reported a significant relationship between vertical load rates and tibial stress fracture in heel-strike runners. [29] Conversely, forefoot runners have lower vertical load rates. [30] A perverse effect of the strike change is that the runners experience a series of repeated microtraumas at their metatarsals, leading to stress fractures and fractures of the plantar fascia (Table 1). We confirm the preliminary data of Murphy.
[11] The main influence of barefoot running on the patellofemoral joint (PFJ) is related to a smaller knee flexion angle during the stance phase of running, which decreases the demand on the quadriceps muscles. A 12% decrease in peak PFJ stress has been measured when comparing barefoot running with shod running. [31] There is a decrease in hip flexion: the trunk is projected forward relative to the lower limbs, which facilitates the run. The knee shows greater flexion without reaching statistical significance (P = 0.056); more knee flexion allows better shock wave absorption. The ankle presents a decrease in dorsiflexion, which leads to the more forward foot strike typical of the minimalist stride. A slow transition brings a gain in terms of decreasing stride length, which limits the load on the shin. We do not observe a correlation with the frequency or the angle of attack that could decrease the pressure on the joints and the load on the tibia. A greater variation of minimalism could bring a benefit by decreasing the flexion angle of the knee, which decreases the demand on the quadriceps muscles, but with a smaller decrease in cycle length and an increase in cycle time.

Limitations of the study

We conducted a study on a 5-min run, whereas lesions appear unpredictably over a period of 1-3 months. In the Salzler study, 86% of participants reported an injury after an average of 5 weeks (1-19 weeks) from the start of the transition; metatarsal head, calf, and arch pain were the most common injuries. [32]

CONCLUSION

The benefit of minimalism is marked by a decrease in stride length, an increase in frequency without exceeding 15%, and an improvement of the joint angles. A slow transition brings a gain in terms of decreasing stride length, which limits the load on the shin. A greater variation of minimalism could bring a benefit by decreasing the flexion angle of the knee, which decreases the demand on the quadriceps muscles, but with a smaller decrease in cycle length and an increase in cycle time.
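The statistical pipeline described in the Methods (a Kolmogorov-Smirnov normality check followed by a parametric or non-parametric comparison) can be sketched as follows. The paired versions of the tests are used here because each runner performs both conditions; the exact configuration used by the authors in Prism may differ, and the data and variable names are illustrative only.

```python
# Hedged sketch: choose between a paired t-test and a Wilcoxon signed-rank test
# based on a Kolmogorov-Smirnov normality check of the paired differences.
import numpy as np
from scipy import stats

def compare_conditions(shod: np.ndarray, barefoot: np.ndarray, alpha: float = 0.05):
    """Paired comparison of one variable (e.g. stride length) between the two runs."""
    diff = shod - barefoot
    standardized = (diff - diff.mean()) / diff.std(ddof=1)
    _, p_norm = stats.kstest(standardized, "norm")     # normality of the differences
    if p_norm > alpha:
        stat, p = stats.ttest_rel(shod, barefoot)      # parametric, paired
        test = "paired t-test"
    else:
        stat, p = stats.wilcoxon(shod, barefoot)       # non-parametric
        test = "Wilcoxon"
    return test, stat, p

# Toy usage with synthetic stride lengths (cm) for 26 runners.
rng = np.random.default_rng(4)
stride_shod = rng.normal(230, 15, size=26)
stride_barefoot = stride_shod - rng.normal(12, 4, size=26)
print(compare_conditions(stride_shod, stride_barefoot))
```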
Anuria, omphalocele, and perinatal lethality in mice lacking the CD34-related protein podocalyxin.

Podocalyxin is a CD34-related sialomucin that is expressed at high levels by podocytes, and also by mesothelial cells, vascular endothelia, platelets, and hematopoietic stem cells. To elucidate the function of podocalyxin, we generated podocalyxin-deficient (podxl−/−) mice by homologous recombination. Null mice exhibit profound defects in kidney development and die within 24 hours of birth with anuric renal failure. Although podocytes are present in the glomeruli of the podxl−/− mice, they fail to form foot processes and slit diaphragms and instead exhibit cell-cell junctional complexes (tight and adherens junctions). The corresponding reduction in permeable glomerular filtration surface area presumably leads to the observed block in urine production. In addition, podxl−/− mice frequently display herniation of the gut (omphalocele), suggesting that podocalyxin may be required for retraction of the gut from the umbilical cord during development. Hematopoietic and vascular endothelial cells develop normally in the podocalyxin-deficient mice, possibly through functional compensation by other sialomucins (such as CD34). Our results provide the first example of an essential role for a sialomucin in development and suggest that defects in podocalyxin could play a role in podocyte dysfunction in renal failure and omphalocele in humans.

Biochemical and sequence analyses have shown that podocalyxin is a 150-165-kD transmembrane protein composed of a mucin domain, a disulfide-bonded globular domain, a transmembrane region, and a highly charged cytoplasmic tail with potential phosphorylation sites for protein kinase C and casein kinase II (Fig. 1). Structurally, it belongs to a large family of highly sulfated cell surface sialomucins of poorly understood function. The amino acid sequence, protein structure, and genomic organization of podocalyxin suggest it is most closely related to two other molecules, CD34 and endoglycan (Fig. 1; references 5 and 8). CD34 is the prototypic member of this family and has been widely used in human and mouse as a marker of hematopoietic progenitor cells and vascular endothelia (for a review, see reference 9). On high endothelial venules (HEVs), CD34 and podocalyxin have been shown to act as adhesive ligands for L-selectin expressed by leukocytes (10). This interaction requires HEV-specific glycosylation of CD34, and similar modifications have not been observed on the majority of vascular endothelial cells (ECs) or hematopoietic cells. Thus, it is remarkable that despite extensive use as a marker (>7,000 CD34-related publications), the functional role of CD34 on hematopoietic cells and most vascular cells remains obscure. Mice lacking CD34 develop normally (11,12), although very subtle defects in hematopoietic maturation and function can be detected in in vitro assays or in in vivo assays of allergic responses. Endoglycan is the newest member of this gene family and has a broader distribution, as it is expressed by hematopoietic progenitors and some mature blood cells, vascular endothelia, smooth muscle, and a subset of neuronal cells (13). Its function, too, is unknown. Podocalyxin has been studied most extensively as a marker of kidney podocytes, which are epithelial cells that form a meshwork supporting the glomerular capillaries.
The cellular architecture of podocytes can be described in three segments: the cell body, the major processes (MPs), and the foot processes (FPs; reference 14). The cell body and the MPs of the podocyte lie in the urinary space and are attached to the glomerular basement membrane (GBM) via the FPs. During glomerular development podocalyxin is first expressed on the apical surface of podocytes as they differentiate from epithelial precursors (15). Its expression then migrates laterally between cells and closely mirrors the appearance of open intercellular spaces between podocytes and the migration of occluding junctions down towards the basal surface of the podocyte. Close to this basal surface highly interdigitating FPs form, and this is coupled to the modification of intercellular junctions to form slit diaphragms (SDs; reference 15). The slit diaphragm is a modified adherens junction (AJ) that defines the apical and basolateral surfaces of the mature podocyte FPs (16). During glomerular filtration, plasma is filtered through fenestrae in the capillary endothelium and then through the GBM. In the final stage of ultrafiltrate production the filtrate passes through the SDs between the interdigitating FPs. On mature podocytes, podocalyxin is a major component of the apical cell surface, where it has been proposed to help maintain the spacing between the interdigitating FPs by charge repulsion (15). The proper function of podocytes as filters is critically dependent on the anionic nature of the glycocalyx covering the podocytes (17)(18)(19). In the 1970s it was shown that neutralization of this charged glycocalyx by infusion of polycations, or by treatment with glycosidases to remove negatively charged carbohydrates, results in a rapid remodelling of the podocyte cytoskeleton with "effacement", or loss of the fine, interdigitating FP structure and SDs. This, in turn, resulted in nephrosis and massive proteinuria (17)(18)(19).

Figure 1. Protein structure and genomic organization of podocalyxin and the CD34 subfamily of sialomucins. (A) Schematic representation of the structure of murine CD34, podocalyxin, and endoglycan based on predicted protein sequences. White boxes, mucin domains; black boxes, cysteine-rich domains; circles, potential NH2-linked carbohydrates; horizontal bars with or without arrows, potential O-linked carbohydrates; arrows, potential sialic acid motifs on O-linked carbohydrates; protein kinase C, TK, and CK2, potential phosphorylation sites; DTHL or DTEL, potential PDZ domain docking sites. (B) Schematic showing the genomic organization of the human cd34, podxl, and endgl genes based on sequence contigs identified in the human sequence database (see Materials and Methods). The locations of the coding sequences for the signal peptide (S, purple), mucin domain (S-T-P, blue), cysteine-rich domain (C-C, green), transmembrane region (TM, orange), and cytoplasmic tail (Cyt tail, red) are indicated. Numbers indicate exon size in base pairs. The human cd34 gene spans ~25 kbp with a first intron of 11 kbp. The human podxl gene was found to span ~60 kbp with a first intron of 45 kbp. The human endgl gene spans >23 kbp with a first intron of 13 kbp. The 3′ introns of all three genes are relatively small; the last four exons in each case are separated by <3 kbp.
With the later discovery that podocalyxin is the most abundant heavily charged sialomucin expressed by podocytes, it was speculated that alterations in podocalyxin could be the principal cause of these experimentally induced nephrotic syndromes (1,2). To determine whether podocalyxin plays an essential role in renal, vascular, and hematopoietic function we have disrupted the podocalyxin-encoding gene in mice ( podxl Ϫ / Ϫ ). All podxl Ϫ / Ϫ mice die within the first 24 h of postnatal life from profound defects in kidney and/or gut formation (herniation or omphalocele). Surprisingly, the loss of podocalyxin expression did not result in the massive proteinuria characteristic of leaky podocyte filtration in human nephrotic syndromes. Instead, newborn podxl Ϫ / Ϫ mice were anuric (no measurable urine in the bladder) and failed to form FPs and SDs. Our data suggest that podocalyxin is indispensable for normal murine development and that its mutation could play a role in podocyte-related renal failure and in omphalocele (see Discussion). Materials and Methods Genomic Structure and Peptide Motif Analyses. The 8 exons of cd34 , podxl , and endgl genes were found by sequence analysis and database searches. Human cd34 cDNA (GenBank/EMBL/DDBJ accession no. M81104) was used to identify the human genomic locus on clone 8L2 of chromosome 1q32.2-q32.3 (GenBank/ EMBL/DDBJ accession no. AL035091). Human podxl cDNA (GenBank/EMBL/DDBJ accession no. NM005397) was aligned with the working draft sequence of human chromosome 7 clone RP11-180C16 (GenBank/EMBL/DDBJ accession no. AC008264), and human endgl cDNA (GenBank/EMBL/DDBJ accession no. AF219137) was aligned against the working draft sequence of human chromosome 15 clone RP11-221C9 (Gen-Bank/EMBL/DDBJ accession no. AC023593). For structural predictions of CD34, podocalyxin, and endoglycan, potential O -linked glycosylation sites were predicted using the www.cbs.dtu.dk/services/NetOGlyc server, and potential phosphorylation sites were predicted using subsequence analysis of protein patterns in MacVector (Oxford Molecular). Targeted Disruption of the Podocalyxin (podxl) Gene. A partial mouse podocalyxin cDNA clone (534 bp) was produced by reverse transcription PCR of mouse kidney RNA using primers to the 3 Ј -coding sequence of chicken podocalyxin: 5 Ј -GAATTCG-GCTTCTCGTGGAACTGCTGTGTCACT-3 Ј and 5 Ј -GAAT-TCGGCTTCTCCTCATCTAGGTCATCCTTGG-3 Ј . This probe was used to screen a 129/SvJ phage library in FIXII (Stratagene), and 3 independent mouse podxl genomic clones were identified and purified. A mutant allele of the podxl genomic locus was generated by inserting a neomycin resistance cassette in the antisense orientation between the XbaI site in exon 5 and an engineered XhoI site in exon 8 of the m-podxl gene, thereby deleting the majority of exons 5, 6, 7, and 8. The 5.5-kb XbaI fragment containing part of the 5th exon was subcloned into the XbaI site of the pPNT vector (28) to create the 5 Ј arm of homology in the knockout (KO) vector (29,30). The 3 Ј arm of homology was created by PCR of an 892-bp fragment from the 3 Ј region of the MEP21 coding sequence using primers 5 Ј -GCGGCCGCTTACTAGGCATGGTTTTATG-3 Ј and 5 Ј -CTCGAGACACCTCTGATCTGTCTGCTGGTC-3 Ј , and this was cloned into the NotI and XhoI sites of pPNT. The resulting vector was electroporated into E14 embryonic stem (ES) cells and these were selected for resistance to G418 and against sensitivity to ganciclovir. 
Of 672 resistant ES cell clones, 384 were screened by Southern blot analysis for the presence of a 5-kb HindIII fragment using the strategy depicted in Fig. 2. Blastocyst injection of these ES cells and production of germline-transmitting chimeras was performed by RCC Ltd., using standard protocols. Genomic DNA from mouse tail cuts or tissue samples was isolated using the DNAeasy Tissue Kit (QIAGEN). Genotypes were determined by PCR using wild-type and KO specific primers (5′-AGTGAGAGACACATTGGGTAACT-3′ wild-type allele specific, 5′-TATCGCCTTCTTGACGAGTTCTT-3′ KO allele specific, and 5′-GAGGATTTGTGCACTCTACATGTG-3′ common primer; GIBCO BRL). The wild-type allele resolves as a 760-bp band, while the KO allele resolves as a 550-bp band.

Nucleic Acid Analyses. All hybridization probes were labeled with [32P]dCTP by random hexamer priming as described by Feinberg and Vogelstein (31). For Southern blot analyses, ES cell clones were grown in 48-well plates. Cells were lysed and genomic DNA was ethanol precipitated and digested with HindIII in the plates using standard protocols (32). DNA samples (~40 μg per sample) were resolved on 0.7% agarose gels, denatured with NaOH, and transferred to nylon membranes. Membranes were hybridized with radiolabeled probes corresponding to the 2,200-bp ApaI fragment of the podocalyxin 3′ untranslated region (see Fig. 2). Hybridization of radiolabeled probes and removal of unbound probe were performed in NaHPO4/SDS buffer as described by Church and Gilbert (33). In this assay, the wild-type podxl locus resolves as a 4-kb HindIII fragment while the homologously recombined locus resolves as a diagnostic 5-kb fragment. For Northern blot analysis, total RNA was prepared by lysis and fractionation in guanidinium/acetate/phenol/chloroform as described by Chomczynski and Sacchi (34). Approximately 10 μg of each RNA was resolved on a 1% agarose-formaldehyde gel and blotted onto nylon membranes (GeneScreen; Dupont). [32P]-labeled probes were generated from the 490-bp SmaI-HindIII fragment encoding the mucin domain of the mouse podocalyxin cDNA or from a 1,000-bp PstI fragment of the glyceraldehyde-3-phosphate dehydrogenase cDNA as a control for RNA loading (35).

Histological Analyses and Immunohistochemistry. Immunoperoxidase staining of snap-frozen embryo tissues was performed using anti-PCLP-1 antibody (7), anti-CD34-biotin (BD PharMingen), and anti-CD31 (platelet endothelial cell adhesion molecule [PECAM]-1; BD PharMingen), followed by Vector Elite developing reagents (Vector Laboratories) and methyl green counterstain, as described previously (5). For paraffin sections, embryonic day 19 kidneys were fixed in 10% formalin, embedded in paraffin, and sectioned. After deparaffinization and hydration the tissue sections were treated with target unmasking fluid (Signet Labs) for 16 h at 90°C (34) and stained with monoclonal antibodies to glomerular epithelial protein (GLEPP) 1, nephrin, or control rat IgG. Antibody binding was revealed by staining with biotinylated goat anti-rat antibodies, followed by streptavidin-peroxidase and 3,3′-diaminobenzidine developing reagent, using ABC reagents and protocols from Vector Laboratories. Sections were counterstained with periodic acid-Schiff's reagent using standard protocols (3). Immunofluorescence staining was performed essentially as described previously (36).
Cryostat sections of mouse E19 kidneys were fixed in acetone and incubated with blocking solution (10% goat or donkey serum in PBS). Primary antibodies used were chicken anti-mouse GLEPP1 (courtesy of Dr. Roger Wiggins, University of Michigan, Ann Arbor, MI); rabbit anti-mouse nephrin (courtesy of Dr. Lawrence Holzman, University of Michigan); rabbit anti-mouse collagen α4(IV) and laminin α5 (courtesy of Dr. Jeff Miner, Washington University School of Medicine, St. Louis, MO); mouse monoclonal anti-synaptopodin (courtesy of Peter Mundel, Albert Einstein College of Medicine, Bronx, NY); rabbit anti-human Wilms' tumor protein 1 (Santa Cruz Biotechnology, Inc.); rat anti-mouse PECAM-1 (BD PharMingen); and rat anti-mouse laminin, laminin β1 and β2 (Chemicon International). For collagen α4(IV) staining, sections were pretreated with 6 M urea, 0.1 M glycine for 1 h at 4°C (37). Secondary antibodies were FITC-conjugated goat anti-rat (Southern Biotechnology Associates, Inc.), Cy3-conjugated goat anti-rabbit (Jackson ImmunoResearch Laboratories), FITC-conjugated goat anti-mouse (Cappel), and FITC-conjugated donkey anti-chicken (Lampire Biological Laboratories). All incubations were carried out in blocking solution. A Nikon Diaphot microscope and a Hamamatsu digital camera were used for acquisition of immunofluorescence images. Transmission Electron Micrographs. For transmission electron micrographs (TEMs), newborn kidneys were fixed in cold glutaraldehyde/cacodylate buffer. After plastic embedding, 1-μm sections were stained with toluidine blue. Selected samples containing glomeruli were thinly sectioned, stained with uranyl acetate, and examined by TEM as described previously (38). The most mature glomeruli on the sections with open capillary loops and urinary spaces were examined. Blood, Urine, and Amniotic Fluid Analysis. Newborn mice were left for 30 min under a heating lamp and then killed by decapitation. Blood was collected from the aorta in 50-μl heparinized capillary tubes, and urine was collected in 5-μl capillary tubes after gentle massage of the bladder. The volume of urine produced from each mouse was measured and a 2-μl sample was analyzed for protein content by 10% SDS-PAGE under nonreducing conditions. After electrophoresis, proteins were visualized by silver staining. Blood samples were clotted and serum was analyzed for creatinine/urea content using a modified creatinine kit (Sigma-Aldrich). Amniotic fluid was collected at 15 d after coitum and 1 μl of unconcentrated amniotic fluid was analyzed by 10% SDS-PAGE. As an indirect measure of blood pressure, day 18 embryos were rapidly dissected and placed in 1 ml of PBS with 1 mM EDTA. The umbilical cord was severed and embryonic blood was allowed to flow freely for 60 s. The cord was then clamped, the embryo and placenta were removed, and the number of RBCs released into the PBS was counted. Generation of Podocalyxin-Deficient Mice. To address the role of podocalyxin in kidney, vascular, and hematopoietic development, we generated podocalyxin-deficient mice by homologous recombination (podxl−/−, Figs. 1 and 2; reference 28). The podxl recombination vector was designed to delete the majority of the coding sequence for exons 5, 6, 7, and 8, and replace them with the neomycin-resistance gene in the antisense orientation (Fig. 2 A).
These exons encode 55 amino acids in the extracellular/juxtamembrane region, the transmembrane region, and the entire cytoplasmic tail of podocalyxin and represent the most highly conserved domains across species (5,7,8). The vector was transfected into E14 ES cells and 384 G418- and ganciclovir-resistant clones were screened for homologous recombination by Southern blot analysis (Fig. 2 B). 16 clones were identified with the appropriate recombination and four were injected into blastocysts. Of the four ES cell clones, one gave >90% chimerism in six of the resulting offspring and all of these mice transmitted the targeted allele to their progeny, as determined by PCR analysis (Fig. 2 B). These were backcrossed for more than five generations onto both Balb/c and C57BL/6 backgrounds. Sibling crosses to generate wild-type (podxl+/+), heterozygous (podxl+/−), and KO mice (podxl−/−) consistently yielded offspring of similar phenotype in both genetic backgrounds. To confirm that podocalyxin expression had been ablated in podxl−/− mice, Northern blot analysis was performed with RNA isolated from 16-d fetal lungs of podxl+/+, podxl+/−, and podxl−/− embryos, using the 5′ region encoding the podocalyxin extracellular domain (lying outside the region of recombination) as a probe (Fig. 2 C). This analysis revealed the presence of the expected 5-kb podocalyxin transcript in wild-type embryos, reduced expression in heterozygotes, and a complete lack of hybridizing transcripts in podxl−/− embryos, suggesting a loss of expression from the targeted allele. This was confirmed at the protein level by cell surface immunofluorescence analysis of 15-d fetal liver cells using a monoclonal antibody to mouse podocalyxin (Fig. 2 D). Hematopoietic cells from podxl+/+ and podxl+/− mice expressed high levels of podocalyxin on their surface, while podxl−/− cells showed no reactivity. From these analyses we conclude that the engineered rearrangement in the podocalyxin locus has resulted in a complete loss of podocalyxin expression in podxl−/− mice. Perinatal Lethality, Omphalocele, and Edema in podxl−/− Mice. PCR and Southern blot analyses of 6-wk-old progeny from heterozygous crosses between podxl+/− mice revealed the complete absence of any podxl−/− offspring, suggesting embryonic or perinatal lethality in mice lacking podocalyxin. To more accurately pinpoint the time of disappearance, embryos were harvested at various stages of development and genotyped by PCR. Although we observed the expected Mendelian frequency of podxl+/+, podxl+/−, and podxl−/− offspring throughout embryonic development, all podxl−/− mice died within the first 24 h of postpartum life (Table I). No statistically significant differences were observed in the birth weight of podxl+/+ newborns and their podxl−/− littermates. Likewise, no significant differences were noted in the organ weight or macroscopic appearance of the lungs, liver, heart, gut, and kidneys of the majority of the podxl−/− mice (although two notable defects were observed in a subset of the podxl−/− mice, see below). Perinatal lethality persisted when podxl−/− mice were delivered by Caesarean section and placed with foster mothers. Thus, the data suggest that the loss of podocalyxin results in perinatal lethality due to defects intrinsic to podxl−/− mice. Although the majority of newborn podxl−/− mice displayed no overt defects, two significant anomalies were observed in a subset of these mice: edema and omphalocele.
Careful analysis of the embryos obtained by Caesarean section shows that ~25% of the podxl−/− embryos (3/12 at embryonic day 18 and 4/15 at embryonic day 15) exhibited mild to severe edema. This usually appeared as subdermal swelling as early as 15 d after coitum (Fig. 3) and a mildly turgid trunk and appendages at day 18 (data not shown). More strikingly, ~30% of all podxl−/− mice were born with an "omphalocele" or herniation of the gut into the umbilical cord (Fig. 4, A and B). No direct correlation was observed between embryonic edema and omphalocele; podxl−/− embryos at day 18 were equally likely to have one, the other, or both defects. However, neither defect was ever observed in any day 18 podxl+/+ or podxl+/− mice, suggesting a strict correlation with podocalyxin loss. Omphalocele is a normal physiologic process that occurs transiently during the embryogenesis of all mammals. It is known that at midgestation, the rapidly enlarging visceral organs soon exceed the limiting space of the peritoneal cavity. As a result, the developing gut herniates into the umbilical space (this begins at embryonic day 12 in mice; reference 51). Normally this "physiologic omphalocele" is resolved by the subsequent expansion of the peritoneal cavity and retraction of the gut from the umbilical cord. In mice, this retraction is completed by the 16th day of embryonic development (51). To assess the ontogeny of gut herniation in podocalyxin-deficient mice, timed matings were performed and podxl+/+, podxl+/−, and podxl−/− embryos were evaluated at daily intervals for the presence or absence of omphalocele (Fig. 4 D). As expected, wild-type embryos showed a complete resolution of the physiologic omphalocele by embryonic days 15 or 16. In contrast, virtually all podxl−/− embryos (13/14) displayed omphalocele up to embryonic day 17. Subsequently (and before birth), 70% of these mice resolved the omphalocele, resulting in a 30% incidence in newborns. podxl+/− mice displayed an intermediate phenotype (8/35 displayed omphalocele on day 17), suggesting partial impairment due to loss of one podxl allele (Fig. 4 C). This phenotype may be linked to the prominent expression of podocalyxin on mesothelial cells lining the serous body cavities (Fig. 4 D). Thus, although omphalocele is only observed in a subset of neonates, the data suggest that all podocalyxin-deficient mice (and a subset of heterozygotes) present with some degree of abnormal omphalocele during development. Podocalyxin Deficiency Leads to Symptoms Consistent with Neonatal Anuric Renal Failure. The observation that only a subset of newborn podxl−/− mice have overt defects, and yet 100% of these mice die perinatally, prompted us to perform more detailed analyses of organ function in these mice. Since podocalyxin is most prominently expressed in kidneys, we aspirated bladder contents to analyze urine from newborns. Strikingly, 100% of the podxl−/− newborns (12/12) exhibited a lack of urine in the bladder, suggestive of severely impaired kidney function (Fig. 5 A). Consistent with anuria (and not proteinuria), SDS-PAGE analysis of blood serum and amniotic fluid proteins showed no significant differences in protein constituents from wild-type and KO mice. Anuria is one of several clinical features of acute renal failure and can lead to intravascular volume expansion and hypertension. Other symptoms include increased blood pressure and elevated levels of serum creatinine and urea.
Although direct measurement of blood pressure was not possible in newborn mice, we attempted to measure this indirectly by assessing cardiac output. Day 18 embryos were collected by Caesarean section, the umbilical cord of each embryo was severed, and the RBCs were collected in a solution of PBS/EDTA for 60 s. These were then counted as a rough measure of cardiac output/blood pressure (Fig. 5 B). While no significant differences were observed between podxl+/+ and podxl+/− embryos, podxl−/− embryos exhibited an ~15-fold increase in released RBCs over the 60-s interval (most were released within the first 15 s). Consistent with this observation, podxl−/− embryos frequently appeared pale when the umbilical cords were severed in solutions that prevented clotting. This did not reflect an inability to clot, as no differences in pallor were observed when the umbilical cords were severed in the absence of aqueous anti-coagulants, and we have not observed any differences in the frequency or function of platelets in podxl−/− mice (see below). An increase in blood pressure may offer an explanation for the edema observed in podxl−/− embryos: excessive pressure could drive fluid into the extravascular spaces. Although there were no significant differences in serum creatinine/urea levels between wild-type and podxl-deficient mice, it is likely that such changes would only appear after 1 or 2 d of postpartum life (during embryogenesis these would be removed maternally). Newborn mice with anuric renal failure have been described to consistently die in the first day of life (52-55), and thus it is likely that podxl−/− mice die due to kidney failure. Podocalyxin Is Required for Normal Formation of Podocyte FPs in Developing Kidneys. To more clearly define the defects in podxl−/− kidneys, we examined glomerular development in wild-type and podxl−/− animals. Periodic acid-Schiff staining of sections from newborn kidney revealed the presence of glomeruli in all stages of development in both podxl−/− and podxl+/+ mice, but with subtle differences. In some of the mature juxtamedullary glomeruli of the podxl−/− kidneys, lucent areas (vacuoles) were observed that were distinct from the normal capillary loops (Fig. 6 A, and data not shown). During embryogenesis the onset of podocalyxin expression correlates with early podocyte differentiation from mesenchymal progenitors at the "S-shaped body" stage of glomerular development (15). To determine whether podocalyxin loss results in a failure of podocyte maturation, kidneys from newborn mice were sectioned and analyzed by immunohistochemistry for expression of two later markers of podocyte differentiation: GLEPP1, a transmembrane tyrosine phosphatase of kidney podocytes (56), and nephrin, a transmembrane protein associated with podocyte SDs that plays a critical role in podocyte function (20,57). Wild-type, podxl+/−, and podxl−/− kidneys all displayed the expected GLEPP1 and nephrin staining on podocytes in the glomeruli (Fig. 6 A, and data not shown), indicating that podocyte differentiation and expression of these maturation markers is not impaired by the lack of podocalyxin. However, the morphology of the podocytes was clearly altered in podxl−/− mice: consistent with the periodic acid-Schiff stains, distinct lucent areas suggestive of void spaces and vacuoles were observed within the podocytes of podxl−/− mice (Figs. 6 A and 7 A).
To ensure that this was due to alterations intrinsic to the podocytes and not to alterations in protein expression by the vascular ECs or GBMs, dual-label immunofluorescence analysis was performed with vascular/GBM markers (PECAM-1, collagen α4(IV), and laminins α5, β1, and β2) and podocyte markers (Wilms' tumor gene product [58], GLEPP1 [56], nephrin [20,57], and synaptopodin [59]). We found no evidence of loss of expression of any GBM proteins in wild-type and podxl−/− kidneys (Fig. 6, and data not shown). The mature glomeruli of the newborn podxl+/+ and podxl−/− mice showed similar expression levels of collagen and laminin isoforms in mature GBM (Fig. 6, B and C, and data not shown). Normally, podocyte FP and GBM proteins appear to have an overlapping staining pattern at the light microscopy level. This reflects the fact that the thin FPs of podocytes are in very close proximity to the GBM rather than true "colocalization". Interestingly, we noted a consistent reduction in the degree of "side-by-side" podocyte and GBM-marker staining in podxl−/− kidneys. For example, while dual immunofluorescence labeling showed the normal close proximity of the podocyte markers nephrin and GLEPP1 with the GBM in wild-type mice (note the red/green proximity in Fig. 6, B and C), there was a marked decrease in the degree of overlap in the podxl−/− glomeruli. This indicates either a greater-than-normal distance between the podocyte apical membranes and the GBM (cell thickening) or incomplete coverage of the basement membranes by podocyte FPs. To assess the defects in the podxl−/− podocytes more precisely, kidneys were analyzed by TEM. TEM of the most mature glomeruli from newborn podxl+/+ and podxl+/− mouse kidneys showed well-developed MPs and FPs (Fig. 7 A, and schematic in Fig. 8). By contrast, in the podxl−/− kidneys, MPs were greatly reduced in number and the FPs and SDs were completely absent. Interestingly, numerous junctional complexes (JCs) (tight junctions [TJs] and AJs) were observed between adjacent podocytes (Fig. 7, A and B), consistent with impermeable cell-cell junctions. Moreover, the podocytes in podxl−/− mice completely engulf the vasculature with their cell bodies and the urinary spaces were markedly reduced. Consistent with light microscopic observations (see above), large intracellular vacuoles were present in the podocytes of podxl−/− mice (Figs. 6 A and 7 A). On the endothelial side of the GBM the ECs had some, but fewer, fenestrae and in general the EC layer appeared thicker than in wild-type mice (Figs. 7 and 8). These ultrastructural findings are consistent with the lack of urine production in podxl−/− mice and this is most likely the cause of death in podocalyxin-deficient mice (Fig. 8). Vascular Development. Since podocalyxin is also expressed on vascular ECs, we next examined the distribution of these cells in whole-mount sections of 16-d podxl+/+ and podxl−/− embryos by immunocytochemistry. Stains of embryos with CD31 (PECAM-1) and CD34 antibodies showed no detectable differences in the expression pattern of these molecules between podxl+/+ and podxl−/− mice in brain, gut, kidney, or lung. However, a consistent increase in the expression levels of CD34 was detected, particularly in kidney and lung (Fig. 9 A, and data not shown). Quantification of CD34 mRNA from kidney revealed a three- to fourfold increase in CD34 transcripts in podxl−/− mice versus podxl+/+ mice (Fig. 9 B),
while the frequency of the control hypoxanthine-guanine phosphoribosyl transferase (HPRT) mRNA was found to be identical in null and wild-type mice. This result suggests that a loss of podocalyxin expression by the vasculature may result in a compensatory increase in the expression of the related sialomucin, CD34. Consistent with normal vasculature in areas of CD34/podocalyxin coexpression, TEM analyses of newborn lungs from podocalyxin-deficient mice revealed no overt defects in formation of pulmonary capillaries or bronchial-associated epithelia (data not shown). Unfortunately, the third member of the CD34 family, endoglycan, is expressed by a variety of nonvascular cell types, making the assessment of its upregulated expression on vessels problematic (unpublished data, and reference 13). We conclude that the vascular development in podxl−/− mice is essentially normal, possibly due to the upregulated expression of the related molecule, CD34.
Figure 6. Histological analysis of podxl−/− kidneys. (A) Immunohistological analysis of expression of the podocyte-specific tyrosine phosphatase, GLEPP1, in podxl+/+, podxl+/−, and podxl−/− kidneys. The capillary loops of the podxl+/+ and podxl+/− glomeruli are indicated by black arrows. podxl−/− glomeruli had similar capillary loops but also had numerous lucent vacuoles surrounded by GLEPP1 staining (red arrows). Scale bars, 50 μm. (B) Dual-label indirect immunofluorescence of newborn mouse kidney from podxl+/+ and podxl−/− (top and bottom, respectively) with antibodies to GLEPP1 (donkey anti-chicken FITC-labeled secondary antibody) and collagen α4 (type IV) (goat anti-rabbit Cy3-labeled secondary antibody). Staining for the apical membrane podocyte marker GLEPP1 and the basement membrane protein collagen α4 (type IV) is seen in mature glomeruli in podxl+/+ and podxl−/− mice. Superimposition of these images shows areas of strong overlap (orange) in the podxl+/+ mice. In the podxl−/− mice the area of overlap (orange) is diminished, representing a greater distance in the localization of the apical membrane marker (GLEPP1) from the basement membrane. (C) Dual-label indirect immunofluorescence of newborn mouse kidney from podxl+/+ and podxl−/− (top and bottom, respectively) with antibodies to the basement membrane protein laminin β2 (FITC-conjugated goat anti-rat) and the podocyte slit diaphragm protein nephrin (anti-rabbit Cy3-labeled secondary antibody). Strong staining is seen for both markers in podxl+/+ and podxl−/− mice. The close proximity of the slit diaphragm (nephrin staining) to the basement membrane (laminin β2 staining) can be appreciated in the superimposed images by the presence of overlapping staining (orange) in the podxl+/+ mice. In the podxl−/− glomeruli the superimposed images show diminished overlap (orange staining) of the expression of nephrin and laminin β2.
Hematopoietic Development Is Normal in podxl−/− Embryos. Because our own previous studies in the chick (5,50) and recent studies in the mouse (7) have shown that podocalyxin is a marker of the earliest hematopoietic progenitors, we performed an extensive analysis of the hematopoietic development in wild-type and podocalyxin-deficient mice.
Hematopoietic tissues from podxl+/+, podxl+/−, and podxl−/− embryos were stained with antibodies to CD34, Sca-1, c-kit, Ter119, CD41, Mac1, Gr-1, B220, CD3, CD4, and CD8 to assess the numbers of hematopoietic progenitors and erythroid, megakaryocytic/platelet, myeloid, granulocytic, and B and T lineage cells, respectively. We observed no differences in the frequency or phenotype of any hematopoietic lineages in 15-d fetal liver, 18-d bone marrow, or 18-d spleen of podxl−/− embryos. Likewise, we observed no defects in the formation or localization of hematopoietic and vascular cells in podocalyxin-deficient yolk sac, lung, heart, liver, or gut. Thus, the data suggest that podocalyxin is either dispensable for the formation of these cell types, or its loss can be compensated for by other related molecules. Discussion In this study, we have generated mice lacking the sialomucin, podocalyxin. These mice die during the first day of life with severe kidney abnormalities and a pathology consistent with neonatal anuric renal failure. While some of the null mice had edema and/or displayed omphalocele, most appear normal at birth. Thus, despite the expression of podocalyxin by most vascular endothelia, a subset of mesothelial cells, and hematopoietic stem cells, the abnormalities in the podxl−/− mice were largely confined to the kidney. Role of Podocalyxin in Glomerular Structure and Function. In kidneys, the final stage of glomerular filtrate production occurs when the ultrafiltrate passes through the filtration slits between neighboring podocyte FPs. The specialized cell-cell junctions between podocyte FPs forming the slit diaphragm provide the last barrier for filtrate production (Fig. 8). The permeability of the glomerular filter is highly dependent on the filtration slit surface area and the filtration properties of the slit diaphragm (60,61). The podocytes of podxl−/− mice do not form FPs or SDs and instead form impermeable TJs. The absence of SDs and the reduction of filtration slit area due to the lack of interdigitating FPs in the podxl−/− mice probably lead to glomerular filters with reduced permeability that result in anuria at birth.
Figure 8 (legend, in part). For ease of presentation, the mesangial cells that would normally link the capillary loops (disrupting the layer of podocytes) have been left out of the diagram. The lack of interdigitating FPs in the podxl−/− mice leads to a lack of filtration slit area. This, along with the decrease in the fenestration of the glomerular capillaries, is thought to lead to a decrease in the potential area for filtration and the anuria that occurs in the podxl−/− mice.
It has been speculated that the charge of podocalyxin is required for maintaining the spacing between FPs (15), since podocalyxin is the major constituent of the glycocalyx at their apical surface. This is supported by the fact that the inducible, ectopic expression of podocalyxin in Chinese hamster ovary cells and Madin-Darby canine kidney epithelial cells results in the inhibition of cell aggregation and in an altered organization of junctional proteins in adherent cell monolayers (62). Moreover, ectopic expression was shown to inhibit the formation of electrically resistant monolayers of epithelial cells, indicating that podocalyxin can block the formation of TJs. Our data clearly support and extend this finding; in the absence of podocalyxin, JCs persist between podocytes and they fail to migrate down the lateral surface towards the basement membrane, where they are normally modified to form SDs.
The failure of the podocytes in podxl−/− mice to form permeable cell-cell junctions (SDs) is consistent with a role for podocalyxin as an antiadhesin on the surface of the podocytes and as a potential regulator of cell junctions. It has been suggested that one way podocalyxin regulates cell-cell junctions is by interacting with the actin cytoskeleton, possibly via the actin-associated protein ezrin. The COOH-terminal tail of podocalyxin (D-T-H-L) is highly homologous to that of endoglycan and CD34 (D-T-H/E-L; Fig. 1 A). All three molecules contain the consensus sequence (X-S/T-X-V/I/L) recognized by proteins with PDZ protein interaction domains (63). PDZ-containing proteins can act as scaffolds to link transmembrane proteins in multiprotein complexes that may include the actin cytoskeleton (64). Indeed, one of us (unpublished data) recently found that the COOH terminus of podocalyxin interacts with NHERF-2, a PDZ domain-containing protein that can link transmembrane proteins to the actin cytoskeleton via ezrin (65). In light of the present studies, we feel it is likely that this linkage is important in the modification of JCs in mature podocytes. The vacuoles observed in the podocytes of podxl−/− mice are similar to the podocyte vacuoles seen in human or rodent models of renal disease (66-69). These vacuoles can occur in the setting of minimal change disease or in models where there is extensive podocyte injury (puromycin aminonucleoside-induced nephrosis). They have been hypothesized to result from the abnormal passage of fluid from the basolateral to apical surface of the podocyte in situations where there is extensive FP effacement or disruption of normal slit diaphragms (70,71). Considering the lack of permeable cell-cell junctions and FPs in the podxl−/− mice, these podocyte vacuoles may result from the lack of a paracellular pathway for filtrate production, and this "salvage pathway" may contribute to the bulk of fluid seen in the urinary space of the podxl−/− mice. Podocalyxin Deficiency: Relation to Other Podocyte-specific Mutations. Several proteins proposed to be involved in the formation of JCs between podocytes have been identified recently as crucial regulators of podocyte function. These include nephrin/NPHS1, podocin/NPHS2, CD2AP, P-cadherin, αβγ catenin, α-actinin-4, and ZO-1α (16, 20-23, 72). All of these proteins are expressed at the SDs between processes and have been speculated to participate in the formation of modified AJs (16,26). Nephrin and P-cadherin are thought to interact by homotypic recognition and to be responsible for maintenance of the filtration slits (16,20,26). CD2AP has been shown to interact with the intracellular domain of nephrin (23), whereas αβγ catenins bind the intracellular domain of P-cadherin. Human mutations in NPHS2 or α-actinin-4 result in proteinuria and FP effacement (21,22). Ablation of CD2AP and nephrin in mice leads to nephrotic syndrome with massive proteinuria and effacement of the FPs. A striking difference between podxl−/− mice and mice lacking nephrin or CD2AP is that the latter mice exhibit massive proteinuria and are still able to develop podocyte FPs (20,23), whereas the podxl−/− mice lack FPs and produce no urine. These results support the contention that podocalyxin has a fundamentally distinct function in podocytes.
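The PDZ-binding consensus quoted above, X-S/T-X-V/I/L, can be checked mechanically against the C-terminal tails given in the text (D-T-H-L for podocalyxin, D-T-H/E-L for CD34 and endoglycan). The following is a minimal illustrative sketch, not part of the original study; the four-residue window and the regular expression are simply one way of encoding the stated consensus.

```python
import re

# Class of PDZ-binding consensus quoted in the text: X-S/T-X-(V/I/L) at the C terminus.
PDZ_CONSENSUS = re.compile(r".[ST].[VIL]$")

def matches_pdz_consensus(tail):
    """True if the last four residues of the sequence fit X-S/T-X-(V/I/L)."""
    return bool(PDZ_CONSENSUS.search(tail.upper()))

if __name__ == "__main__":
    # The two tail variants quoted in the text (D-T-H-L and D-T-E-L); both should match.
    for tail in ("DTHL", "DTEL"):
        print(tail, matches_pdz_consensus(tail))
```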
Figure 9 B (legend, in part). Y-axis, relative level of cDNA product as determined by SYBR Green™ fluorescence; X-axis, number of PCR cycles; pink lines, podxl−/− mRNA with CD34 primers; purple lines, podxl+/+ mRNA with CD34 primers; light green lines, podxl−/− mRNA with HPRT primers; dark green lines, podxl+/+ mRNA with HPRT primers. One of four representative experiments with kidney mRNA from two independent mice of each genotype.
Aside from the proteins found in JCs, only a small number of other membrane proteins located at the apical surfaces of podocytes have been defined. They include podoplanin, a protein that is also expressed on nonpolarized cells (73), and GLEPP1, a transmembrane protein tyrosine phosphatase (38). GLEPP1 appears to regulate podocyte FP structure, since deletion of this gene (ptpro) leads to toe-like podocyte FPs that are shorter and broader than the FPs of normal mice. Although ptpro−/− mice are viable and develop normally, they have a reduced glomerular filtration rate and after partial nephrectomy display a predisposition to hypertension. Thus, although the defects in ptpro−/− mice are much milder than those of podxl−/− mice, they also display a reduced glomerular filtration rate due to abnormalities in FP formation. The fact that podxl−/− podocytes express the normal repertoire of both apical and junctional-complex podocyte markers suggests that maturation of these cells is relatively normal and that the observed defects in urine production are not due to defects in downstream protein expression. Rather, our results argue that podocalyxin loss leads directly to structural malformations of FPs. Similar to the neonatal lethality observed in podxl−/− mice, there are several other reports of anuric mice that die in the first day of life (52-55). However, it is noteworthy that most of these mutations result in much more severe kidney defects (agenesis) or in pleiotropic defects in other vital organs. Thus, podocalyxin loss represents the most selective, lethal anuria described to date. Role of Podocalyxin in Resolution of Physiologic Omphalocele. In normal mouse development, the gut herniates into the umbilical space through the umbilical ring beginning at embryonic day 12 and retracts back into the peritoneal cavity by the 16th day of embryonic development (58). This "physiologic omphalocele" is resolved by the expansion of the peritoneal cavity and retraction of the gut from the umbilical cord. While only a subset of podocalyxin-null mice displayed omphalocele at birth, all null mice showed a delay in omphalocele resolution in utero. Surprisingly, some heterozygotes showed a similar delay, suggesting a dosage effect of this mucin on the resolution of omphalocele. Omphalocele is a relatively common congenital birth defect in man, affecting ~1:6,000 children, but the molecular details of its etiology are poorly understood. In many cases omphalocele has been linked to syndromes with abdominal organomegaly or defects in the development of the anterior abdominal wall leading to failure of umbilical ring closure (74). However, examination of podxl−/− mice revealed no abdominal organomegaly. Because of the dosage effect and because podocalyxin is expressed by the mesothelial cells lining the peritoneal membrane, we speculate that the mesothelial function of this molecule is to provide an antiadhesive surface and facilitate retraction of the gut through the umbilical ring (see below).
Function and Redundancy of CD34-related Molecules in Development. Despite the widespread expression of sialomucins in many tissues, their function is still poorly understood and, until now, ablation of sialomucins has not resulted in a lethal phenotype. This suggests that most sialomucins are either dispensable for normal development or that there are mechanisms, such as redundancy, that compensate for their loss. The podxl−/− mice provide the first example of a sialomucin that is critically required for the normal development and postnatal survival of mice. Since podocalyxin deficiency causes defects in tissues that selectively express podocalyxin but not CD34 (gut mesothelial cells and podocytes), while coexpressing tissues are spared (vascular endothelia and hematopoietic precursors), it is tempting to speculate that CD34 and podocalyxin can, indeed, cross-compensate. Our observation that CD34 expression is upregulated in podxl−/− mice is consistent with this idea, and it will now be interesting to determine whether podxl−/−/cd34−/− double-KO mice exhibit hematopoietic and vascular defects (unpublished data). Two opposing hypotheses have been proposed for the functions of the CD34 family of sialomucins: adhesion and antiadhesion. In favor of the adhesion model, both podocalyxin and CD34, when expressed by HEVs, can act as adhesive tethers for activated leukocytes migrating into lymph nodes (8,10). Lymphocytes use L-selectin (a C-type animal lectin) to bind to specific glycoforms of CD34 and podocalyxin expressed on the surface of HEVs, and this is the initiating step in lymphocyte extravasation from blood into the lymph nodes. A caveat, however, is that L-selectin binding to these molecules is glycosylation-specific, and the appropriate modifications have usually only been observed on HEVs. Therefore, it is likely that in most tissues podocalyxin and CD34 have functions other than adhesion. Based on a number of studies, other researchers have speculated that podocalyxin (and in some circumstances CD34; reference 75) can act as an antiadhesion molecule or molecular "Teflon™" by virtue of its negatively charged mucin domain (1, 2, 15, 62, and see above). One exciting hypothesis is that, in fact, these molecules can serve both adhesive and antiadhesive functions. Under the majority of circumstances, these molecules provide a barrier to adhesion, increase monolayer permeability, and aid in modifying JCs. In the special case of the HEVs, however, these molecules provide dual functions. In the first step, they provide tethers for leukocytes expressing L-selectin. Subsequently, however, podocalyxin and CD34 move to the junctions between ECs, where they act to "spread" the endothelia and facilitate leukocyte transmigration. This is consistent with previous reports showing movement of CD34 to junctions in response to cell activation (75). Analysis of mice lacking multiple members of the CD34 family offers an opportunity to test this model and should clarify this issue.
2014-10-01T00:00:00.000Z
2001-07-02T00:00:00.000
{ "year": 2001, "sha1": "9af427f2e6a55315bf4ceb9ef28eb10a39c7eb44", "oa_license": "CCBYNCSA", "oa_url": "http://jem.rupress.org/content/194/1/13.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "9af427f2e6a55315bf4ceb9ef28eb10a39c7eb44", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
231942800
pes2o/s2orc
v3-fos-license
Three-fold Weyl points in the Schr\"odinger operator with periodic potentials Weyl points are degenerate points on the spectral bands at which energy bands intersect conically. They are the origins of many novel physical phenomena and have attracted much attention recently. In this paper, we investigate the existence of such points in the spectrum of the 3-dimensional Schr\"{o}dinger operator $H = - \Delta +V(\textbf{x})$ with $V(\textbf{x})$ being in a large class of periodic potentials. Specifically, we give very general conditions on the potentials which ensure the existence of 3-fold Weyl points on the associated energy bands. Different from 2-dimensional honeycomb structures which possess Dirac points where two adjacent band surfaces touch each other conically, the 3-fold Weyl points are conically intersection points of two energy bands with an extra band sandwiched in between. To ensure the 3-fold and 3-dimensional conical structures, more delicate, new symmetries are required. As a consequence, new techniques combining more symmetries are used to justify the existence of such conical points under the conditions proposed. This paper provides comprehensive proof of such 3-fold Weyl points. In particular, the role of each symmetry endowed to the potential is carefully analyzed. Our proof extends the analysis on the conical spectral points to a higher dimension and higher multiplicities. We also provide some numerical simulations on typical potentials to demonstrate our analysis. Introduction Weyl points are singular points on the 3-dimensional spectral bands of an operator with periodic coefficients, at which two distinct bands intersect conically. Much attention has been paid to looking for such fundamental singularities in various physical systems in the past few decades [2,4,10,28]. They are the hallmark of many novel phenomena. Many materials such as graphene exhibit such unusual singular points on their energy bands [10,28]. These singular points carry topological charges and play essential roles on the formation of topological states, for instance chiral edge states or surface states [7,18,19,30]. In the past decade, constructing and engineering the conically degenerate spectral points become one of the major research subjects in many fields. Accordingly, understanding the existence of these points on the energy bands and their connections to interesting physical phenomena are extremely important in both theoretical and applied fields. How to obtain and justify the existence of such degenerate points become urgent in various physical systems. For instance, it is well known that honeycomb structures give rise to the existence of Dirac points in 2-D systems. The existence of Dirac points in the periodic system was first reported by Wallace in the tight-binding model and demonstrated in the continuous systems by numerical and asymptotic approaches [1,3,7,33]. However, the rigorous justification on the existence of Dirac points for 2-D Schrödinger equation with a generic honeycomb potential was recently given by Fefferman and Weinstein [17]. They used very simple conditions to characterize honeycomb potentials and developed a framework to rigorously justify the existence of Dirac points. Their framework paved the way for the mathematical analysis on such degenerate points, and their method has been successfully extended to other 2-D wave systems [11,12,21,26]. There are also other rigorous approaches to demonstrate the existence of Dirac points. 
Lee treated the case where the potential is a superposition of delta functions centered on sites of the honeycomb structure [25]. Berkolaiko and Comech used the group representation theory to justify the existence and persistence of Dirac points [9]. The low-lying dispersion surfaces of honeycomb Schrödinger operators in the strong binding regime, and its relation to the tight-binding limit, was studied in [16]. Ammari et al. applied the layer potential theory to honeycomb-structured Minnaert bubbles [5]. Based on the rigorous justification of the existence of Dirac points, a lot of rigorous explanations on the related physical phenomena have been extensively investigated. For example, the effective dynamics of wave packets associated with Dirac points were studied in [6,14,20,34,35,36]. The existence of edge states and associated dynamics are studied in [8,12,15]. Despite successful applications on the aforementioned analysis of the Dirac points in 2-D systems, the advances in applications such as materials sciences, condensed matter physics, placed new theoretical demands that are not entirely met. Just as Kuchment pointed out in a recent overview article on periodic elliptic operators [23], "the story does not end here". One important missing piece is the analysis of 3-dimensional degenerate points which are referred to as Weyl points. Another piece is the conical points with higher order multiplicities. In the literature, some special structures are proposed to admit Weyl points [29,32,38,39]. However, most constructions and demonstrations are based on either tight-binding models, numerical computations or formal asymptotic expansions. To the best of our knowledge, no similar construction and rigorous analysis as aforementioned literature have been given for Weyl points with higher-order multiplicities. Due to the importance and potential applications of Weyl points in quantum mechanics, photonics and mechanics, such generic analysis is highly desired. This is the goal of our current work. This work is concerned with the L 2 (R 3 )-spectrum of the following 3-dimensional Schrödinger equation where the potential V (x) is real-valued and periodic. By Floquet-Bloch theory [22,23,24], the spectrum of H in L 2 (R 3 ) is the union of all energy bands E b (k), b ≥ 1 for all k in the Brillouin zone. For some specific V (x), two energy bands may intersect with each other conically at some k * . This degenerate point k * in the three dimensional energy bands is called a Weyl point. There are different types of Weyl points depending on the multiplicity of degeneracy. In this work, we shall give a simple construction of three-fold Weyl points, i.e., two energy bands intersect conically with an extra band between them. We shall also rigorously justify the existence of such degenerate points by using the strategies developed in [17]. More specifically, we first propose a very general class of admissible potentials which are characterized by several symmetries. Different from honeycomb potentials in which the inversion and 2π/3-rotational symmetries are the indispensable ingredients, the potentials proposed in this paper have two rotational symmetries in addition to the inversion symmetry. These three symmetries together guarantee the three-fold degeneracy at certain high symmetry points and ensure conical structures in their vicinities. 
Our analysis in this work involves many novel arguments on the eigenstructures at the high symmetry points in order to explain how the 3-fold degeneracy is protected by the underline symmetries and why the Fermi velocities corresponding to different branches are the same. These important arguments are relatively trivial in the honeycomb case [17]. Our current work not only extends the theory developed in [17] to 3-dimensional systems but also shines some light on the analysis of singular points with higher multiplicities. Our analysis also provides the starting point of future theoretical analysis on these higher order Weyl points, such as the existence of chiral surface states, Fermi arcs and so on [38,39]. This work is organized as follows. In Section 2, we first introduce the lattice Λ and its dual lattice Λ * together with their fundamental cells Ω and Ω * , and then we precisely discuss the existence of high symmetry points k h in Ω * . Section 2 concludes with the Fourier analysis of Λ-periodic functions. Section 3 contains the definition of the admissible potentials characterized by several symmetries. We also review the relevant Floquet-Bloch theory for Schrödinger operators H = −∆ + V (x). In Section 4, we first propose required conditions of eigenstructures at high symmetry point W for some eigenvalue µ * , i.e., H1-H2 and their consequences. We then prove the energy bands in the vicinity form a conical structure with an extra band in the middle. In Section 5, we justify that the required conditions H1-H2 do hold for nontrivial shallow admissible potentials. Specifically, we clearly show the significance of the R and T symmetries to preserve the multiplicity of eigenvalues of H ε = −∆ + εV (x) at W while ε is sufficiently small. Moreover, the justification is extended to generic admissible potentials. Section 6 discusses the instability of the Weyl points and perturbations of dispersion bands of when V (x) is violated by an odd potential W (x). Section 7 provides detailed numerical simulations of the energy bands and Weyl points in different cases for a special choice of admissible potential. In Appendix A, we present the proofs of certain Propositions and Lemmas in Section 4 and Section 5. Notations and conventions Without specifications, we use the following notations and definitions. • For z ∈ C, z denotes the complex conjugate of z. • For x, y ∈ C n , x, y := x · y = x 1 y 1 + ... + x n y n , and |x| := x, x . • For a matrix or a vector A, A t is its transpose and A * is its conjugate-transpose. • f, g D = D f g is the L 2 (D) inner product. In this work, the region D of integration is assumed to be the fundamental cell Ω if it is not specified. Consider the following linearly independent vectors in Here a > 0 is the lattice constant. Define the lattice as follows The parameter a then gives the distance between nearest neighboring sites. The fundamental period cell of Λ is Let q 1 , q 2 , q 3 ∈ R 3 be the dual vectors of v 1 , v 2 , v 3 , in the sense that q ℓ · v j = 2πδ ℓj , ℓ, j = 1, 2, 3. Explicitly, a . Then the dual lattice of Λ is defined as The fundamental period cell of Λ * is chosen to be In this work, we are interested in the following rotation transformation R in R 3 By direct calculations, we can conclude the following proposition. 
Remark 1 By understanding Λ * k h := k h + Λ * as shifted lattices, we know that k h is a high symmetry point if and only if R leaves Λ * k h invariant, i.e., The following lemma indicates that inside the fundamental period cell Ω * , there exist precisely four high symmetry points. In this work, we only focus on the following specific high symmetry point It follows from (2.5) and (2.7) that Here q 0 := 0. Λ-periodic, Λ-pseudo-periodic functions and Fourier expansions We say that a function f (x) : More generally, given a quasi-momentum k ∈ R 3 , we say that a function F (x) : Let us introduce the Hilbert space where the inner product is Similarly, we define In particular, for k = 0, is the space of square-integrable Λ-periodic functions. Obviously, F (x) ∈ L 2 k,Λ if and only if That is, the mapping gives a one-to-one correspondence between L 2 0,Λ and L 2 k,Λ . Moreover, it is easy to see that That is, the mapping (2.11) is an isometry from L 2 0,Λ to L 2 k,Λ . Due to the Λ-periodicity of functions f (x) ∈ L 2 0,Λ , they can be expanded as Fourier series of the form where f q q∈Λ * ⊂ l 2 (Λ) is the sequence of Fourier coefficients, indexed using the discrete indexes q from Λ * . Explicitly,f where |Ω| denotes the volume of the cell Ω. Such a form (2.12) of Fourier expansions is consistent with Example 1 and is more convenient for later uses. Note that f q q∈Λ * ∈ l 2 Λ * , the Hilbert space of square-summable complex sequences over the dual lattice Λ * . where {f q } is as in (2.13). Rotations R and R * in (2.2) and (2.3) can yield a transformation R for functions Lemma 2 Let k h be a high symmetry point w.r.t. R. Then • R maps L 2 k h ,Λ to itself as a unitary operator. • Define an affine transformation R k h : Λ * → Λ * by (2.15) Then, for any ℓ ∈ Z, one has In particular, • The action R on L 2 k h ,Λ is given by Thus because R * is an orthogonal transformation and both f (y) and g(y) are Λ-periodic in y ∈ R 3 . This shows that R is unitary. Decompositions of periodic and pseudo-periodic functions In the following discussions we only consider the special high symmetry point W. Notice from (2.8) that R 4 W = I on Λ * , and R ℓ W = I on Λ * for ℓ = 1, 2, 3. Each orbit of the action R W on Λ * consists of precisely four points. Let us introduce Then functions F (x) = e iW·x f (x) ∈ L 2 W,Λ can be decomposed into (2.20) Since R 4 = I and R * 4 = I, one has R 4 = I on L 2 0,Λ . Hence eigenvalues σ of the unitary operator R must satisfy σ 4 = 1. In fact, one has Then we have an orthogonal decomposition for L 2 where the eigenspaces are Note that (2.22) also yields an orthogonal decomposition for the space L 2 Let σ be as in (2.21) and Then (2.23) By (2.18), we have we deduce from (2.23) that the Fourier coefficientsf q satisfŷ Combining with general decomposition (2.20), we have the following results. As for equality (2.26), we need only to notice in (2.25) that • We use expansion (2.25) for F (x) to obtain which is in L 2 W,σ , following from the characterization (2.25) for the eigenvalueσ. 3 Eigenvalues of periodic Schrödinger operators Admissible potentials In this work, we introduce the following admissible potentials. where T is the following matrix We remark that the requirements (2) in Definition 2 are the so-called PT-symmetry. Moreover, requirement (4) is a novel symmetry for 3-dimensional potentials which will play an important role in the later analysis for Weyl points. Admissible potentials have the following properties. Corollary 1 Let V (x) be an admissible potential. 
Then its Fourier coefficientsV q satisfŷ Remark 4 Let us consider the orthogonal matrix T in (3.1). It is easy to see that T maps the lattice Λ * to itself and T * = T −1 . Moreover, T acts on Λ * as follows Typical admissible potentials can be constructed using Fourier expansions. Example 1 Let us define real, even potentials It is easy to see that these V i (x) are R-invariant potentials. Thus, for any real coefficients c i , the potential is also R-invariant. However, V (x) is, in general, not T -invariant. In fact, by noting that is an admissible potential as in Definition 2 for any nonzero real number c. The role of the R-and T -invariance of admissible potentials V (x) can be stated as the following commutativity with the Schrödinger operator H of (1.1) we are going to study. Lemma 4 (1) Transformations R and T are isometric, i.e., The proofs are direct. Periodic Schrödinger operators and Floquet-Bloch theory Let Λ be the lattice defined in (2.1) and V : R 3 → R be an admissible potential in the sense of Definition 2. For each quasi-momentum k ∈ R 3 , we consider the Floquet-Bloch eigenvalue problem where µ(k) is the eigenvalue and the second condition is the pseudo-periodic condition for Φ(x, k). By setting we know that problem (3.2) is converted into the following periodic eigenvalue problem Here the shifted Schrödinger operator H(k) is defined via The general properties of the Schrödinger operator with a periodic potential is given by the Floquet-Bloch theory. We end this section by listing some most important conclusions of this theory without including their proofs. We refer readers to [13,17,23,24,31] for details. Proposition 2 (Floquet-Block theory) (1) For any k ∈ Ω * , the Floquet-Bloch eigenvalue problem (3.3) has an ordered discrete spectrum can be taken to be a complete orthonormal basis of The eigenvalues µ b (k), referred as dispersion bands, are Lipschitz continuous functions of k ∈ Ω * . (3) For each b ≥ 1, µ b (k) sweeps out a closed real interval I b over k ∈ Ω * , and the union of I b composes of the spectrum of H in L 2 0,Λ : Here the summation (3.4) is convergent in the L 2 -norm. Weyl points and conical intersections In this section, we are going to prove the existence of Weyl points on the energy bands of Schrödinger operators with admissible potentials that we propose in Definition 2. The strategy used in this work is inspired by the framework that Fefferman and Weinstein developed for Dirac points in 2-D honeycomb structures [17]. More specifically, (1) we first propose required conditions of eigen structure at W for some eigenvalue µ * , i.e., the conditions H1-H2 below; (2) we then prove the energy bands in the vicinity form a conical structure with an extra band in the middle under these conditions; (3) we justify that the required conditions H1-H2 do hold for nontrivial shallow admissible potentials; (4) we extend the justification of required conditions to generic admissible potentials. Compared to the study on Dirac points for the 2-D honeycomb case, the main difficulties of our current work arise from two perspectives: higher dimension and higher multiplicity. To the best of our knowledge, we have not found rigorous analysis on such degenerate points in the literature. Higher dimension makes the calculations more cumbersome. On the other hand, the higher multiplicity forces us to deal with a larger bifurcation matrix which has more freedoms which we need to reduce, for instance, the relations among the entries of the matrix. 
Some new symmetry arguments are introduced to conquer these difficulties. Spectrum structure at the high symmetry point W In this section, we are interested in the three-fold degeneracy of the high symmetry point W. So let us consider the W-quasi periodic eigenvalue problem (4.1) We first assume that there exists an eigenvalue µ * such that the following assumption is fulfilled. Then the following proposition characterizes the fine structure of the eigenspace E µ * . A direct consequence of above proposition is that µ * is an L 2 W,i ℓ -eigenvalue of multiplicity 1 for each ℓ = 1, 2, 3. In order to keep the structure of the paper, the detailed proof of Proposition 3 is placed in Appendix A. Bifurcation matrices Under the assumption H1, we always can find an orthonormal basis {Φ 1 (x), Φ 2 (x), Φ 3 (x)} for E µ * as in Proposition 3. However, the choice is not unique and a gauge freedom for each eigenfunction Φ ℓ (x) is allowed. Giving such a basis, let us define a complex-valued matrix M (κ) for κ ∈ R 3 /{0} by It is called the bifurcation matrix which appears naturally in the eigenvalue problem. We shall see in the later section that the leading order structure of the eigenvalues of H(k) for k in the vicinity of W is closely related to M (κ). In this subsection, the main properties of M (κ) and their justifications are provided. We want to remark that M (κ) depends on the choice of the basis set We consider the admissible potential V (x) in the sense of Definition 2. Recall that [H, T ] = 0 can imply T E µ * = E µ * . In other words, there exists a 3 × 3 matrix Q T such that  Recall from Lemma 4 that T : L 2 W,Λ → L 2 W,Λ preserves the inner product, i.e., T F, T G = F, G for all f, g ∈ L 2 W,Λ . It immediately follows that Q T is unitary, i.e., Q * T Q T = I. In other words, {T Φ 1 , T Φ 2 , T Φ 3 } is also an orthonormal basis of E µ * which defines a new bifurcation matrix M T (κ). Namely, Similarly, by using the symmetry R, we can define another bifurcation matrix M R (κ) and the corresponding unitary transformation Q T . In fact, it is easy to obtain However, the explicit form for Q T is unknown to us. One has the following relations for these bifurcation matrices. Proposition 4 For any where R and T are the orthogonal matrices in (2.2) and (3.1). By substituting (4.2) into (4.3), we obtain (4.6) Recall the transformation R : C 3 → C 3 has eigenpairs listed in (2.4). We can then obtain the following structural result for the bifurcation matrix M (κ). Corollary 2 (1) The quantity υ 1 υ 2 υ 3 is gauge invariant in the sense that it does not depend on the choice of the orthonormal basis of E µ * . By direct calculations, one haŝ For (2), by taking the norms in (4.15) and using equalities (4.8), we obtain This leads to the desired invariance of |υ ℓ |. Due to the equalities in Theorem 1 and the invariance in Corollary 2, let us define The quantity υ F of (4.16) is referred to as the Fermi velocity in quantum mechanics. Now we introduce another standing assumption in this paper, which can be simply stated as H2 υ F = 0. Conical structure of the spectrum near W With the eigenstructure at W, we are able to obtain the corresponding eigenstructure when quasi-momentum k is near W. The results are stated as follows. Theorem 2 Suppose that V (x) is an admissible potential in the sense of Definition 2 and consider the Schrödinger operator H = −∆ + V (x). Assume that there exists b > 1 such that µ b−1 = µ b = µ b+1 = µ * is an L 2 W,Λ -eigenvalue of H and the assumptions H1-H2 are fulfilled. 
Then, for sufficiently small but nonzero (κ x , κ y , κ z ) ∈ R 3 , eigenvalues of H satisfy where υ F is the Fermi velocity defined before, and ξ + ≥ ξ 0 ≥ ξ − are the three (real) roots of the following cubic equation Proof The proof is based on the Lyapunov-Schmidt reduction. Thanks to the eigenstructure at W and the explicit form of the bifurcation matrix which we established in last section, we now only need to do a perturbation expansion and a rigorous justification. Compared to the 2-D honeycomb case [17], we encounter more complicated computations on the bifurcation. We complete it in several steps. 1. Decomposition of spaces. For k = W, we have where µ (0) := µ * . These define a space Consider perturbation k = W + κ, where κ ∈ R 3 is small enough. From the defining equalities in (3.2), one has To study eigenvalue problem (3.3), let us decompose Here the orthogonal complement X ⊥ is taken from L 2 0,Λ . Then can be expanded as 2. Splitting of the equation using the Lyapunov-Schmidt strategy. To solve Eq. (4.19) using such a strategy, let us introduce the orthogonal projections Applying Q and Q ⊥ to Eq. (4.19), we obtain an equivalent system By using (4.19) for F (1) , we have By the assumptions of the theorem on eigenfunctions of H(W), one knows that, when restricted to X ⊥ , H(W) − µ (0) I has a bounded inverse Due to the regularity, the mapping is a bounded operator defined on H s (R 2 /Λ) for any s. From the Theorem 2, we see that the three bands intersect at the degenerate point (W, µ * ). We want to point out that there is a special direction along which two energy bands adhere to each other to leading order. Specifically, if κ ∈ n * R + with n * = 3 , the solutions of (4.18) take the form The result indicates that the three-fold degeneracy splits into a two-fold eigenvalue and a simple eigenvalue in the vicinity of the Weyl point W. We remark here that it is not clear whether the double degeneracy persists by including higher order terms of |κ|. This is an interesting problem but is beyond the scope of the current work. At the end of this section, we characterize the lower dimensional structure of the three energy bands near the Weyl point W. According to the expressions of dispersion bands µ(κ) in (4.17), we study a special case of dispersion equation (4.18) as follows. If κ arg = 0, or equivalently, either of κ x , κ y , κ z vanishes, the bifurcation equation (4.18) has solutions In the transverse plane which is perpendicular to one axis direction, the three dispersion surfaces form a standard cone with a flat band in the middle, see Figure 2 in Section 7. This is exactly the band structure of the Lieb lattice in the tight binding limit [21,27]. To the best of our knowledge, this structure has not been rigorously proved. We demonstrate its existence for our potentials in lower reduced planes. Generally speaking, in the reduced plane, the three dispersion bands do not behave the same as the above case. Note that (−κ) arg = −κ arg . Let us fix a direction n. Then where the superscripts indicate the different choices of bifurcation equations depending on the directions n or −n. We can actually construct three analytical branches of dispersion curves and each branch is a straight line to leading order. In fact, let us define Then for a fixed direction n, the three branches E j (λ), j = 1, 2, 3 are analytical in λ. Next we allow n to vary in a transverse plane. 
Namely, let n 1 and n 2 be two orthonormal vectors and consider the dispersion surfaces in the plane spanned by n 1 and n 2 . Then where |λ| denotes the length of (λ 1 , λ 2 ). Note, while λ is fixed, κ arg is a continuous variable with respect to λ 1 λ , thus ξλ i depends on λ 1 λ continuously. Consequently, (4.17) exactly admits a cone (may not be standard and isotropic) adhered by an extra surface in the middle (see Section 7 for related figures). Justification of Assumptions H1 and H2 Theorem 2 states that as long as H1-H2 hold, the Schrödinger operator with an admissible potential always admits a 3-fold Weyl point at the high symmetry point W. In this section, we shall justify the two assumptions H1-H2 can actually hold generally. We first examine shallow potentials in which case we can treat the small potential as a perturbation to the Laplacian operator. Then we can conduct the perturbation theory. The main difficulty is to prove the 3-fold degeneracy persists at any order of the asymptotic expansion. We remark that in the 2-D honeycomb case [17], the 2-fold degeneracy is naturally protected by the inversion symmetry. But that is not enough for higher multiplicity. What are the required arguments on the 3-fold degeneracy? We will answer this question in our analysis by imposing novel symmetry arguments. Remark 5 The requirement V 1,0,0 > 0 in Theorem 3 can be replaced by V 1,0,0 < 0. In the latter case, one has the second, third and fourth bands intersect at the Weyl point (W, µ * ). The proof of Theorem 3 is inspired by the methods in [26], where the 2-fold Dirac points in the 2-D honeycomb structure is studied. The main difficulty in the present case is the justification of the three-fold degeneracy of the perturbed eigenvalue µ * at W. Recall that the two-fold degeneracy is protected by the PT-symmetry of V (x) in the 2-D honeycomb case. The potential in our work also possesses the PT-symmetry so that a two-fold eigenvalue µ * at W is guaranteed. However, this is not adequate to admit the three-fold degeneracy of µ * . In fact, we need to combine T -symmetry to ensure that another eigenvalue is the same as µ * at W. This is the main difference compared to the analysis of the previous work. In the following proof, we only list the key calculations and point out the new ingredients. We begin to prove Theorem 3. 1. Recall that µ (0) = |W| 2 is the eigenvalue of the Laplacian −∆ of multiplicity 4. Moreover, µ (0) is also a simple L 2 W,i ℓ -eigenvalue for ℓ ∈ {1, 2, 3, 4}, with the corresponding Similar to [36], by applying Lyapunov-Schmidt reduction to (5.1), we obtain the expression for µ ε for sufficiently small ε We now turn to the calculation of Φ ℓ (x, W) and the coefficients (5.5) into (5.4), and noticing that V (x) is even, it follows that Here one shall notice that the O(ε 2 ) terms in µ ε 1,2 and µ ε 3 are undetermined. This means that we could not assert that µ ε 1,3 = µ ε 2 . However, it follows from (5.7) that these eigenvalues are ordered so that Then the above analysis shows that The next step is to verify that µ ε 1 is really a three-fold eigenvalue, i.e., dim E µ ε 1 = 3, with the help of the following lemma. The detailed proof of Lemma 5 is displayed in Appendix B. Remark on Weyl points in generic admissible potentials Theorem 3 studies the 3-fold Weyl points for the Schrödinger operator with shallow admissible potentials: H ε = −∆ + εV (x) for ε = 0 and small. 
In this subsection we make some remarks on the extension of these results to generic potentials, i.e., ε = O(1). Following the arguments established by Fefferman and Weinstein for the existence of Dirac points in 2-D honeycomb potentials, see [17,14], we claim that the assumptions H1 and H2 hold for some (W, µ * ) except for ε in a discrete set C of R. Consequently, the conclusions of Theorem 3 also hold, i.e., there always exists a 3-fold Weyl point, for the Schrödinger operator The main idea is based on an analytical characterization of L 2 W,λ -eigenvalue of H ε . By a similar argument on the analytic operator theory and complex function theory strategy [17,14], it is possible to establish the analogous result. Due to the length of this work, we omit the details and refer interesting readers to [14,17]. Instability of the Weyl point under symmetry-breaking perturbations In the preceding sections, we have demonstrated that the admissible potentials generically admit a 3-fold Weyl point at W. The admissible potentials are characterized by the inversion symmetry, the R-symmetry and the T -symmetry. Actually we have seen the 3-fold degeneracy at W and conical structure in its vicinity are consequence of combined actions of these symmetries. In this section, we shall discuss the instability of the 3-fold Weyl point (W, µ * ) if some symmetry is broken. More specifically, we only show the case where the inversion symmetry is broken which can be compared with the results to the 2-fold Dirac points in 2-D honeycomb case. The calculation of the case where T -symmetry is broken is very cumbersome and we shall not give detailed discussion and only give numerical examples in Section 7. Consider the perturbed eigenvalue problem where V p (x) is real and odd, and δ is the perturbation parameter which is assumed to be small. We expand µ δ and Ψ δ (x) near the 3-fold Weyl point (W, µ * ) as where Ψ (0) is the unperturbed eigenfunction corresponding to the the unperturbed eigenvalue µ * . We have stated in Theorem 2 that Calculations analogous to those in the proof of Theorem 2 can lead to a system of homogeneous linear equations for α 1 , α 2 , α 3 and M 2 includes higher order terms. Therefore µ δ is the solution for the perturbed eigenvalue problem (6.1) if and only if µ (1) Following a standard perturbation theory for Floquet-Bloch eigenvalue problems, we obtain that the solutions of (6.2) satisfy where µ is the leading order effect of the perturbation which solves the equation To understand the problem, it is key to compute the explicit form of M 1 . Note that Similarly, (6.5) Combining (6.4) and (6.5), we obtain respectively. Obversely, υ ♯ 1 is real. Let us assume that both υ ♯ 1 and υ ♯ 2 are nonzero. Substituting (6.6) into (6.3), we obtain Then we can conclude from (6.7) that the 3-fold degenerate point (W, µ * ) splits into 3 simple eigenvalues under an inversion-symmetry-broken perturbation. More precisely, The above analysis implies that the 3-fold Weyl point does not persist if the inversion symmetry of the system is broken. We also include the numerical simulations for a typical admissible potential with an inversion-symmetry-broken perturbation in Section 7, see Figure 2. It is seen that the 3 bands do not intersect at W and there exist two local gaps. We remark that if T -symmetry is broken and inversion-symmetry persists, the 3-fold degenerate point split into a 2-fold eigenvalue and a simple eigenvalue, see Figure 3 in Section 7. 
The reason is that the inversion symmetry naturally protects the 2-fold degeneracy which is similar to the 2-D honeycomb case. Due to the length of this work, we shall not include the detailed calculations while some of main ingredients can be found in our analysis to the bifurcation matrix M (κ) in Section 4. Numerical results In this section, we use numerical simulations to demonstrate our analysis. The numerical method that we use is the Fourier Collocation Method [37]. The potential that we choose is It is evident that V (x) is an admissible potential in the sense of Definition 2. According to our analysis-Theorem 2 and Theorem 3, the first three energy bands intersect conically at W. In the following illustrations, we plot the figures of first three energy bands in vicinity of W. Since the full energy bands are defined in R 3 , it is not easy to visualize such high dimensional structure. We just show the figures in the reduced parameter space, i.e., energy curves with the quasi-momentum being along certain specific directions and energy surfaces with the quasi-momentum being in a plane. We plot dispersion bands µ(k) near W along a certain direction n, i.e., µ(λ) = µ(W + λn). (7. 2) The dispersion curves µ(λ) along three different directions are displayed on the top panel of Figure 1 where we choose three different directions In the first two cases, we see that the three straight lines intersect at λ = 0, i.e., at the Weyl point. In the last example, we only see two straight lines intersect since one straight line is two-fold degenerate to leading order, see discussions in Section 5. The numerical simulations agree with our analysis given in Theorem 2. The dispersion surfaces µ(λ 1 , λ 2 ) are displayed on the bottom panel of Figure 1 where in all cases n 1 = (1, 0, 0) and respectively. From the figure, we see that the three dispersion surfaces intersect at the Weyl point. The first and third bands conically intersect each other with the second band in the middle. This result also agrees well with our analysis. We next verify the instability of conical singularity under certain symmetry-breaking perturbations. A perturbation is added to the above admissible potential (7.1). In other words, we consider the Schrödinger operator H δ i = −∆+V (x)+δV p i (x), where V p i (x), i = 1, 2 denote the perturbation potential and δ a small parameter. In our simulations, we choose δ = 0.01. • We first examine the role of PT-symmetry. The perturbation that we choose is Obviously, V p 1 is odd and thus breaks PT-invariance of V (x). We plot the same energy band functions of H δ 1 as shown in Figure 2. We see that the three energy band functions separate with each other and two gaps open. • To see the significance of T -invariance in the formation of three-fold conical structures, we consider the perturbation V p (x) which breaks T -invariance. In our simulation, we choose Obviously, the perturbed potential (7.4) possess R-invariance and PT-invariance, but does not have the T -symmetry since T q 1 = q 3 − q 1 . As before, we display the energy curves and surfaces near W in Figure 3. It is shown that the original three-fold degenerate cone structure disappears and breaks into one simple and one double eigenvalue. The nearby structure near the double eigenvalue is not naturally conical. It may correspond to other interesting phenomena but is beyond the scope of this paper. respectively. 
[Figure caption, bottom panel] Energy surfaces µ(λ 1 , λ 2 ) with the quasi-momentum in the plane spanned by two directions n 1 , n 2 , where n 1 = (1, 0, 0) and n 2 equals (0, 0, 1) in panel (d) and two further directions in panels (e) and (f). The three energy bands intersect conically at the origin, i.e., at the Weyl point.
[Figure caption] The setup is the same as that in Fig. 1. The 3-fold degenerate point splits into a two-fold and a simple eigenvalue. The two-fold degeneracy comes from the inversion symmetry of the system, which is kept. There is no general conical structure near the perturbed two-fold degenerate point.
A Proof of Proposition 3
The purpose of this appendix is to give a detailed proof of Proposition 3. We first prove the following lemma.
Lemma 6 Let µ * be an eigenvalue of H(W) of eigenvalue problem (4.1) with the corresponding eigenspace E µ * . If E µ * ⊂ L 2 W,i ⊕ L 2 W,−i , then dim E µ * is even.
• Case 2: One of c 1 and c 3 is nonzero and the other is zero, say c 1 ≠ 0 and c 3 = 0. Then we have equalities Since the multiplicity in L 2 W,i is one, we conclude from the last equality that Φ ′′ 1 (x) = αΦ 1 (x) for some α. Consequently, we conclude from the first equality that Φ ′′ 2 (x) ∈ E µ .
• Case 3: Both c 1 and c 3 are nonzero. Based on the decomposition (A.1), one has where k 1 = c 1 (i + 1) and k 2 = c 3 (1 − i). By the symmetry, we have the following conclusion.
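As a rough, self-contained illustration of the kind of computation behind the band plots in Section 7, the sketch below assembles a truncated plane-wave (Fourier) discretization of H(k) = −∆ + V and diagonalizes it along a line of quasi-momenta through a high-symmetry point. The reciprocal lattice, the Fourier coefficients of V, the truncation radius and the location of the point labelled W here are illustrative assumptions only; they do not reproduce the admissible potential (7.1), the lattice, or the Fourier collocation implementation used in the paper.

```python
# Minimal plane-wave sketch for Bloch bands of H(k) = -Delta + V on a periodic
# lattice.  All concrete choices (reciprocal basis, potential coefficients,
# cutoff, "W" point) are assumptions for illustration, not the paper's setup.
import itertools
import numpy as np

B = np.eye(3)                       # assumed reciprocal lattice basis (simple cubic)

# Assumed real potential: a few Fourier coefficients V_G with V_{-G} = conj(V_G).
V_coeffs = {(1, 0, 0): 1.0, (-1, 0, 0): 1.0,
            (0, 1, 0): 1.0, (0, -1, 0): 1.0,
            (0, 0, 1): 1.0, (0, 0, -1): 1.0}

N = 2                               # plane-wave cutoff: |m_i| <= N
ms = list(itertools.product(range(-N, N + 1), repeat=3))
G = np.array([B.T @ m for m in ms]) # reciprocal lattice vectors in that basis

def bloch_bands(k, n_bands=4):
    """Lowest Bloch eigenvalues of -Delta + V at quasi-momentum k."""
    dim = len(ms)
    H = np.zeros((dim, dim), dtype=complex)
    H[np.diag_indices(dim)] = np.sum((k + G) ** 2, axis=1)   # kinetic part |k+G|^2
    for i, mi in enumerate(ms):                               # potential part V_{G_i - G_j}
        for j, mj in enumerate(ms):
            d = tuple(int(x) for x in np.subtract(mi, mj))
            if d in V_coeffs:
                H[i, j] += V_coeffs[d]
    return np.linalg.eigvalsh(H)[:n_bands]                    # eigenvalues in ascending order

# Dispersion along k = W + lam * n through the assumed high-symmetry point W.
W = B.T @ np.array([0.5, 0.5, 0.5])
n = np.array([1.0, 0.0, 0.0])
for lam in np.linspace(-0.2, 0.2, 5):
    print(round(float(lam), 3), bloch_bands(W + lam * n))
```

Plotting the printed eigenvalues against λ for many values of λ and several directions n would produce dispersion curves analogous to those described for the top panels of Figure 1.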
Towards an Underground Utilities 3D Data Model for Land Administration : With the pressure of the increasing density of urban areas, some public infrastructures are moving to the underground to free up space above, such as utility lines, rail lines and roads. In the big data era, the three-dimensional (3D) data can be beneficial to understand the complex urban area. Comparing to spatial data and information of the above ground, we lack the precise and detailed information about underground infrastructures, such as the spatial information of underground infrastructure, the ownership of underground objects and the interdependence of infrastructures in the above and below ground. How can we map reliable 3D underground utility networks and use them in the land administration? First, to explain the importance of this work and find a possible solution, this paper observes the current issues of the existing underground utility database in Singapore. A framework for utility data governance is proposed to manage the work process from the underground utility data capture to data usage. This is the backbone to support the coordination of different roles in the utility data governance and usage. Then, an initial design of the 3D underground utility data model is introduced to describe the 3D geometric and spatial information about underground utility data and connect it to the cadastral parcel for land administration. In the case study, the newly collected data from mobile Ground Penetrating Radar is integrated with the existing utility data for 3D modelling. It is expected to explore the integration of new collected 3D data, the existing 2D data and cadastral information for land administration of underground utilities. Introduction Rapid urbanization creates a strong need to optimize land use in densely populated cities. Attention is thus shifting from the very limited available space above ground to generation and increased use of underground spaces. Comparing to the above ground, underground is an unseen space. The trench for the building and maintenance of underground infrastructure costs a lot of money, as well as faces high risks. A prerequisite for including the underground in urban planning is the availability of sufficiently complete, accurate and up-to-date 3D maps of the underground. However, such maps are not yet widely available, if at all, and the required data acquisition is much more challenging than for spaces above ground. To observe the existing data, we zoom in to a corner of the Marina Bay region. Figure 2a presents five layers of different power grid networks. In the real world, the five different power grid networks may be located at the same place and different depths. However, these data have the same x, y value in the database, which makes them impossible to identify in the vertical space and distinguish them in 2D. All of the existing data are as-build data. We can not trust them to present the real situation of underground utility networks. From Figure 2b, the limited attributes are provided from the current database. Only the main water pipes have a diameter. Most of them have 2D geospatial information. In addition, data owners have more details of existing utility data. However, most of them are 2D data as well. Depending on the requirement of the application, some data owners try to collect 3D data. There are some issues during the data capture to usage. 
Without the utility survey standard, some of them only use the traditional survey method to get the 3D points data of pipelines and overlay on the existing data. Nobody can guarantee the quality of these data. Meanwhile, because of the limitation of the existing data model, it is difficult to integrate 3D data with the existing 2D data. Update cycles were observed to be infrequent and slow, which is once per six months. We not only need time information in the data model to maintain utility database frequently, but also should improve data governance procedures for updating. In general, some issues prevent these data from being sufficient for urban planning, land administration, and on-site work. In fact, many existing databases, not only the ones in Singapore, contribute incompletely to the spatial understanding of the underground because of similar restrictions. In particular: (a) An example of power grid data (b) The attributes of existing utility data • The data are often only 2D i.e., lacking depth information entirely, or 2.5D (i.e., featuring depth as an attribute to a horizontal position rather than as an independent coordinate. Furthermore, the depth information may be sparse with depths measured at few locations only, e.g., at accessible manholes, and it may be ambiguous because it is not always clear whether the values represent depth relative to a specific surface with unknown elevation or height relative to an established height datum. • It is unknown whether the data represent the current situation, the possibly different as-built state, or just the as-designed state. Furthermore, the geometric accuracy and the completeness of the area often unknown. • Much of the attribute information (e.g., diameter, material, installation date) required to support specific applications is not available or does not represent the appropriate level of detail. • There is a lack of standards for organizing the data and semantic information of underground utilities, impairing data sharing and use of the shared data. Overall, the reliable and accurate 3D data of utility networks are sorely demanded. Therefore, the Singapore-ETH Centre together with the SLA and the Geomatics Department of the City of Zürich have started a related project under the name "Digital Underground" [2]. The initial goals of this project are to develop a road map, a data model and a concept for deriving a unified and complete 3D map of the relevant underground structures (in particular of utilities and spaces like corridors or tunnels). Collecting best practices for underground utility mapping is a special focus within the project. Figure 3 describes the workflow of data governance for 3D underground utility mapping. In the data capture, different types of survey techniques (e.g., Ground Penetrating Radar (GPR), Gyro-based system) are explored and compared to find the optimal underground utility survey approach. After the data processing, the newly collected data should be integrated into the existing database aiming to improve the information of underground utility. As the backbone of the 3D underground utility map, the 3D consolidated database of underground utilities should be developed for data storage. This is a loop workflow. The data capture could improve and update the database. At the same time, the underground utility database should provide information to support data capture. In order to organize these four steps, we need two main components in the data governance. 
One is the framework to manage different roles and communication between them in data governance. The other is the underground utility data model, which is a conceptual model to describe the structure and content of geodata independent from the used hard-and software systems. It will provide the standard for the presentation of geometrical information, data quality management and various applications. This paper focuses on the design of the framework of data governance and underground utility data model. To ensure legal compliance, efficiency, and resilience of these utility networks, the reliable 3D underground utility data could shed light on their ownership and operation [3]. Then, the underground utility data can be used in various applications. To provide sufficiently and consistently accurate information about underground utilities, it is necessary to fill the gap between engineering practices and mapping disciplines. Meanwhile, we need to find the solution for how to use the existing data and integrate it with newly collected data. Here, we focus on underground utilities, ignoring other underground structures that eventually need to be represented in the same 3D database as the utilities. This work aims at bridging the gap between underground utility surveying and data governance for land administration. Our proposal addresses the following: • The organization of different phases and roles from data capture to usage. It is necessary to make a clear definition of different roles. During this work process, the communication between different roles (e.g., data producers, owners and users) is very important. • Different roles have different rights to access, change, delete or add data. These permissions must be defined and maintained administratively. • Building and updating the 3D map of the underground requires integration of datasets of a different type, quality and source. Data may originate from recent surveying e.g., using GPR or self-contained sensors tracking their movement through a pipe. Data for building a map may also be derived from other databases. This integration requires handling various data formats, and quantifying and properly taking into account the respective data quality. • The underground data need to be convertible into the data formats required by a variety of different applications and end users without loss of relevant information. Subsequently, we first introduce related works on 3D underground utility data acquisition and review the underground utility data governance for land administration in some countries or regions. In Section 3, we propose a framework to resolve the above issues about data governance and explain the design of a 3D underground utility data model. In Section 4, we briefly summarize a Singapore case study covering the work process from large scale GPR-based data acquisition to 3D visualization. We conclude with a summary and an outlook on future work. The technologies for 3D underground utility data acquisition Information about the buried utility networks can be retrieved without any excavation underground utility mapping using non-destructive technologies. However, this is more challenging than above ground mapping. Established approaches for surveying (e.g., photogrammetry, laser scanning, total station measurements or global positioning system) require clear line-of-sight between the instrument and the points to be measured, or between these points and the satellites. 
They are applicable to (parts of) utility networks while those are exposed in an open pit, e.g., during construction. In some special cases, and with considerable effort, it may even be possible to use such technologies inside buried utilities. However, underground utility mapping comprising detection, location and identification of buried utilities requires approaches without excavation [4,5]. Subsurface geophysical technologies [6,7], such as Ground Penetrating Radar or Electromagnetic Locators, can be used for this purpose [5,8]. In addition, a gyroscope-based system [9] is available for measuring the trajectory of certain utilities (newly laid pipelines with a suitable radius through which the measurement system can travel). Table 1 lists the technologies used for utility mapping with a general review of their accuracy. As positioning using a GPR requires manual processing, manufacturers typically do not mention any type of horizontal or depth measurement accuracy. However, surveying standards such as PAS128 [10] provide some accuracy indications for GPR. According to PAS128, a horizontal accuracy of 250 mm or + 40% of detected depth (whichever is greater) can be achieved when using one of Pipe and Cable Locator (PCL) and GPR, and a horizontal accuracy of 150 mm or + 15% of detected depth (whichever is greater) when using both. Additionally, PAS128 indicates that a depth measurement accuracy of 40% of buried depth can be achieved when using one of PCL and GPR, and 15% of buried depth when both are used. However, PAS128 does not elaborate on how these numbers have been established. In this paper, we focus on GPR due to its popularity in underground utility mapping [5] and on a gyroscope-based system as it is not limited by the depth of the pipeline, by other utilities nearby or by electromagnetic disturbances [9]. GPR is a widely used technology for characterizing structures in the underground. It is based on recording the delay and power of electromagnetic (EM) signals scattered and reflected at discontinuities of the permittivity. Such discontinuities are associated with differences in materials or differences in material properties allowing for detecting, e.g., man-made objects, holes, and layers of different composition or water content in the underground [11,12]. GPR is used for a variety of applications, among them geophysical exploration, archaeology, and inspection of buried utility networks [13,14]. Depending on the type of transmitted signals, impulse radar systems and continuous wave radar systems that are distinguished, with the former being more common [15]. The penetration depth, i.e., the maximum depth at which discontinuities can be detected using GPR is on the order of a few centimeters to a few tens of meters, depending on the soil characteristics, transmission power, signal stacking time and the frequency, which typically ranges from 10 MHz to 4 GHz. Lower frequencies require bigger antennas but facilitate higher penetration depths. Higher frequencies, on the other hand, yield better spatial resolution and thus allow for correctly locating smaller objects or distinguishing objects at smaller distances [13]. 3D information is obtained by moving the radar antennas along the ground surface, recording data quasi-continuously, and subsequently analyzing the data tomographically. Figure 4a shows two examples of GPR instruments, one being integrated with a mobile mapping trailer, and the other one a manually pushed cart. 
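As a concrete illustration of how recorded GPR signal delays translate into depths, the short sketch below applies the standard two-way travel-time relation with the ground wave speed approximated as c/√εr. The relative permittivity used in the example is an assumed, soil-dependent value; in practice it must be calibrated per site (for example from hyperbola fitting or known targets), and the estimated depths inherit that uncertainty.

```python
# Back-of-the-envelope GPR depth estimate from two-way travel time.
# The relative permittivity below is an assumed, soil-dependent figure; it is
# normally calibrated on site, and wet or clay-rich soils can differ strongly.
C_VACUUM = 0.2998  # speed of light in vacuum, metres per nanosecond

def gpr_depth(two_way_time_ns: float, rel_permittivity: float) -> float:
    """Approximate reflector depth in metres for a given two-way travel time."""
    velocity = C_VACUUM / rel_permittivity ** 0.5   # wave speed in the ground, m/ns
    return velocity * two_way_time_ns / 2.0         # halve: signal travels down and back

# Example: a reflection at 20 ns two-way time in soil with assumed eps_r = 9
print(gpr_depth(20.0, 9.0))   # ~1.0 m
```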
Although GPR measurement can be very accurate, the responses may vary according to the measurement. A so-called B-scan (i.e., a 2D distance-depth representation of the underground) (see Figure 4b for an example) can be very challenging and is normally done by an experienced radargram analyst. The experience can be generated from a series of signal traces along a trajectory. B-scan normally represented by black and white colours indicative of the different signal strengths and polarities of the objects. These signals are analyzed for anomalous responses. If the positions of these anomalies form a linear line, it is interpreted as a utility feature. The interpretation of B-scan is subjected to the expertise of the radargram analyst or GPR specialist. Such interpretation experience can be gained from a regularly used system of proper training provided by the manufacturer or consultant. Gyroscope-Based Systems Utilities with a diameter of more than about 5 cm through which a probe can travel may be accessible to mapping with an inertial measurement unit (IMU). The IMU measures the 3-axis acceleration and 3-axis rotation rates that can be integrated over time yielding position and orientation changes of the unit. If the unit is mounted within a probe and the probe travels through the utility (typically a pipe), it can record the trajectory of the probe-and thus the 3D coordinates of points along the axis of the utility [9]. The potential benefits of such a measurement system are that (i) it can acquire the as-built information of the suitable utilities even if they are buried at a depth exceeding the penetration depth of GPR, (ii) the location can be geometrically more accurate than using above-ground measuring technologies for the location of underground structures, (iii) it can acquire data irrespective of the properties of the surrounding underground (e.g., soil composition, water content) and of electromagnetic fields, and (iv) that the probe can be equipped with additional sensors capturing more information than just the coordinates (e.g., diameter, the radius of curvature, corrosion). Major disadvantages are that (i) only pipes with sufficient diameter, sufficient minimum radius or curvature and accessibility can be measured, (ii) depending on the measurement system, the pipe needs to be empty during the measurement i.e., the service of the utility is interrupted, (iii) the accuracy of the 3D coordinates degrades rapidly with time such that only short parts of the utility, with known coordinates of the start and end point, can be measured if high accuracy is needed, and (iv) additional provisions may be required, e.g., short periods through which the probe remains stationary while moving fast at others. Figure 5b shows an example of such a probe and a 3D map of utilities mapped using it. At present, GPR seems to be of paramount importance for mapping the underground utilities. However, there are others' current technology that overcomes the shortcoming of GPR available on the market, such as laser scanning or gyro-based system. No single detection technique can detect the entire type of utilities in every location. Hence, GPR is not the only solution for underground utility mapping, as using more technologies increases the detection capability, coverage, efficiency and accuracy. 
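Returning to the gyroscope-based probes described above, the following stripped-down sketch makes their drift behaviour concrete by double-integrating acceleration samples into a trajectory. It deliberately omits the orientation propagation from the rotation rates, sensor fusion, and the zero-velocity or known start/end-coordinate updates that a real probe relies on; the sampling interval and bias value are assumptions chosen only to show how quickly an uncorrected sensor bias dominates the position error.

```python
# Naive dead-reckoning sketch: double-integrate accelerations into a trajectory.
# Real gyro-based pipe probes additionally integrate rotation rates, fuse known
# start/end coordinates and apply zero-velocity updates; this minimal version
# only illustrates why position error grows quickly if the probe is not
# re-referenced along the way.
import numpy as np

def integrate_trajectory(accel, dt, v0=np.zeros(3), p0=np.zeros(3)):
    """accel: (n, 3) array of accelerations in the navigation frame [m/s^2]."""
    v, p = v0.astype(float), p0.astype(float)
    positions = [p]
    for a in accel:
        v = v + a * dt            # first integration: velocity
        p = p + v * dt            # second integration: position
        positions.append(p)
    return np.array(positions)

# Example: a constant 0.01 m/s^2 accelerometer bias alone drifts roughly
# 0.5 * bias * t^2 = 0.5 m after 10 s, far above centimetre-level mapping needs.
bias_only = np.tile([0.01, 0.0, 0.0], (1000, 1))   # 1000 samples at dt = 0.01 s
print(integrate_trajectory(bias_only, dt=0.01)[-1])
```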
Irrespective of the data acquisition technologies chosen, the information extracted from the measurements, in particular 3D locations, needs to be integrated with attributes of the respective utilities, e.g., type and dimension, in a geospatial database to support 3D visualization, urban planning and other applications. The Review of Underground Utility Data Governance Some utility data models have been developed for storage, visualization, exchange, and analysis in the geospatial domain. Obviously, the general data model is not enough to reach all the requirements from different users. In order to develop the 3D data model for the land administration of underground utilities, this work reviews the underground utility data governance in land administration from some countries and the existing data models that are related to underground utility networks and land administration. Underground Utility Data Governance for Land Administration The rapid urbanization and increasing complexity of urban spaces worldwide present an urgent need to provide much more and precise information for land usage. Obviously, 2D cadastral information and visualization are not enough for current land administration. During the past decade, a number of works have been conducted to study on the 3D cadastre from various aspects, such as legal, organization and technique [16][17][18]. The Land Administration Domain Model (LADM) [19] is an important legal framework to define and integrate concepts and terminology of Land Administration for 3D representations. As an international standard, the LADM provides a flexible conceptual schema from three main aspects: organizations, rights and spatial in formations [17]. The integration of 2D and 3D information in the LADM can provide solutions for 3D cadastre. The LADM has two classes (LA_LegalSpaceutilityNetworke and ExPhysicalUtilityNetwork) specifically describe information about the underground utility, which is not enough to define the 3D geometric and topological characteristics and support to land administration of underground utility. In recent years, some researchers or government agencies have begun to consider the cadastre for underground infrastructures. To analyze the impact of 4D cadastres in the registration of underground utilities, Döner et al. [20] compared the physical and legal registration of utilities in three countries (Turkey, the Netherlands and Queensland, Australia). Obviously, all of them are supported by a 4D cadastral registration. Pouliot and Girard [18] provided a discussion about the integration of underground utility networks in the land administration system. Based on the case study of Quebec, they discussed three key questions in the following: • Do we need to register underground objects? • Should underground networks be registered in the Land Register, with the same specifications as land parcels? • Which information should be part of the registration process? Some countries and institutions have implemented or at least conceptualized the 3D mapping of underground utility network and their management in a related cadastral system. Until now, a few countries have utility data with cadastral information and related legislation, includes Switzerland, The Netherlands, Turkey, United Kingdom, Serbia, Sweden, Croatia [21,22]. 
In Switzerland, the Canton of Zürich started to establish a comprehensive Canton-wide utility cadastre map based on the Cantonal Act on Geoinformation of 2011 [23], derived from the Federal Act on Geoinformation of 2007 [24] and the Cantonal Regulation on Utility Cadastre of 2012 [25]. The regulation sets a deadline for each municipality to deliver and maintain a digital utility map latest until 2021. The City of Zürich has its own utility cadastre since 1999 and set up a governance framework with the corresponding utility providers [26]. Figure 6 shows an example of the utility map of the City of Zürich. The utility cadastre is a subset of the utility documentation of the utility owners. The most important media are included: gas, water, sewage, district heating, power, and telecommunications. SIA 405 [27] is a well-defined standard by the SIA (Swiss society of engineers and architects) for the exchange and publication of utility data. The data model LKMap, part of SIA 405, was introduced to define a unified visualisation/presentation of the utility map. The data are automatically delivered through well defined interfaces at least once a week by the utility owners to the cadastre operator (GeoZ) (central data storage). The utility owners are surveying and using partly 3D coordinates. During the exchange of information between owners and the operator, the information is not yet considered. A number of laws related to the exchange of information on utility location exist in the Netherlands. In 2018, the law for storage and exchange of underground utility information was amended. To accommodate the changes introduced by that law as well as the EU INSPIRE guidelines, the KLIC-WIN program was launched. KLIC-WIN is a program (initiated by the digging sector in the Netherlands) that guides, develops and implements changes triggered by the introduction of both the WIBON, which is the law on information exchange of above ground and underground networks, and the new EU INSPIRE guidelines for utility network information retrieval. KLIC-WIN aims to introduce some changes that are required to comply with the new WIBON law and INSPIRE guidelines: • Representation of utility information according to a new information model, • The ability to (optionally) centrally store utility information at Kadaster, • The gradual change of utility data formats for delivery to end users (from raster now to vector data in 2019 and/or beyond). Furthermore, Serbia extends its LADM based country profile to include utility information for utility network cadastre [28]. Based on this data model, they will develop a system to register and maintain the ownership of the underground utility network. The United Kingdom began the registry of underground utilities and created a national underground assets mapping platform in 2018. The register aims to show where electricity and telecom cables, and gas and water pipes are buried and is intended to prevent both accidents and disruption to the economy. In Croatia, the utility cadastre information contains the type, purpose, basic technical features, and location of built utility lines, and lists the names and addresses of their managers [29]. The Croatia changed the law to organize the physical registration of utilities at a national level since 2016 [30]. Moreover, Canada has developed 3D maps of underground utility networks as well [18,31]. 
In general, some countries have 2D visualization of utility networks on cadastral map, legal document about utility data governance, registration of legal ownership of utility networks by law. Most of them begin to develop the 3D/4D utility cadastre. All of the current work is just beginning and ongoing. This has been a new challenging topic in recent years. The 3D Data Model for Underground Utility Networks The CityGML utility network Application Domain Extension (ADE) [32] focuses mainly on three aspects: (i) the general 3D geometric of network components; (ii) the 3D topographical structure of the entire utility network; and (iii) the functional information of different types of the network [32,33]. Based on the general concepts of the utility network, different types of utility networks can be implemented with their specific function [32]. Moreover, the interdependence between utility network features and city objects can be presented in 3D space [34]. Because this data model is an extension of CityGML [35], which is the popular standard for 3D city modelling (e.g., building), it is beneficial to integrate information of utility networks to the infrastructures to support urban planning and the other city studies. However, it does not consider the accuracy of the data. Some works begin to extend the existing data model to consider many more details about utility networks, such as [36], represent geographical uncertainties of utility locations based on CityGML Utility Network ADE. The Industry Foundation Classes (IFC) utility model [37] is an ISO standard for data exchange of buildings in the architecture and civil engineering domain [32]. In the utility part, it describes 2D and 3D geometries of utility elements. Meanwhile, two different ways of connection are defined to describe the relationship between supply service components within the building, which is a logical and physical connection. In addition, it has a comprehensive semantic definition of utility network objects. However, this standard only focuses on the building level and lacks spatial information. The INSPIRE Data Specification on Utility and Government Services-Technical Guidelines [38] organize the basic information of utility networks and administrative services of utility networks in a city or country range. It is a part of INSPIRE, which is a standard of the European Union to describe the spatial information of infrastructures. However, the INSPIRE Utility networks are lacking a definition of 3D geometric information of utility networks. ESRI Utility Network model [39] provides a GIS-based utility solution to represent the basic logical and physical structure of all types of utility networks, which is composed of edges and junctions. This model is a general utility data model to represent the 2D geometric information and connections of the utility networks. Until now, there has not been an international standard that has been widely used for 3D modelling of underground utility [40]. Although some existing standardized data models have been developed to integrate multi-utility networks, they can not guarantee the information to be reliable [3]. In order to develop a comprehensive utility database, we have the challenge to integrate different types of utility datasets from multiple surveying techniques, as well as the existing 2D data. Table 2 compares four popular utility data models relevant to the objectives of this work. 
Obviously, most of the existing utility data models are to focus on the 3D representation, and include 3D geometric and topological information. The existing data models provide a good reference to describe the geometric and spatial information of utility networks in 3D. Nevertheless, none of them considers the accuracy of data of underground utility networks. On the one side, the survey technique directly impacts the data accuracy. However, industry service providers are not usually aware of these extensive standards [3]. On the other side, different applications might use data at different levels of accuracy. Hence, we need an ideally 3D utility data model to support mapping procedures and control accuracy of underground utility network data. On the basis of their discussion and the situation of Singapore, it is necessary to register the utility segments as the legal objects in the land administration system, which helps to identify the ownership of underground utility. An integral approach needs to be developed based on legislative and technology solutions. It is essential to establish a degree of reliability and consistency between data produced by different service providers. It is essential to standardize the practices regarding the use of those techniques and various information management. In the underground utility data model, land parcel, as an important role in the land administration, should be connected to the underground utility networks [18,21]. A Framework for Utility Data Governance From data capture to usage, the whole work process includes several participants in different stages. Hence, in order to improve the communication between different organizations at each phase, our previous work [3] proposed a framework for underground utility data governance. After observing the current work process in Singapore, this framework has been improved to organize the entire work process (Figure 7). This framework consists of five roles that are listed in the following: • The data producer is the surveying constructor and/or surveyor in the data regulatory bodies' organization. In the utility survey phase, the data producer captures data in the field work and submits data to the utility network database. • The data owner manages their collected data. This role could be companies or data regulatory bodies. • Data regulatory bodies are government agencies, such as SLA or Public Utilities Board (PUB) of Singapore. They manage their utility data based on their utility network data model. The data regulatory bodies should provide clear permission for data integrator to use and the predefined subset of utility data. • The data integrator integrates all kinds of utility network data and manages the utility cadastre information in a city or country. In the phase of utility cadastre management, the data integrator should provide the required information for the application to users. This role builds a bridge between the data regulatory bodies and users. • Data users can use utility data for utility cadastre management applications. In this work process, the surveyor as data producer captures data during the field work. After that, the data will be submitted to data owner (e.g., PUB) who needs to manage their own utility networks data. According to the requirements of government, the utility data will be submitted to data regulatory bodies (e.g., PUB and SLA). There are two options for data submission. 
A general utility network data model will be designed as a standard to manage underground utility data for data regulatory bodies. If the data regulatory body does not have any utility data model, they can use this standard data model. If they have their utility data model, they can continue to use it or change to use the standard one. A consolidated 3D utility data model will be designed to support utility cadastre management. The data integrator (e.g., SLA) needs to integrate data of different kinds of utility networks. The LADM plays as a connection component to build a relationship between the general utility network data model and utility cadastre data model in the utility cadastre management. Meanwhile, the LADM will connect the underground utility network to the land administration of above ground. Finally, the underground utility data model should support applications in land administrative management. 3D Underground Utility Data Model for Land Administration Current work focuses on the conceptual design of a 3D underground utility data model and connects it to land administration. In order to understand the demands of underground utility data users, a workshop was organized to learn the work process and needs of land administration in Singapore. This studying includes four application domains: land acquisition and purchase, planning and coordination, land transfer and sale, and land leasing. Currently, the existing data sources are the hardcopy of the utility network, 2D CAD and 2D geospatial information. There is an urgent demand of 3D geospatial information of underground utility and space to evaluate underground environment and support reallocation, land sales and the other applications. Therefore, the 3D underground utility data model includes three packages to organize the basic information and structure of utility networks, utility survey information, and the land administration information (Figure 8). In order to connect the 3D underground utility data model to the information of land administration, these three packages inherit from the Singapore cadastral data model and LADM (ISO 19152). Meanwhile, the geometric and spatial definition are inherited from the spatial schema data model ( [41]). The Utility Networks package describes the basic information of utility networks, which includes geometric, spatial and physical information. Based on the partonomy (part-whole) relationships, this work defines the hierarchy of utility networks in three levels ( Figure 9). The macro-level is the whole utility networks, which is described by the UtilityNetwork class with the basic information of utility networks, such as the type, and material of utility networks. The meso-level is the surface of the utility networks, which is the part of the utility networks. The surface could be the tunnel, duck, manhole and the other types of space in the utility networks. Hence, the aims of UtilityNetworkSurface class are to describe the types and 3D geometric information (e.g., diameter) of surface. The micro-level is the basic elements of utility networks, which includes nodes and segments of utilities. The node is a connection point in the network, which is defined by the UtilityNetworkNode class. The segment is the line segment of the utility, which is defined by the UtilityNetworkSegment class. The relationship between micro and meso level helps to transform 2D to 3D data as well. 
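To give a feel for how the three-level hierarchy of this package could look once implemented, the sketch below renders the macro, meso and micro level classes as plain Python dataclasses. The class names follow the conceptual model described above; the concrete attribute names, types, optional fields and the example values are illustrative assumptions and do not reproduce the full UML definition, the survey package, or the LADM linkage.

```python
# Illustrative rendering of the macro/meso/micro hierarchy as dataclasses.
# Attribute names and types are assumptions made for the sake of a runnable sketch.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point3D = Tuple[float, float, float]   # x, y, z (depth as a true third coordinate)

@dataclass
class UtilityNetworkNode:              # micro level: connection point in the network
    position: Point3D

@dataclass
class UtilityNetworkSegment:           # micro level: line segment of a utility
    start: UtilityNetworkNode
    end: UtilityNetworkNode

@dataclass
class UtilityNetworkSurface:           # meso level: tunnel, duct, manhole, ...
    surface_type: str
    diameter_mm: Optional[float] = None                 # 3D geometric information
    segments: List[UtilityNetworkSegment] = field(default_factory=list)

@dataclass
class UtilityNetwork:                  # macro level: the whole utility network
    network_type: str                  # e.g. "water", "power", "sewerage"
    material: Optional[str] = None
    surfaces: List[UtilityNetworkSurface] = field(default_factory=list)

# Example: one water pipe consisting of a single surface with one segment
pipe = UtilityNetwork(
    network_type="water",
    material="ductile iron",
    surfaces=[UtilityNetworkSurface(
        surface_type="pipe",
        diameter_mm=300.0,
        segments=[UtilityNetworkSegment(
            start=UtilityNetworkNode((0.0, 0.0, -1.2)),
            end=UtilityNetworkNode((35.0, 0.0, -1.4)),
        )],
    )],
)
print(pipe.network_type, len(pipe.surfaces[0].segments))
```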
Figure 10 shows the relationships of different classes in the Utility Network package and basic attributes of each class. The values of utility networks type inherit from LA_LegalSpaceUtilityNetwork in the LADM (ISO 19152) [19]. The LA_UtilityNetworks class aims to describe the land administration information of utilities. On one side, it connects to the utility network surface in order to identify the land administration information of different parts of utility networks. On the other side, it connects to the cadastral parcel from the Singapore cadastral data model and LADM [19]. The spatial relationship is used to describe the relationship of cadastral parcels and utilities, which includes contain, cross and touch. This class could support ownership management of utilities and land administration management. The Utility Survey class aims to organize utility survey information. It could help to manage survey status and accuracy of data. The Utility Survey class inherits attributes of the survey from the Singapore cadastral data model. Furthermore, the ground conditions and survey methods are related to the accuracy of data directly. Hence, the Utility Survey class integrates information from Standard and Specification for Utility Survey in Singapore [42]. Meanwhile, the Utility Survey class builds the connection between utility networks and LA_Point, LA_BoundaryFace and LA_SpatialSource in the Surveying and Representation package. The Evaluate attribute describes the method to check the accuracy of surveying data. If the accuracy of the data is unknown, the value of Evaluate is null. In future work, the accuracy level should be defined to be based on the depth level, soil condition and survey method. Case Study This initial study aims to integrate of GPR data and the existing underground utility data and land cadastral data in the form of the geospatial database. It aims to find a reasonable work process to bridge the gap between data capture and application. Moreover, this implementation can help to improve the design of a 3D data model for underground utility. Study Area and Datasets This initial study was conducted at around Lorong 2, 3 and 4 at Toa Payoh, which is located in the northern part of Singapore. This is one of the pilot study sites in our project to deploy a mobile mapping platform, namely Pegasus: Stream (https://idsgeoradar.com/products/ground-penetratingradar/pegasus-stream) combines a Stream EM GPR (IDS Georadar, part of Hexagon, Switzerland) and Leica Pegasus Two (Leica geosystem AG, part of Hexagon, Switzerland) photo and laser scanner for massive 3D mapping of above and underground features. The data captured by the Pegasus: Stream is geo-referenced using an on-board GNSS receiver and IMU and a distance measurement instrument (DMI). The Stream EM GPR contains a large number of array antennae, with dual frequencies (200 MHz and 600 MHz). The antennae transmit and receive in two distinct polarizations (HH and VV), allowing the reconstruction of a 3D underground utility network with a single pass of the GPR. Table 3 shows the technical specification of the Stream EM GPR. The scanning site is a 1.8 km long bi-directional 4-lane asphalt road in an inland area of Singapore that has seen development since the 1960s. This study was conducted to investigate the feasibility of GPR for large scale underground utility mapping for the purpose of improving the quality of existing utility map information. The data were collected at a driving speed of about 15 km/h. 
All of the acquired data were post-processed and interpreted to detect and extract underground utilities using a commercial off-the-shelf processing software along with the GPR system. At the current stage, we do not use point cloud data of the above ground. The identified utilities were then transferred to CAD/GIS format with x, y, z values as points and lines for 3D data modelling and visualization using the same processing software. Figure 11 shows an example of GPR data in CAD (Figure 11a) and GIS (Figure 11b) format. The existing datasets from Geospace and cadastral data from Singapore Land Authority were used as secondary data to obtain or improve the attributes of utilities that were extracted from the radargram and to explore the relationship between the above land administration information and underground utilities. These existing utility data are as-build data from utility services (e.g., power, water, gas, telecommunication and sewerage) and cadastral information in 2D form. Of these datasets, it contains only a small portion of the information that has a diameter with updated time and type. It possesses challenges for land planning with such limited information. 3D Visualisation To develop the 3D utility data model for land administration, the underground utilities need to be connected to the land parcels. Figure 12 explains the work process in this case study. The data model is designed in UML and exported to XML format, which can be imported into ArcGIS as a geodatabase schema. Based on the database schema, the GPR data can be loaded as utility network components in polyline and point. According to the information from the existing utility data and GPR data, the utilities can be modelled in 3D (multipath). The 3D modelling is realized manually in the ArcSence and CityEngine. In order to get the related land administration information, the utility networks data can be integrated with a cadastral parcel through their spatial relationships. Because the existing cadastral data are in 2D, the current work only considers the pipeline within the cadastral parcel in 2D. In order to improve the accuracy of data in 3D, the current cadastral data have to be extended to 3D so as to support more spatial relationships (e.g., cross and touch). Figure 13 shows an example of 3D visualization of utilities with objects above ground. As shown in the figure, the selected pipeline is highlighted in pink. The information shown in the pop window includes spatial data from GPR and other attributes about the underground utility survey and land cadastral information above ground. Discussion This is a simple implementation to explore the work process of 3D modelling of underground utility from the GPR data and existing 2D data. Because GPR cannot capture the diameters, material and some attributes of utilities, it is necessary to extract these information from the GeoSpace database for 3D modelling. Depending on the spatial relationship (e.g., overlap, within) of the GPR data and existing utility data, some of the utilities from GPR data can be connected to the existing utility data. Because of two main limitations, there is a big challenge to improve the accuracy of data during the manual integration of the GPR and existing data. First, the existing utility data are as-build data which may not be reliable enough for updating work. Second, the existing utility data are in 2D data, which is difficult to identify utilities accurately. 
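The 2D spatial relationships used here to attach cadastral information to the extracted utilities can be prototyped with standard GIS predicates. The minimal sketch below checks the 'within' and 'crosses' relations between utility segments and cadastral parcels using Shapely; the coordinates and identifiers are made up for illustration, and a production workflow would read the geometries from the geodatabase and, once 3D cadastral parcels are available, move to 3D-aware relationships instead.

```python
# Minimal sketch of the 2D "within"/"crosses" checks used to relate extracted
# utility segments to cadastral parcels.  Geometries and identifiers are made
# up; a real workflow reads them from the GIS database.
from shapely.geometry import LineString, Polygon

parcels = {
    "LOT-0001": Polygon([(0, 0), (50, 0), (50, 30), (0, 30)]),
    "LOT-0002": Polygon([(50, 0), (100, 0), (100, 30), (50, 30)]),
}
utility_segments = {
    "water-123": LineString([(5, 10), (40, 10)]),   # entirely inside LOT-0001
    "power-456": LineString([(45, 5), (80, 5)]),    # crosses the parcel boundary
}

for seg_id, geom in utility_segments.items():
    hosts = [pid for pid, parcel in parcels.items() if geom.within(parcel)]
    crossed = [pid for pid, parcel in parcels.items() if geom.crosses(parcel)]
    print(seg_id, "within:", hosts, "crosses:", crossed)
```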
Hence, the future work needs to find the solution to detect much more attributes of utilities from GPR data. In addition, the tentative integration of underground utility and land cadastral data helps to improve the development of the data model for land administration. Conclusions This paper proposes to develop a consolidated 3D data model of underground utilities for land administration. The work includes two parts. On the one hand, a framework for data governance is designed to organize the workflow of utility data survey, management and application through five roles. Through the understanding of current workflow in the utility data usage, this work needs to clearly define the operations and rights of each role in the work process of 3D underground utility mapping. On the other hand, a 3D data model of underground utilities is designed with 3D spatial information, i.e., utility survey information, and land administration information of underground utilities. In order to fill the gap between data capture and usage, this data model has the following main tasks: • Integrating utility networks data from varying non-destructive surveying technologies. Moreover, it proposes an idea to manage the data accuracy based on the parameters, ground condition and other information during the field survey. This is a first step towards bridging the gap between data acquisition and data management for 3D underground utility mapping. • Integrating the existing data and GPR data. As mentioned earlier, GPR data cannot get the diameters and types of utilities. This way helps to improve the attributes of utilities from GPR data. Moreover, it is also a process to transform utility data from 2D to 3D. • In the data integration, the key step is to connect the utility network data model with the LADM for 3D cadastral management of underground utility in Singapore. It is useful to support ownership management applications and build the relationship between utilities and land parcels. Such a reliable and consolidated centralized repository of underground utility data will provide a crucial basis for land administration of underground infrastructures. A case study is implemented based on the GPR data from the large scale mobile underground utility mapping. The initial implementation transform GPR data from CAD to GIS format and 3D visualization of utilities based on the 3D utility data model. In order to get land administration information, the utility networks have been connecting to the cadastral parcel. The accuracy and details of utility networks need to be improved in future work, such as the spatial relationship between utilities and cadastral parcels. To fully support the land administration of underground space, the 3D utility data model should eventually be extended to include other underground objects and infrastructures in the future, such as underground substations, pedestrian links, common services tunnels, road and rail networks, etc. This is an ongoing work and in the initial stage. Two main aspects of limitations need to be improved in future work. First, for the accuracy of utility data. Obviously, the GPR data are not enough to provide comprehensive 3D underground utility networks. The other kinds of data (e.g., Gyroscope) should be integrated to provide more precise attributes for underground utilities. Moreover, the details of the shapes and structures of utilities need to be improved. 
Second, the next step of the data model development will refine the definition of land administration for underground utilities. Additionally, in order to develop a comprehensive underground utility database, it is necessary to explore methods for using the existing data and integrating it with newly collected data. The 3D data model should also be extended to 4D (3D + time) to support data updating. A showcase will be developed to realize land administration of underground utilities based on the 3D underground utility data model. This will be done together with a selected agency acting as the data regulatory body and the preferred data integrator, who will help us to evaluate and improve the framework and the definition of the data model. After that, recommendations from this showcase will be used to extend the data model to include other underground infrastructures and to develop a platform for underground space management that supports various applications in Singapore.
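The 2D parcel linkage and attribute transfer described in the case study above can be prototyped outside ArcGIS. The following Python sketch is illustrative only: the utility segment, cadastral parcel and as-built record are invented stand-ins rather than data from the study, and the open-source shapely library is used to evaluate the "within" and "overlap" relationships in 2D, mirroring the manual integration step described earlier.

```python
from shapely.geometry import LineString, Polygon

# Hypothetical GPR-derived utility segment: 3D vertices (x, y, depth).
# Shapely stores the z values but evaluates spatial predicates in 2D,
# which matches the 2D parcel linkage used in the case study.
gpr_segment = LineString([(10.0, 5.0, -1.2), (14.0, 5.2, -1.3), (18.0, 5.5, -1.4)])

# Hypothetical 2D cadastral parcel and as-built utility record (both assumed).
parcel = Polygon([(0, 0), (25, 0), (25, 12), (0, 12)])
as_built = {
    "geometry": LineString([(9.5, 5.1), (18.5, 5.4)]),  # 2D as-built centreline
    "type": "water",
    "diameter_mm": 300,
    "updated": "2015-06-01",
}

# 'Within' relationship: link the GPR segment to the land parcel in 2D.
if parcel.contains(gpr_segment):
    print("GPR segment lies within the cadastral parcel; attach parcel attributes.")

# 'Overlap' stand-in: transfer diameter and type from the as-built record when
# the GPR segment falls inside a tolerance buffer around the as-built centreline.
tolerance_m = 0.5
if as_built["geometry"].buffer(tolerance_m).contains(gpr_segment):
    enriched = {
        "source": "GPR",
        "mean_depth_m": 1.3,
        "type": as_built["type"],
        "diameter_mm": as_built["diameter_mm"],
    }
    print("Enriched GPR segment:", enriched)
```

Because shapely evaluates predicates in the x–y plane while retaining z on the vertices, the same structure could later carry the depth information once the cadastral data are extended to 3D.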
The Effect of Buginese Language Transfer on Students' English Pronunciation: A Case Study at SMAN 4 Barru Abstract Indonesia is a country that consists of various cultures and possesses hundreds of native languages. Therefore, in the process of L2 acquisition, the impact of L1 on English articulation is certainly a tough obstacle for Indonesian EFL learners. In SLA, this is known as language transfer. Buginese, one of the native languages of South Sulawesi, also produces positive and negative transfer in English pronunciation. This was shown through a qualitative case study of 20 students from class XI IPA 2 at SMAN 4 Barru. To obtain the data, several methods were used, namely questionnaires, students' recordings, interviews and observation. The results of the study showed that Buginese gave major negative transfer towards the vowels /ə/ and /æ/, the diphthongs /ɪə/, /eə/, /ʊə/ and /əʊ/, the consonants /p/, /f/, /ŋ/ and /n/, and also the clusters /skr/, /spl/ (initial), /sk/, and /bl/. Moreover, this language gave minor positive transfer towards sounds that also exist in Buginese and Bahasa Indonesia, and minor negative transfer towards several other vowels and consonants. INTRODUCTION Language is inseparable from human beings, as it is the medium through which we communicate with one another. As we live in the era of globalization, the demand for learning foreign languages, especially English, is increasing, since English has become a communication tool among people around the world. To speak English fluently, EFL learners must master a number of sub-skills, including vocabulary, grammar, pragmatics, pronunciation and others (Fraser, 2000). According to Fraser, the most important of these sub-skills is pronunciation: a speaker with good pronunciation remains understandable even when the speech contains errors, whereas a speaker with bad pronunciation causes misunderstanding in communication. However, as with any other aspect of English, there are many factors that may become obstacles for learners of pronunciation. Kenworthy (1987) divided the factors that affect pronunciation learning into the native language, the age factor, amount of exposure, phonetic ability, attitude and identity, and motivation. Zhang (2009), on the other hand, proposed that the factors affecting pronunciation fall into two areas, namely internal and external factors. Internal factors focus on the L2 learners themselves and involve biological factors (i.e., age, ear perception, and aptitude) and individual differences (i.e., personality, attitude, motivation, identity, individual effort, and goal setting). External factors involve the L2 learner's learning environment and relate to the learner's native language, exposure, and educational factors. The impact of the native language on English articulation is certainly a tough obstacle for Indonesian EFL learners, as Indonesia consists of various cultures and possesses hundreds of native languages. In the process of acquiring a second language, the influence of the prior language is called language transfer or cross-linguistic influence. This is in line with Saville-Troike's (2006) argument that in acquiring a second language there is general agreement that cross-linguistic influence, or transfer of prior knowledge from L1 to L2, is one of the processes involved in interlanguage development. Language transfer has both positive and negative effects on second language acquisition.
Positive transfer describes the case in which an L1 structure or rule is used in an L2 utterance and that use is appropriate or correct; when an L1 structure or rule is used in an L2 utterance and that use is inappropriate and counted as an error, it is considered negative transfer. Several studies have investigated positive and negative transfer from L1 to L2 (e.g., Dewi, 2013). The realization of language transfer also occurs in one of the native languages of South Sulawesi, namely Buginese. Buginese people are bilingual speakers: they use Indonesian in formal settings and Buginese in informal contexts such as daily communication. Their strong accent and different phoneme production usually become obstacles when they learn a new language (Nasir, 2016). The same holds for English as a second language; thus, a thorough investigation of which segmental phonemes are affected by Buginese needs to be carried out. Several previous studies have examined sounds that are difficult for Buginese speakers to utter, for instance /f/ and /v/, /θ/ and /d/, /s/ and /ž/, the vowel /æ/ and the diphthongs /ɪə/, /ʊə/, /əʊ/ and /eə/. The reasons for these difficulties are the different sound systems of Buginese and English and the strong, heavy accent carried over from the dialect (Nurpahmi, 2013; Padilah et al., 2018). These previous studies, however, only compared the two languages among speakers in general. The current study, meanwhile, attempts to examine both the positive and the negative transfer from Buginese that occurs in the English segmental features produced by students. Moreover, the role of the teacher also needs to be analyzed, since teachers contribute to improving students' English pronunciation. The results of the study are expected to be a beneficial finding for teachers and students, especially in South Sulawesi. METHOD This is a qualitative case study that used field notes to obtain the data needed. The participants of the study were the students of class XI IPA 2 of SMAN 4 Barru. The total number of students in the class was 24, later limited to 20 because the research required students who originally came from Barru Regency. Several instruments were used to collect the data: questionnaires, to obtain data about the students' origin and background; students' recordings, to obtain data about their pronunciation, for which the students were required to read an English text, a list of sentences, and target words representing the initial, middle and final position of each sound; interviews, to obtain information related to the role of the teacher in helping the students improve their English pronunciation; and, finally, observation, to obtain information about the teacher's contribution in a real classroom situation. RESULTS AND DISCUSSION The results of the data analysis lead the study to several findings. These are divided into five parts and explained as follows. English Vowels Affected by Buginese Language The analysis of the entire set of English vowel sounds leads to three final results. First, Buginese gave minor positive transfer towards /ʌ/, /ɪ/, /e/, /ʊ/ and /ɔ/. These sounds were identified as unproblematic for the students to pronounce. The factors behind this ease are the facilitation from Buginese and also from Bahasa Indonesia, as these sounds exist in both languages' sound systems. This situation is known as positive transfer.
As stated by Seville-Troike (2006) who argued that in interlanguage development, transfer from prior language is one of the processes happening towards second language acquisition. As there were positive and negative form of transfer, apparently positive transfer is a condition where the structure and rule of L1 suitable to be applied in L2. Second, Buginese language gave minor negative transfer towards vowels /i:/, /ɑ:/, /ɔ:/, and /u:/ and also vowels /ɒ/. From the findings results, it could be stated that the entire words that represent long vowels were substituted into short vowels /ɪ/, /ʌ/, /ɔ/ and /ʊ/, meanwhile, sound /ɒ/ was tended to be pronounced into /ɔ/. The phenomenon was due to the inexistence of both long vowels and /ɒ/ in their first language namely Buginese language. Even though the inexistence of the sound occurred in Buginese language, it could not be said that the first language was the main cause of the negative transfer. Other factors might come from the inexistence of the sound in Bahasa Indonesia and teachers who did not introduce the sound to the students because of lack of time in teaching English. In addition, the major negative transfer from Buginese language could be seen in vowels /ə/ and /ae/. Buginese language recognized both sound /e/ and /ə/ in its sound system, for instance [mʌegʌ] (many) and [mʌkʌtə] (itchy). However, in pronouncing the entire words in during the recording, I realized that the students overused the sound /e/ and substituted it from sound /ə/, such as in the word 'development' [dɪˈvɛləpmənt] that pronounced as [dɛfɛlɔfmɛn]. This phenomenon was one of the negative transfer that comes from Buginese language as Buginese people most frequently using sound /e/ in their daily communication. Moreover, the students had difficulties in uttering sound /ae/ in word 'act' [ˈaekt] and 'character' [ˈkaerɪktə] and tended to substitute the current sound with /ʌ/. English Diphthongs Affected by Buginese Language Dealing with the data gave me several final results related to the effect of Buginese language. First, it could be seen that Buginese language gave minor positive transfer to diphthongs such as /ɔɪ/, /eɪ/ (middle and final), and /aɪ/ (middle). From the result, it could be concluded that students have no difficulties to produce those sounds and the effects of Buginese language as L1 was one of the factors that facilitated the positive transfer. According to Nurpahmi (2013), Buginese sound system recognized more diphthongs than English. There are /aɪ/, /eɪ/, /aʊ/, /ɔe/, /ʊɪ/, /ɔɪ/, /ʊe/, /aɪ/, /ʊa/, /ɪa/ and /ɪʊ/. Apparently in her study, she confirmed that there were four diphthongs that exist both in Buginese language and English, namely /aɪ/, /eɪ/, /aʊ/, /ɔɪ/ and according to students' pronunciation result; the familiarity of the sounds makes them easy to pronounce the represented words. Moreover, there were some diphthongs that did not receive any negative transfer from Buginese language to the students. It is /eɪ/ (initial), /aɪ/ (initial and final) and /aʊ/. Even though the students were familiar with those sounds, they seem have difficulties in pronouncing the represented words. The examples were 'agent' that tend to be pronounced as [ʌgɛn], 'aisle' as [ɛisli], 'sky' as [skɪ] and others. As stated by Seville-Troike (2006), intralingual errors are the result of incomplete learning of L2 rules or overgeneralization of them and not attributable to cross-linguistic influence. 
So, the errors made by the students can be categorized as developmental or intralingual errors which due to the limited and incomplete L2 learning that lead to confusion to choose the correct use of sound. Buginese Language The final result of the recordings brought several arguments that later divided into how Buginese language affected positively or negatively towards the consonants. First, Buginese language gave minor positive transfer towards consonants such as /b/, /d/, /g/, /h/, /k/, /l/, /m/, /r/, /s/, /t/, /w/ and /y/. As these sounds existed in the speech sounds of Buginese language, therefore, the students were facilitated and did not feel any difficulty in pronouncing the sounds. Apparently, they categorized as receiving minor positive transfer from Buginese language due to many factors that assisted students' easiness to utter them and not only from Buginese language. Other factor that supported the facilitation was Bahasa Indonesia that the students have learned in school. Second, Buginese language gave minor negative transfer to the consonants sounds such as /ʤ/, /ʒ/, /z/, /v/, /ð/, /θ/, /ʧ/, and /ʃ/. I classified that Buginese language only gave minor negative transfer and not major as there were other factors that affecting the transfer for instance, Bahasa Indonesia and spelling interference. For sound /ʤ/, even though it existed in both speech sound of Buginese language and Bahasa Indonesia, they tend to substitute the sound into /g/ in the word 'religion' and 'privilege' in the middle and final position. Other factor might influence the substitution and one of that was spelling interference. In addition, the influence of Buginese language and Bahasa Indonesia were also noticed in sound /ʧ/ where the students had tendency to pronounce the sound as sound /c/ that existed in both language. It was in line with Ramelan's argument in Mulya (2019) that Indonesian students tend to substitute sound /ʧ/ with sound /c/ as in word [cantik] (beautiful) which is more alveolar and not rounding. In the middle position for the word 'eventually' instead, they changed the sound /ʧ/ into /t/, so it could be said that they tend to utter the word exactly as how it is written. In the other hand, /ð/, /θ/, and /ʃ/ were sounds in English that did not exist in speech sound of both Buginese language and Bahasa Indonesia. Therefore, students tended to pronounce those sounds into the nearest sound in their first language; for instance, /ð/ becomes /d/, /θ/ becomes /t/, and /ʃ/ becomes /s/. In addition, students could not pronounce the sound /z/ and /v/ in initial, medial and final position. They had tendency to change sound /v/ with /f/ and /p/ while /z/ is changed into /s/. Third, Buginese language gave major negative transfer to the consonant sound /p/, /f/, /ŋ/ and /n/. In observing students' pronunciation, I found out that the substitution between sound /p/ and /f/ were done by the students naturally and unintentionally. For instance, in pronouncing 'politician' and 'paper', some students pronounced it with sound /p/ at the first meeting but later they changed the sound into /f/ until it became 'folitician' and 'fafer'. Moreover, the substitution between /ŋ/ and /n/, or vice versa also happened in students pronunciation. English Consonant Clusters Affected by Buginese Language The Buginese language gave major negative transfer towards clusters such as /skr/, /spl/ (initial), /sk/, and /bl/. 
From the students' results, it could be seen that in pronouncing the words 'screw', 'splash', 'skill' and 'black', they tended to insert the sound /ə/ within the clusters. For example, 'splash' became [səplæʃ], 'skill' became [səkɪl], 'screw' became [səkrɔu], and 'black' became [bəlek]. In addition, the word 'establish', representing the middle position of the cluster /bl/, was also affected by Buginese: the students tended to insert the sound /ɪ/ within the cluster, so that the word was pronounced [ɪsˈtʌbɪlɪs]. Teacher's Role in Improving Students' English Pronunciation To collect the data related to this research question, I used interviews and observation as the instruments. From the interview with the teacher and the observation in the classroom, I arrived at several findings related to pronunciation teaching. First, the teacher stated that she trained and monitored the students' pronunciation every time they read a passage or sentences in class, but in reality she only corrected the students' pronunciation when they were learning new vocabulary or whenever they failed to pronounce words correctly, and this happened only once or twice throughout a meeting. Harmer, cited in Gilakjani (2016), argued that many teachers pay more attention to skills such as grammar and vocabulary to help foreign learners in listening and reading, so that the importance of pronunciation is neglected. In addition, the limited time allocation in the 2013 curriculum was a consideration that led the teacher to divide the time cautiously and to prefer teaching other skills rather than pronunciation. Second, the teacher admitted that in the learning process the dictionary is a crucial tool that helps students acquire not only new vocabulary but also pronunciation. In reality, however, bringing a dictionary to the English class was not a necessity for the students and was treated as little more than a formality. To cope with this situation, the teacher needs to be aware of the importance of teaching pronunciation, at the very least by asking the students to bring a dictionary and having them pronounce words correctly. Last, the native language of both the teacher and the students is another issue that the teacher needs to take into account. Kenworthy (1987) stated that the native language is one of the factors that affect a learner's pronunciation, along with the age factor, amount of exposure, phonetic ability, attitude and identity, and motivation. The native language effect is undeniable and later becomes the special feature or characteristic called accent. Since every native language brings some negative transfer to English language learning, the teacher needs to reduce its effects by providing correct and proper pronunciation models for the students. CONCLUSION The conclusions of the study lead to several points. Buginese gave minor positive transfer towards the sounds /ʌ/, /ɪ/, /e/, /ʊ/ and /ɔ/, as these sounds also exist in Bahasa Indonesia, and the positive transfer may therefore come from both languages. Moreover, it gave minor negative transfer towards the long vowels /i:/, /ɑ:/, /ɔ:/ and /u:/ and also the vowel /ɒ/. The strong influence of the language could be seen in two vowels, namely /ə/ and /æ/. Next, of the seven consonant clusters investigated, Buginese gave major negative transfer towards the clusters /skr/, /spl/ (initial), /sk/, and /bl/.
From the students' results, it could be seen that in pronouncing the words 'screw', 'splash', 'skill' and 'black' they tended to insert the sound /ə/ within the clusters, while for the word 'establish', representing the middle position of the cluster /bl/, they tended to insert the sound /ɪ/ within the cluster. Last, the teacher's effort to improve the students' pronunciation is still insufficient; the teacher's awareness of the need to provide correct pronunciation to the students is still lacking. The factors behind this are, first, the limited attention given by the teacher to teaching pronunciation, with the teacher preferring to teach other skills instead. Beyond that, the limited time allocated to teaching pronunciation and the limited awareness of the use of media such as dictionaries are also contributing factors. Finally, the native language of both the teacher and the students is an undeniable contributing factor, and its negative transfer needs to be reduced.
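The substitution analysis running through the results above can be made concrete with a small script. The sketch below is illustrative only: the (target, produced) phoneme pairs are invented stand-ins for the students' recordings, and the tally simply counts how often each target sound is replaced, which is the kind of evidence used above to separate minor from major negative transfer.

```python
from collections import Counter, defaultdict

# Hypothetical per-segment observations: (target phoneme, phoneme actually produced).
# These pairs are invented for illustration; in the study they would come from
# comparing each student's recording against the target transcription.
observations = [
    ("ə", "e"), ("ə", "e"), ("ə", "ə"), ("æ", "ʌ"), ("æ", "ʌ"),
    ("v", "f"), ("v", "p"), ("θ", "t"), ("ð", "d"), ("ʃ", "s"),
    ("p", "f"), ("f", "p"), ("ŋ", "n"), ("iː", "ɪ"), ("iː", "iː"),
]

# Tally substitutions per target phoneme.
substitutions = defaultdict(Counter)
for target, produced in observations:
    substitutions[target][produced] += 1

# Report the substitution rate and most frequent realization for each target.
for target, counts in substitutions.items():
    total = sum(counts.values())
    wrong = total - counts[target]
    most_common = counts.most_common(1)[0][0]
    print(f"/{target}/: {wrong}/{total} substituted ({wrong / total:.0%}); "
          f"most frequent realization /{most_common}/")
```

A similar tally over cluster tokens (for example, counting an inserted /ə/ between consonants) would capture the epenthesis pattern reported for /skr/, /spl/, /sk/ and /bl/.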
Global Stability of Positive Periodic Solutions and Almost Periodic Solutions for a Discrete Competitive System Here $a_2(t)x_2(t)/(1 + d_1(t)x_1(t))$ and $a_1(t)x_1(t)/(1 + d_2(t)x_2(t))$ denote the competitive response functions, respectively. All the coefficients above are continuous and bounded above and below by positive constants. Discrete-time systems governed by difference equations have recently received widespread attention and have been applied to studying population growth, the transmission of tuberculosis and HIV/AIDS, and influenza prevention and control (see [2, 3]), precisely because discrete-time models conform better to reality than continuous ones, especially for populations with a short life expectancy or non-overlapping generations. In addition, some work on the bifurcation, chaos, and complex dynamical behaviors of discrete species systems has been done (see [4, 5]). In practice, according to the discrete data measured, discrete-time models commonly provide efficient computational counterparts of continuous models for numerical simulations (see [2, 3, 6–11]). Therefore, we derive the discrete analogue of system (1) by using the same discretization method (see [11]); a reconstruction of the resulting system, under stated assumptions, is sketched after the overview of our two main tasks below. From an evolutionary perspective, because of the selectivity of species evolution, periodically varying environments are of vital importance for survival of the fittest. For instance, any periodic change of climate tends to impose its period upon oscillations of internal origin or to cause such oscillations to have a harmonic relation to periodic climatic changes (see [11–16]). Therefore, the coefficients of many systems constructed in ecology are usually taken to be periodic functions (see [12, 13]). Not long ago, Wang (see [12]) studied a delayed predator–prey model with Hassell–Varley type functional responses and obtained sufficient conditions for the existence of positive periodic solutions by applying the coincidence degree theorem. Many excellent results concerning discrete periodic systems have been obtained (see [14–16]). In nature, however, the various environmental components, such as seasonal weather change, food supplies, mating habits and harvesting, hardly ever have commensurate periods. Compared with periodic systems, we can thus incorporate the assumption that the coefficients of (1) are almost periodic, to reflect the time-dependent variability of the environment (see [6, 8–10, 17]). Recently, Li et al. (see [18]) proposed an almost periodic discrete predator–prey model with time delays and investigated the permanence of the system and the existence of a unique uniformly asymptotically stable positive almost periodic sequence solution. Afterwards, by using Mawhin's continuation theorem of coincidence degree theory, reference [19] obtained some sufficient conditions for the existence of positive almost periodic solutions for a class of delay discrete models with Allee effect. Notice that the investigation of periodic solutions and almost periodic solutions is one of the most important topics in the qualitative theory of difference equations. In this paper, based on the ideas mentioned above, we carry out two main pieces of work for system (2). (i) We establish sufficient conditions for the existence and global stability of positive periodic solutions of system (2) with positive periodic coefficients. (ii) Further, we discuss the almost periodic solutions of system (2) with positive almost periodic coefficients.
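The displayed equations for systems (1) and (2) did not survive text extraction. The block below is a hedged LaTeX reconstruction of the discrete analogue, assuming the standard exponential discretization of a two-species competitive system with the response functions quoted above; the growth coefficients $r_i(k)$ and self-limitation coefficients $b_i(k)$ are assumed names, not taken from the original paper.

```latex
% Hedged reconstruction of the discrete analogue (system (2)); r_i(k), b_i(k)
% are assumed intrinsic growth and self-limitation coefficients.
\begin{aligned}
x_1(k+1) &= x_1(k)\exp\!\Big\{ r_1(k) - b_1(k)\,x_1(k)
          - \frac{a_2(k)\,x_2(k)}{1 + d_1(k)\,x_1(k)} \Big\},\\[2pt]
x_2(k+1) &= x_2(k)\exp\!\Big\{ r_2(k) - b_2(k)\,x_2(k)
          - \frac{a_1(k)\,x_1(k)}{1 + d_2(k)\,x_2(k)} \Big\},
\qquad k \in \mathbb{Z}^{+}.
\end{aligned}
```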
The organization of this paper is as follows.In Section 2, we present some notations and preliminary lemmas.In Section 3, we seek sufficient conditions which ensure the existence and global stability of positive periodic solutions of system (2).In Section 4, we further investigate the existence, uniqueness, and uniformly asymptotic stability of positive almost periodic solutions for system (2) above.In Section 5, we present an example and its numerical simulations are carried out to illustrate the feasibility of our main results.In Section 6, a conclusion is given to conclude this work. Notations and Preliminaries Lemmas Throughout this paper, the notations below will be used: where {ℎ()} is a bounded sequence and Z + = {0, 1, 2, 3, . ..}. Denote by R, R + , Z, and Z + the sets of real numbers, nonnegative real numbers, integers, and nonnegative integers, respectively.R 2 and R are the cones of 2-dimensional and -dimensional real Euclidean spaces, respectively.Definition 1 (see [10] is referred to as the -translation number of (). Lemma 3 (see [10]).{()} is an almost periodic sequence if and only if for any sequence { } ⊂ Z there exists a subsequence { } ⊂ { } such that (+ ) converges uniformly on ∈ Z as → ∞.Thus, the limit sequence is also an almost periodic sequence.Furthermore, we consider the following almost periodic difference system: where ℎ : Z + × C → R , C = { ∈ C : ‖‖ < }, and ℎ(, ) is almost periodic in uniformly for ∈ C and is continuous in . The product system of ( 7) is in the following form: and [20] obtained the following lemma, where ((, ), (, )) is a solution of (8). Moreover, suppose that there exists a solution () of system (7) such that ‖ ‖ ≤ * < for all ∈ Z + ; then there exists a unique uniformly asymptotically stable almost periodic solution () of system (7) which satisfies |()| ≤ * .In particular, if ℎ(, ) is periodic of period , then system (7) has a unique uniformly asymptotically stable periodic solution with period . Existence and Stability of Positive Periodic Solutions Apparently, the permanence of system (2) can be obtained according to Lemmas 5 and 6.In the following, we will show the existence and stability of positive periodic solutions of system (2).To this end, let us assume that all the coefficients of system (2) are -periodic; namely, Lemma 7 (see [16]).If the assumption (10) holds, then system (2) has at least one strictly positive -periodic solution and is denoted by 2) is globally stable if each other solution ( 1 (), 2 ()) with positive initial value defined for all > 0 satisfies lim Now, we present the main results. Theorem 9. Let the following assumption and ( 10) hold; then the positive periodic solution of system ( 2) is globally stable. Denote exp 1 () = 1 ()/ * 1 () and exp 2 () = 2 ()/ * 2 (); then we have which, according to the mean value, yields where all the constants 1 , 2 , 3 , 4 ∈ (0, 1).Obviously, together with (14) we can find a sufficiently small such that It follows from Lemmas 5 and 6 that there exists an 0 such that > 0 ; we have Then one obtains the fact that both * 1 () exp( Similar to the arguments as above, we must have We denote Existence and Stability of Positive Almost Periodic Solutions In this section, we discuss the existence of positive almost periodic solutions of system (2). 
For any shift in $\mathbb{Z}^{+}$, assume that the shifted index is nonnegative once the index is large enough. By an inductive argument on system (2) from the shifted starting point, one obtains (25). Hence, letting the index tend to $+\infty$ in (25), it is easy to see that $(x_1^{*}(k), x_2^{*}(k))$ is a solution of system (2) on $\mathbb{Z}^{+}$ for an arbitrary shift, and we then obtain (29), since the remaining quantity is an arbitrarily small positive constant. This completes the proof. Finally, we are ready to state the main result of this section. Example and Simulations In this section, we give only the following example concerning almost periodic solutions, system (40) with coefficients specified in (42), to check the feasibility of the assumptions of Theorem 11, since the simulation for the periodic model is similar. Clearly, the assumptions of Theorem 11 are satisfied and all the coefficients are appropriate. Hence, system (40) admits a unique uniformly asymptotically stable positive almost periodic solution. From Figure 1 we easily see that there exists a positive almost periodic solution $(x_1^{*}(k), x_2^{*}(k))$, and the 2-dimensional and 3-dimensional phase portraits of the almost periodic system (40) are shown in Figure 2. Figure 3 shows that any positive solution $(x_1(k), x_2(k))$ tends to the almost periodic solution $(x_1^{*}(k), x_2^{*}(k))$. Conclusions In this paper, we consider a discrete two-species competitive model whose periodic solutions and almost periodic solutions are discussed in turn. By the scaling law and the mean-value theorem, a good understanding of the existence and stability of positive periodic solutions is gained. Furthermore, by constructing Lyapunov functions, conditions for the asymptotic stability of the positive almost periodic solution are established. The assumption in (10) implies that the corresponding coefficients should be suitably large.
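The numerical simulations referred to above (Figures 1–3) can be reproduced in spirit with a short script. The sketch below is illustrative only: it iterates a discrete competitive map of the form reconstructed earlier, with almost periodic coefficients chosen arbitrarily for demonstration; these are not the coefficients of system (40) in the original paper.

```python
import math

# Almost periodic coefficients (arbitrary illustrative choices, not from the paper):
# combining cos(k) and cos(sqrt(2)*k) gives sequences with incommensurate frequencies.
def r1(k): return 0.8 + 0.1 * math.cos(k) + 0.05 * math.cos(math.sqrt(2) * k)
def r2(k): return 0.7 + 0.1 * math.sin(k) + 0.05 * math.cos(math.sqrt(3) * k)
def b1(k): return 1.0
def b2(k): return 1.0
def a1(k): return 0.2 + 0.05 * math.cos(math.sqrt(5) * k)
def a2(k): return 0.2 + 0.05 * math.sin(math.sqrt(2) * k)
def d1(k): return 0.5
def d2(k): return 0.5

def step(k, x1, x2):
    """One iteration of the (assumed) exponential discretization of the model."""
    x1_next = x1 * math.exp(r1(k) - b1(k) * x1 - a2(k) * x2 / (1.0 + d1(k) * x1))
    x2_next = x2 * math.exp(r2(k) - b2(k) * x2 - a1(k) * x1 / (1.0 + d2(k) * x2))
    return x1_next, x2_next

# Iterate two different positive initial conditions; if a unique uniformly
# asymptotically stable almost periodic solution exists, the two trajectories
# should approach each other as k grows.
xa, xb = (0.3, 0.9), (1.2, 0.2)
for k in range(200):
    xa = step(k, *xa)
    xb = step(k, *xb)
print("final states:", xa, xb,
      "difference:", abs(xa[0] - xb[0]) + abs(xa[1] - xb[1]))
```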
Geo-spatial Hotspots of Hemorrhagic Fever with Renal Syndrome and Genetic Characterization of Seoul Variants in Beijing, China Background Hemorrhagic fever with renal syndrome (HFRS) is highly endemic in mainland China, and has extended from rural areas to cities recently. Beijing metropolis is a novel affected region, where the HFRS incidence seems to be diverse from place to place. Methodology/Principal Findings The spatial scan analysis based on geographical information system (GIS) identified three geo-spatial “hotspots” of HFRS in Beijing when the passive surveillance data from 2004 to 2006 were used. The Relative Risk (RR) of the three “hotspots” was 5.45, 3.57 and 3.30, respectively. The Phylogenetic analysis based on entire coding region sequence of S segment and partial L segment sequence of Seoul virus (SEOV) revealed that the SEOV strains circulating in Beijing could be classified into at least three lineages regardless of their host origins. Two potential recombination events that happened in lineage #1 were detected and supported by comparative phylogenetic analysis. The SEOV strains in different lineages and strains with distinct special amino acid substitutions for N protein were partially associated with different spatial clustered areas of HFRS. Conclusion/Significance Hotspots of HFRS were found in Beijing, a novel endemic region, where intervention should be enhanced. Our data suggested that the genetic variation and recombination of SEOV strains was related to the high risk areas of HFRS, which merited further investigation. Introduction Hantaviruses are rodent-borne pathogens with a worldwide distribution. More than 50 hantaviruses have been found in the world [1][2][3], each of which appears to have coevolved with a specific rodent or insectivore host [4]. As with other members of the family Bunyaviridae, hantaviruses are enveloped, negative-sense RNA viruses. The genome consists of three segments, designated as large (L), medium (M), and small (S). They respectively encoded the RNA polymerase, the glycoprotein precursor (GPC) protein that is processed into 2 separate envelope glycoproteins (Gn and Gc), and the nucleocapsid (N) protein [1][2][3][4]. As nucleoproteins of many negative-strand RNA viruses, hantaviral N protein is a multifunctional molecule involved in various interactions during the life cycle of the virus. It has important functions in the viral RNA replication, encapsidation and also in the virus assembly [5]. Hantavirus can cause two kinds of human illnesses, hantavirus cardiopulmonary syndrome (HCPS) and hemorrhagic fever with renal syndrome (HFRS). HCPS is caused by New World hantaviruses circulating in north America and south America. HFRS, a disease characterized by renal failure, hemorrhages, and shock with a case fatality of 0.1% to 10%, occurs primarily in Asia and Europe [1][2][3]. HFRS is highly endemic in mainland China accounting for 90% of the total cases reported in the world [6]. Although integrated intervention measures involving rodent control, environment management and vaccination have been implemented, HFRS remains a significant public health problem with more than 10,000 human cases diagnosed annually [7]. Hantaan virus (HTNV) and Seoul virus (SEOV) mainly carried by Apodemus agrarius (striped field mouse) and Rattus norvegicus (Norway rat), respectively, were known to be the crucial causative agents of HFRS in China [7,8]. 
In addition, Amur virus (AMRV) and Puumala virus (PUUV) were detected recently from Apodemus peninsulae and Clethrionomys glareolus respectively in northeastern China [9,10]. HFRS mainly occurred in rural area in the past. But recently, the endemic areas of the disease have extended from rural to urban areas and even to city centers [11]. Beijing metropolis is a newly affected region of HFRS, where the incidence of the disease has rapid increased since 1997 and the cases have been reported in all the 18 districts. The HFRS incidence seemed to be diverse considerably in difference places of Beijing according to the report from Beijing Center for Disease Control and Prevention (CDC). Previous epidemiological surveys revealed that hantaviruses detected in Beijing were all SEOV strains [12,13,14]. Although the environmental factors were related to the SEOV infectivity in rodent hosts and humans [15,16], the hotspots of HFRS remained unclear and environmental factors weren't able to explain fully the distributional variation in incidence of disease in human. The objectives of this study were to detect ''hotspots'' of HFRS in Beijing metropolis for effective control, to characterize variance of SEOV from the novel endemic region, and to investigate the possible association between SEOV genetic clusters and HFRS ''hotspots''. Ethics Statement The research involving human materials was approved by the Ethical Review Board, Science and Technology Supervisory Committee at the Beijing Institute of Microbiology and Epidemiology. The informed consents were written by patients or their guardians and the related information was used anonymously. The research involving animal samples was approved by Animal Subjects Research Review Boards of the authors' institution and the study was conducted adhering to the institution's guidelines for animal husbandry. Data Collection and Spatial Scan Analysis Records on HFRS cases reported in Beijing between 2004 and 2006 were obtained from the National Notifiable Disease Surveillance System (NNDSS). The vectorization of the village, street, and boundaries of each township was performed on a 1:100,000 scale topographic map and digital map layers were created in ArcGIS 9.0 software (ESRI Inc., Redlands, CA, USA). Demographic information was integrated in terms of the administrative code. Each HFRS case was geo-coded according to their possible infected sites and a layer including information on HFRS cases was created and overlapped on the above digital map. To identify the geo-spatial clusters i.e., ''hotspots'' with high HFRS risk of infection, the spatial scan statistic [17,18] was performed by using SaTScan software [19]. The incidence rates of HFRS in 223 townships were calculated by using the fifth national census data in 2000. The maximum spatial cluster size was set to be 5% of the total population at risk and 9999 Monte Carlo replications were used to test the null hypothesis that the relative risk (RR) of HFRS was the same between any townships or their collection and remaining townships. A P-value,0.05 was considered statistically significant. RT-PCR and Phylogenetic Analysis RNA extraction and reverse transcription reaction were performed as described previously [14]. The complete S sequences were generated from overlapping fragments by polymerase chain reaction (PCR) (primer pairs were presented in Table S1). Partial L sequences were generated using primer pairs previously described [20]. 
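The spatial scan statistic used at the start of this Methods section can be illustrated with a much-simplified stand-in for the SaTScan computation. The sketch below is not SaTScan and does not use the study's data: the township centroids, case counts and populations are invented; circular windows are grown around each centroid; the Poisson-based log likelihood ratio of Kulldorff's statistic is evaluated for each window; and significance is assessed by Monte Carlo replication, mirroring the published analysis only in outline.

```python
import math
import random

# Hypothetical township data: (x, y) centroid, observed HFRS cases, population.
townships = [
    ((1.0, 2.0), 9, 50_000), ((1.2, 2.1), 7, 40_000), ((3.5, 0.5), 1, 60_000),
    ((0.9, 1.8), 6, 30_000), ((4.0, 3.0), 2, 70_000), ((2.0, 2.5), 3, 45_000),
]
total_cases = sum(c for _, c, _ in townships)
total_pop = sum(p for _, _, p in townships)

def llr(cases_in, pop_in):
    """Kulldorff Poisson log likelihood ratio for one candidate window."""
    expected_in = total_cases * pop_in / total_pop
    if cases_in == 0 or cases_in <= expected_in:
        return 0.0
    cases_out = total_cases - cases_in
    expected_out = total_cases - expected_in
    inside = cases_in * math.log(cases_in / expected_in)
    outside = cases_out * math.log(cases_out / expected_out) if cases_out > 0 else 0.0
    return inside + outside

def best_window(case_counts):
    """Grow a circular window around each centroid and keep the best ratio."""
    best = 0.0
    for i in range(len(townships)):
        order = sorted(range(len(townships)),
                       key=lambda j: math.dist(townships[i][0], townships[j][0]))
        cases_in = pop_in = 0
        for j in order:
            cases_in += case_counts[j]
            pop_in += townships[j][2]
            if pop_in > 0.5 * total_pop:  # toy cap; the study capped windows at 5%
                break
            best = max(best, llr(cases_in, pop_in))
    return best

observed = best_window([c for _, c, _ in townships])

# Monte Carlo replications: redistribute all cases in proportion to population.
weights = [p / total_pop for _, _, p in townships]
exceed = 0
for _ in range(999):
    sim = [0] * len(townships)
    for _ in range(total_cases):
        sim[random.choices(range(len(townships)), weights=weights)[0]] += 1
    if best_window(sim) >= observed:
        exceed += 1
print(f"observed log likelihood ratio = {observed:.2f}, "
      f"Monte Carlo p = {(exceed + 1) / 1000:.3f}")
```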
The sequences of SEOV strains were aligned using Clustalx1.8 software [21]. Phylogenetic trees were generated using the Bayesian Metropolis-Hastings Markov Chain Monte Carlo (MCMC) tree-sampling methods implemented by Mr. Bayes 3.1 software [22], using the GTR evolutionary model, with gamma-distributed rate variation across sites and a proportion of invariable sites. The run were stopped until the standard deviation of split frequencies was below 0.01. For comparison, phylogenetic analysis was also performed with the maximum-likelihood algorithm using Phylip software [23]. ML topologies were evaluated by bootstrap analysis of 100 ML iterations. Results A total of 322 confirmed HFRS cases were reported in Beijing metropolis from 2004 to 2006. The incident rates of HFRS in 223 townships ranged from 0/100,000 to 27/100,000. Three clustered areas (hotspots) were identified by spatial cluster analysis ( Figure 1). The most likely clustered area, which was designated as clustered area A, contained 16 townships and located at the east of Beijing downtown, including a population of 570,000. The Relative Risk (RR) of the most likely clustered area was 5.45 (P,0.001). Clustered area B contained 14 townships, including a population of 650,000 with a RR of 3.57 (P = 0.002). This ''hotspot'' located at the northwest of Beijing and adjoined Hebei Province. Clustered area C contained 11 townships, including a population of 230,000 with a RR of 3.30 (P,0.001). Author Summary Hemorrhagic fever with renal syndrome (HFRS) is caused by Hantaviruses, the enzootic viruses with a worldwide distribution. In China, HFRS is a significant public health problem with more than 10,000 human cases reported annually and the endemic areas of the disease have extended from rural to urban areas and even to central cities in recent years. The HFRS incidence has increased recently and the morbidity seemed to be considerably diverse in different areas in Beijing, the capital of China. With the aim of gaining more information to control this disease, we carried out a spatial analysis of HFRS based on the data from human cases during 2004-2006 and investigated the genetic features of complete S and partial L segment sequences of Seoul virus from natural infected rodent hosts and patients. We found three geo-spatial clusters, i.e., ''hotspots'' of HFRS in Beijing, where intervention should be enhanced. Our data indicated that the genetic variation and recombination of SEOV might be related to the high risk areas of HFRS in Beijing, which was worthy of further investigation. Hotspots of HFRS and Seoul Virus in Beijing www.plosntds.org 430-nucleotide L genomic sequences showed 97.4%-100% nucleotide sequence identity with each other and 93.7%-99.1% identity with those of other SEOV. The phylogenetic tree based on entire coding region sequence of S segment showed that the strains circulating in Beijing clustered into three distinct lineages regardless of their host origins ( Figure 2, Figure S1) (HuBJ20 was not included because it was obtained from a patient who got the disease in another place far away from Beijing and only was diagnosed in a hospital of Beijing). The phylogenetic tree based on complete S sequence showed the similar topology structure (data not shown). Five strains from HFRS patients clustered together with a rodent-originated sequences (Rn-SHY17), designated as #1. Two sequences from HFRS patients clustered together with six rodent-originated sequences (including BjHD01), designated as #2. 
One strain from a rodent (Rn-HD27) was distinct from other Beijing strains but close to those from Zhejiang Province where is more than 1, 000 km away from Beijing, which was designated as #3. The topology of the trees based on partial L sequences was similar to that based on S segment. In recombination analysis, three significant potential recombination events (PRE) were detected and two of them involved in the strains circulating in Beijing ( Table 1). One of them involved HuBJ16 strain, whose major parent was Rn-YUE12 and minor parent was HuBJ19. Another PRE happened in HuBJ22 and HuBJ7 with HuBJ19 as the major parent and HuBJ3 as minor parent. In the Phylogenetic trees constructed using sequences of either the putative recombinant region or the region without recombination, changes in the topology of the trees could be observed ( Figure 3). However, In the Phylogenetic trees according to the breakpoints of HuBJ22 strain, it was weakly supported, although the change in the topology of the trees could also be observed (data not shown). By contrast, no evidence of recombination was evident for the partial L segment sequence. The 3 spatial clustered areas of HFRS seemed associated with different SEOV strains. All SEOV strains in lineage #1 were from clustered area A and clustered area B, although the two clustered areas were not contiguous to each other. Apart from HuBJ15, four patient-originated strains and one rodent-originated strain had a special homologous substitution of asparagine to threonine at position 259 of the deduced amino acid sequences of the N protein, which was distinct from all other SEOV strains. Interestingly, all the strains involving PRE belonged to lineage #1. Most strains from rodent hosts were in lineage #2 and scattered in different areas of Figure 1. Spatial clustered areas with higher incidence of HFRS using spatial scan statistics. Spatial scan analysis was performed by moving windows statistics approach. It determined three statistically significant cluster areas (hotspots), designed as cluster area A with a Relative Risk (RR) of 5.45 (P,0.001) (the red area), cluster area B with RR of 3.57 (P = 0.002) (the yellow area) and cluster area C with RR of 3.3 (P,0.001) (the green area). The dots and triangles represent the sites from which sequence data are available, not all human cases or rodent captures in Beijing. The dots represented HFRS cases and the triangles represented the sites where the rodent hosts were captured. HuBJ15 and HuBJ22 were diagnosed in Beijing but their exposed sites fall outside of Beijing. They were included in the figure because they were in vicinity. HuBJ20 were diagnosed in Beijing but the exposed sites of the case were far away from Beijing, thus was not included in the figure. doi:10.1371/journal.pntd.0000945.g001 Hotspots of HFRS and Seoul Virus in Beijing www.plosntds.org Beijing. Among them, four rodent-originated strains (Rn-M11, Rn-DC8, Rn-YUE12 and Rn-CP7) had a same special homologous substitution of asparagine to serine at amino acid position 214, which had never been detected from any other SEOV strains. None of the 4 strains were found in any spatial clustered areas with high RR value. Strains without the homologous substitution at position 214 in lineage #2 could be divided into two parts. One strain from a patient (HuBJ 9) and two strains (Rn-HD11, BjHD01) from the rodent hosts located in clustered area C. Another strain (HuBJ3) from a patient located in southwest of Beijing, an area with low RR value. 
Rn-HD27 strain in lineage #3 without any special substitution also located in clustered area C. Discussion Cluster analyses are important in epidemiology in order to detect aggregation of disease cases, to test the occurrence of any statistically significant clusters, and ultimately to find evidences of etiologic factors. Cluster analysis identifies whether geographically grouped cases of disease can be explained by chance or are Hotspots of HFRS and Seoul Virus in Beijing www.plosntds.org statistically significant. Spatial scan statistic [17,18] implemented in SaTScan software [19] is being widely used to detect the clusters of many infectious diseases [25,26]. Recently, we analyzed the distribution of HFRS cases nationwide using GIS-based spatial analysis and highlight geographic areas where the population had a high risk of acquiring the disease [27]. Beijing is one of the emerging endemic areas of HFRS in recent years. In this study, the district-based digital map layers and three-year surveillance data were analyzed altogether and a moving-window scan statistics approach were used to determine the geo-spatial ''hotspots''. It reduced the effects resulted from probable reporting bias and selecting bias. The results of the study suggested that there were three spatial ''hotspots'' of HFRS in Beijing, where the population had a high risk of acquiring the disease and intervention should be enhanced. Mice of species Apodemus agrarius are quite close to people in rural areas. Consequently, HTNV is still the major cause of HFRS in China. However, more and more HFRS patients caused by SEOV were reported in mainland China. It is possibly related to its animal host, R. Norvegicus, which is spatially more close to human beings than any hosts of other hantaviruses. Sometimes, they can even migrate to the places far away by traveling on vehicles such as ship, train or truck [4]. They may cause international transmission by carrying HV to another place, just as presumed in Taiwan and Indonesia [28,29]. Beijing is the capital of China, transportation of goods and human migration with other regions that followed the rapid economic development was quite frequent in recent years. It is not surprising that several lineage of SEOV were circulating simultaneously in Beijing and appearance of recombination. It was reported that prevalence of SEOV in rodents was different among districts in Beijing [30]. However, it was not consistent with clustered areas based on GIS-based spatial analysis, suggesting other factors might be related to the incidence of the disease in human. Incorporating the additional genetically related taxa into a typical analysis usually leads to better phylogenetic resolution and comprehensive understanding. It had reported the phylogenetic tree based on partial M segment of most strains from rodents in Beijng (30). Since the amount of available biological material from patients was limited, we proceeded to amplify and sequence the entire S genome segment and partial L segment. The S-segment phylogenetic tree indicated that at least 3 lineages of SEOV were circulating in Beijing (figure 2). Strains in the first lineage coincided with the spatial clustered areas of HFRS with high RR values and most strains had special substitution of asparagine to threonine at amino acid position 259 of the N protein. All the strains involving recombination signals in Beijing were all in lineage #1 and located in HFRS clustered areas with the highest RR value. 
It seemed that the PRE was not a seldom event in these areas since more than one PRE were detected. Our data, together with a number of other studies, suggested that homologous recombination events might be not an uncommon process in hantavirus natural evolution [31][32][33]. In lineage #2, some strains had substitution of asparagine to serine at position 214 of the N protein. All of them could only be detected from rodent hosts and located in areas with lower risk of HFRS occurrence. The relationship between the hantavirus lineages and the cluster areas could be observed in partial L-segment phylogenentic tree. When compared with previous study based on partial M segment, there was another lineage circulating in Beijing (30). Moreover, the relationship between the hantavirus lineages and the cluster areas could also be observed except that two strains distributed in areas with lower risk of HFRS (Dongcheng district) seemed fall into the lineage #1 (30). It should be noted that the captured sites of the two rodents were in a railway station and were closed with the clustered area A, which might be the reason for the two strains different from strains in areas with low risk of HFRS but clustered together with strains in lineage #1. Longer sequence such as complete S segment analysis of the two strains might be helpful to elucidate their phylogeny and genetic characteristic. Unfortunately, the limited amount of available biological materials from the two rodents hindered us to complete the task. Our findings, together with previous study, indicated that several lineage of SEOV were circulating simultaneously in Beijing. Furthermore, it suggested that genetic characteristics of SEOV might be associated with some risk areas of HFRS in Beijing. This correlation could be that different strains circulating in different areas were evolving at a different rate or different pattern due to environmental diversity, hence accumulated different mutations. On the other hand, mutations and recombination might play an important role for SEOV to adapt to hosts in an emerging endemic area such as Beijing. It seemed that the mutations at amino acid position 214 and 259 of the N protein didn't happened by chance, since the same mutations happened in more than one strains from several areas and these mutations seemed to consistent with the spatial cluster areas. Compared with other hantaviruses, the nucleotide sequences of SEOV seemed to be more conservative. The S genome segment nucleotide sequences identity of SEOV ranged from 93.7% to 99.5% and amino acid sequences identity ranged from 96.6% to 100%. Although there were some evidences that the two observed mutations located in regions with the important function to N protein for some hantaviruses [34][35][36][37], laboratorial data about the effect of the mutation to the biological function of S segment gene products were absent at present. This issue needs to be further evaluated. Supporting Information Alternative Language Abstract S1 Chinese abstract translated by author Shu-Qing Zuo.
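The pairwise nucleotide identities reported in the Results (for example, 97.4%–100% for the partial L sequences) are straightforward to compute once an alignment is available. The sketch below is illustrative only: the short aligned sequences are invented, not SEOV data, and identity is counted over positions where neither sequence has a gap.

```python
from itertools import combinations

# Invented aligned sequences (same length; '-' marks an alignment gap).
aligned = {
    "strainA": "ATGGCTA-CGTTAC",
    "strainB": "ATGGCTAACGTTAC",
    "strainC": "ATGACTA-CGTCAC",
}

def pairwise_identity(seq1, seq2):
    """Percent identity over ungapped, aligned positions."""
    compared = matches = 0
    for a, b in zip(seq1, seq2):
        if a == "-" or b == "-":
            continue
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

for (name1, s1), (name2, s2) in combinations(aligned.items(), 2):
    print(f"{name1} vs {name2}: {pairwise_identity(s1, s2):.1f}% identity")
```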
Vildagliptin, a DPP-4 Inhibitor, Attenuates Endothelial Dysfunction and Atherogenesis in Nondiabetic Apolipoprotein E-Deficient Mice. Dipeptidyl peptidase-4 (DPP-4) inhibitors are novel antidiabetic agents with possible vascular protection effects. Endothelial dysfunction is an initiation step in atherogenesis. The purpose of this study was to investigate whether vildagliptin (Vilda) attenuates the development of endothelial dysfunction and atherosclerotic lesions in nondiabetic apolipoprotein E-deficient (ApoE-/-) mice. Eight-week-old nondiabetic ApoE-/- mice fed a Western-type diet received Vilda (50 mg/kg/day) for 20 weeks or 8 weeks. After 20 weeks of treatment, Vilda administration reduced atherogenesis in the aortic arch as determined by en face Sudan IV staining compared with the vehicle group (P < 0.05). Vilda also reduced lipid accumulation (P < 0.05) and vascular cell adhesion molecule-1 (VCAM-1) expression (P < 0.05) and tended to decrease macrophage infiltration (P = 0.05) into atherosclerotic plaques compared with vehicle. After 8 weeks of treatment, endothelium-dependent vascular reactivity was examined. Vilda administration significantly attenuated the impairment of endothelial function in nondiabetic ApoE-/- mice compared with the vehicle group (P < 0.05). Vilda treatment did not alter metabolic parameters, including blood glucose level, in both study protocols. To investigate the mechanism, aortic segments obtained from wild-type mice were incubated with exendin-4 (Ex-4), a glucagon-like peptide-1 (GLP-1) analog, in the presence or absence of lipopolysaccharide (LPS). Ex-4 attenuated the impairment of endothelium-dependent vasodilation induced by LPS (P < 0.01). Furthermore, Ex-4 promoted phosphorylation of eNOS at Ser1177 which was decreased by LPS in human umbilical endothelial cells (P < 0.05). Vilda inhibited the development of endothelial dysfunction and prevented atherogenesis in nondiabetic ApoE-/- mice. Our results suggested that GLP-1-dependent amelioration of endothelial dysfunction is associated with the atheroprotective effects of Vilda. R ecent animal and clinical studies have documented that the cardiovascular protective properties of dipeptidyl peptidase-4 (DPP-4) inhibitors are independent of their antidiabetic action. [1][2][3][4] The fundamental role of DPP-4 inhibitors is elevation of glucagonlike peptide-1 (GLP-1) level, which promotes insulin secretion from the pancreas, 5) whereas the receptor for GLP-1 is widely expressed in many cell types, including vascular cells and macrophages, suggesting pleiotropic and cardioprotective effects of DPP-4 inhibitors beyond their blood glucose-lowering effect. 6,7) Previous studies, including our own, have demonstrated that administration of DPP-4 inhibitors reduced the development of atherosclerotic plaques in normoglycemic animal models, with no alteration of metabolic parameters, including blood glucose and lipid levels. [8][9][10][11][12] Several studies have also reported cardioprotective effects of DPP-4 inhibitors in clinical situations. 13,14) The underlying mechanisms are not fully understood; however, several studies have demonstrated that inhibition of pro-inflammatory activation of vascular cells by GLP-1 contributes to the cardioprotective effects of DPP-4 inhibitors. 8,11) Vildagliptin (Vilda), which is one of the most investigated DPP-4 inhibitors, shows a more stable glycemic control profile in diabetic patients compared with other members of this class. 
15,16) Previous studies have demonstrated that Vilda attenuated the progression of atherosclerosis in both diabetic and nondiabetic apolipoprotein Edeficient (ApoE −/− ) mice by reduction of pro-inflammatory activation of macrophages, an important cell type in atherogenesis. 11) Atherosclerosis is an inflammatory disease in which multiple cell types are involved. Vascular inflammation causes endothelial dysfunction, an initiator of atherosclerosis. 17) Endothelial dysfunction alters vascular responses, which stimulate the development of atherosclerosis. 18,19) Recent studies have suggested that endothelial dysfunction could be a therapeutic target for the inhibition of atherosclerotic disease. 20) However, the effects of Vilda on endothelial cell function have not been fully investigated. Therefore, in this study, we administered Vilda to nondiabetic ApoE −/− mice and examined its effects on endothelial cell function and atherogenesis. Our findings demonstrated that Vilda reduces the development of atherosclerosis and ameliorates endothelial dysfunction in this mouse model. The results of in vitro and ex vivo experiments suggested that protective properties on endothelial cells which depend on GLP-1 at least partially contribute to these effects. Methods Animals and drug administration: ApoE −/− (C57BL/6J background) mice were originally purchased from the Jackson Laboratory. Mice were maintained under a 12hour light/dark cycle. Vilda was supplied by Novartis Pharma. To examine the effect of Vilda on the development of atherosclerosis, male ApoE −/− mice were treated with Vilda 50 mg/kg/day from 8 weeks old by gavage for 20 weeks. To investigate the effect of Vilda on endothelial function at an earlier stage of atherosclerosis, the same dose of Vilda was administered to female ApoE −/− mice for 8 weeks. A Western-type diet (WTD) was started from 8 weeks old in both experiments. Vilda was dissolved in 0.5% carboxymethyl cellulose (CMC) solution. The control group received an equal volume of CMC. All experimental procedures conformed to the guidelines for animal experimentation of Tokushima University. The protocol was reviewed and approved by our institutional ethics committee. Blood pressure and laboratory data: Blood pressure (BP) of each mouse was measured using a tail-cuff system (BP-98A, Softron) as described in our previous paper. 21) The blood glucose level was measured from the tail vein using a glucometer (NIPRO StatStrip XP2, NIPRO) without fasting and with fasting in 8-week and 20-week treatments, respectively. At the time of sacrifice, blood was collected from the heart into EDTA-containing tubes. After blood samples were centrifuged, plasma was stored at −80 until required. Plasma total cholesterol, high density lipoprotein (HDL)-cholesterol, and triglyceride levels were measured at LSI Medience Corporation (Japan). Quantification of atherosclerotic lesions: The development of atherosclerotic lesions in the aorta was determined by Sudan IV staining as described previously. 22) In brief, mice were sacrificed with an overdose of pentobar-bital and perfused with 0.9% sodium chloride solution at a constant pressure. Both the heart and whole aorta were immediately removed. The thoracic aorta was opened longitudinally and fixed with 10% neutral buffered formalin. To quantify atherosclerotic lesions in the aortic arch, we performed en face Sudan IV staining. The percentage of Sudan IV-positive area in the aortic arch was calculated. 
Histological and immunohistochemical analysis: The heart was cut along a horizontal plane between the lower tips of the left and right atria. The upper portion was snap-frozen in OCT compound (Tissue-Tek). Then, the aortic root was sectioned serially (at 5-μm intervals) from the point where the aortic valves appeared to the ascending aorta until the valve cusps were no longer visible. These frozen sections of the aortic root were used for histological and immunohistochemical analyses. Sections were stained with oil red O to detect lipid deposition. Also, sections were incubated with anti-monocyte/macrophage marker (MOMA-2) antibody (BioRad), anti-intercellular adhesion molecule-1 (ICAM-1) antibody, and anti-vascular cell adhesion molecule-1 (VCAM-1) antibody (Abcam). Sections were then incubated with biotinylated secondary antibody (VECTOR Laboratories, Inc.), followed by VECTASTAIN ABC-AP Kit (VECTOR Laboratories, Inc.), and stained using a VectorRed AP Substrate Kit (VECTOR Laboratories, Inc.). All sections were counterstained with hematoxylin. The ratio of positive area to plaque area was calculated in three valve lesions in the aortic root and used for comparison. 22) Vascular reactivity assay: The descending thoracic aorta was cut into 2-mm rings with special care to preserve the endothelium and mounted in an organ bath filled with modified Krebs-Henseleit buffer (KHB; 118.4 mM NaCl, 4.7 mM KCl, 2.5 mM CaCl2, 1.2 mM KH2PO4, 1.2 mM MgSO4, 25 mM NaHCO3, 11.1 mM glucose) aerated with 95% O2 and 5% CO2 at 37°C. The preparations were attached to a force transducer, and isometric tension was recorded on a polygraph. Vessel rings were primed with 31.4 mM KCl and then precontracted with phenylephrine, producing submaximal (60% of maximum) contraction. After the plateau was attained, the rings were exposed to increasing concentrations of acetylcholine (Ach; 10^−9 to 10^−4 M) and sodium nitroprusside (SNP; 10^−9 to 10^−4 M) to obtain cumulative concentration-response curves. In some experiments, aortic segments were incubated with 10 ng/mL lipopolysaccharide (LPS) in the presence/absence of a GLP-1 analog, exendin-4 (Ex-4, Sigma-Aldrich), for 24 hours before analysis of vascular reactivity. Cell culture: Human umbilical vein endothelial cells (HUVEC) were purchased from Life Technologies and cultured in EGM-2 (Lonza). HUVEC (passages 4-6) were treated with 10 nM Ex-4 for 16 hours in EBM-2 containing 2% FBS and then stimulated with 10 ng/mL LPS for 30 minutes. Western blot analysis: Cell lysates were prepared using RIPA buffer (Wako Pure Chemical Industries, Ltd.) containing a protease inhibitor cocktail (Takara Bio Inc.) and phosphatase inhibitors (Roche Life Science). Proteins were separated by SDS-PAGE and transferred onto polyvinylidene difluoride membranes (Hybond-P; GE Healthcare). After blocking with 5% bovine serum albumin, the membranes were incubated with primary antibody against either phosphorylated-eNOS Ser1177, eNOS (BD Biosciences), or β-actin (Sigma-Aldrich) overnight at 4°C. After blots were washed, the membranes were incubated in horseradish peroxidase-conjugated secondary antibody (Cell Signaling Technology) for 1 hour. Antibody distribution was visualized with ECL-Plus reagent (GE Healthcare) using a luminescent image analyzer (LAS-1000, Fuji Film). Statistical analysis: All results are expressed as mean ± SEM. Comparison of parameters between two groups was performed using unpaired Student's t-test.
Comparisons of dose-response curves between groups were made by twofactor repeated measures ANOVA, followed by Tukey's post hoc test. A value of P < 0.05 was considered significant. Vilda inhibited the development of atherosclerosis in nondiabetic ApoE −/− mice: To examine the effect of Vilda on the progression of atherogenesis, ApoE −/− mice were treated with Vilda or vehicle for 20 weeks. Vilda attenuated atherosclerotic lesion progression in the aortic arch as determined by en face Sudan IV staining compared with vehicle (P < 0.05) (Figure 1). Administration of Vilda to nondiabetic ApoE −/− mice did not alter metabolic parameters, including blood glucose and lipid levels, as shown in Table I. The result of oil red O staining demonstrated that Vilda significantly reduced lipid deposition in atherosclerotic plaques (P < 0.05) (Figure 2A). The result of immunostaining demonstrated that Vilda significantly reduced VCAM-1 expression (P < 0.05) and tended to decrease macrophage accumulation (P = 0.05) in atherosclerotic plaques ( Figure 2B and C). Vilda improved endothelial function in nondiabetic ApoE −/− mice: To investigate the effect of Vilda on endothelial function in a nondiabetic condition, vascular response was examined in wild-type and ApoE −/− mice. After WTD feeding for 8 weeks, endothelium-dependent vasodilation in response to Ach was significantly impaired in ApoE −/− mice compared with that in age-and sexmatched wild-type mice. However, treatment with Vilda for 8 weeks significantly improved the impairment of endothelium-dependent vasodilation in ApoE −/− mice compared with vehicle administration (P < 0.05) ( Figure 3A). On the other hand, endothelium-independent vasorelaxation in response to SNP did not differ between the Vilda and vehicle groups ( Figure 3B). Metabolic parameters, including blood glucose level, did not differ between the Vilda-treated group and vehicle-treated group (Table II). Ex-4 attenuated endothelial dysfunction induced by LPS: To investigate whether increased GLP-1 level is associated with improvement of endothelium-dependent vascular function, Ach-induced vasorelaxation was examined using aortic rings obtained from wild-type mice. Inflammatory stimulation with LPS impaired vasorelaxation in response to Ach, although Ex-4, a GLP-1 analog, significantly ameliorated this response (P < 0.01) ( Figure 4A). Neither LPS nor Ex-4 affected endothelium-independent vasorelaxation in response to SNP ( Figure 4B). To investigate the underlying mechanism by which Ex-4 attenuated impairment of endothelium-dependent vasorelaxation induced by LPS, we examined the phosphorylation of eNOS Ser1177 in HUVEC. The results of western blotting demonstrated that phosphorylation of eNOS Ser1177 was promoted by the presence of Ex-4 in LPS-treated HUVEC (P < 0.05) ( Figure 5). Discussion In this study, we found that Vilda attenuated endothelial dysfunction and reduced atherosclerotic lesions in nondiabetic ApoE −/− mice. Vilda also reduced VCAM-1 expression and tended to decrease macrophage accumulation in atherosclerotic plaques. The results of an ex vivo experiment using aortic rings demonstrated that Ex-4, a GLP-1 analog, ameliorated endothelial dysfunction induced by LPS. Also, an in vitro experiment using HU-VEC showed that Ex-4 increased eNOS Ser1177 phosphorylation, which was deteriorated by LPS. Recent studies demonstrated protective effects of DPP-4 inhibitors on endothelial function. 
However, only a few studies have examined the effects of DPP-4 inhibitors on endothelial function and atherogenesis in a normoglycemic atherosclerotic mouse model. 23,24) Also, we demonstrated that GLP-1 attenuated endothelial dysfunction in HUVEC stimulated with LPS, which plays an important role in the process of atherogenesis. 25) The results of our study suggest that the elevated GLP-1 level caused by DPP-4 inhibition by Vilda contributes, at least partially, to the improvement of endothelial function and reduction of atherosclerotic lesion development. Previous studies have demonstrated that DPP-4 inhibitors attenuate atherogenesis in diabetic ApoE −/− mice. 11,26) Atherosclerosis is the most serious manifestation in patients with diabetes. Blood glucose-lowering treatment with DPP-4 inhibitors suppresses multiple cellular and molecular mechanisms that stimulate atherogenesis. In fact, several clinical studies reported that DPP-4 inhibitors, including Vilda, improved endothelial dysfunction, an initiation step in the development of atherosclerosis, in diabetic patients. 14,27) On the other hand, accumulating evidence suggests that DPP-4 inhibitors prevent atherogenesis independent of their blood glucose-lowering effect. [8][9][10][11][12] One of the underlying mechanisms is the suppression of inflammatory activation of immune cells, such as macrophages. 10,[28][29][30] In addition to the activation of inflammatory cells, endothelial dysfunction plays a pivotal role in the initiation of atherogenesis. 31) Therefore, in this study, we examined the effect of Vilda on the development of endothelial dysfunction and atherosclerosis in nondiabetic ApoE −/− mice. We found that Vilda-treated animals had reduced atherosclerotic lesions in the aortic arch, less accumulation of lipid and macrophages, and lower expression of inflammatory molecules such as VCAM-1 in atherosclerotic plaques compared with vehicle-treated mice. These results are consistent with previous studies which reported atheroprotective effects of this class of antidiabetic drug. Impaired endothelial function causes atherosclerosis. 20) Endothelium-derived nitric oxide (NO) plays a crucial role in vascular homeostasis, whereas reduction of production and/or bioavailability of NO contributes to the development of atherosclerosis. 32,33) NO is produced in the endothelium via nitric oxide synthase (NOS); atherosclerotic stimuli, such as hyperlipidemia and hyperglycemia, deteriorate this function. Recent studies have demonstrated that DPP-4 inhibitors increased NO production, leading to the improvement of endothelial function in human and animal studies. 14,27,34) Furthermore, the results of a clinical study which investigated the effect of sitagliptin, a DPP-4 inhibitor, on endothelial function in coronary artery disease and uncontrolled diabetic patients suggested that it ameliorated endothelial dysfunction without blood glucose alteration. 13) Also, previous studies have demonstrated that genetic deletion of eNOS by using eNOS-deficient mice or pharmacological blockade of eNOS by L-NG-nitroarginine methyl ester attenuated protective effects of DPP-4 inhibitors on endothelial cells, including vascular relaxation and blood flow recovery. [35][36][37] Therefore, the results of our present study suggested that Vilda improved endothelial cell function, especially in the early stages of atherosclerosis, by the activation of eNOS in nondiabetic animals independently of blood glucose.
In this study, we examined the effect of GLP-1 on endothelial function. Previous studies have demonstrated that GLP-1 has various protective effects on the endothelium. 23,[38][39][40] The results of our ex vivo experiments demonstrated that endothelial function was impaired by LPS in aortic segments isolated from wild-type mice, although Ex-4, a GLP-1 analog, ameliorated this response. Also, the results of our in vitro experiments using HUVEC demonstrated that Ex-4 ameliorated LPS-induced impairment of eNOS Ser1177 phosphorylation. Increased eNOS phosphorylation at Ser1177 suggests elevated NO production in endothelial cells. 32,33) Previous studies have reported that LPS decreases eNOS activation 41,42) by the activation of inflammatory signaling pathways, such as p38 MAPK and NF-κB. 43,44) On the other hand, Ex-4 or a GLP-1 analog inhibit the activation of these inflammatory signaling pathways in many cell types, including endothelial cells. [45][46][47][48] Taken together, Ex-4 protects eNOS phosphorylation by the downregulation of these inflammatory signaling induced by LPS. Because other studies have suggested various links between GLP-1 and eNOS, 49) further studies are needed. Thus, these results suggest that increased GLP-1 level contributed, at least in part, to the improvement of endothelial function. In conclusion, Vilda attenuated endothelial dysfunction and reduced atherosclerotic lesions in nondiabetic ApoE −/− mice without an alteration of the blood glucose level. This study increases the understanding of the mechanisms by which Vilda attenuates atherosclerosis. Effects of DPP-4 inhibitors independent of glucose lowering may provide an attractive therapeutic option for atherosclerosis.
2019-11-19T14:04:40.354Z
2019-11-15T00:00:00.000
{ "year": 2019, "sha1": "f663b02c5a0e988abfdf21906ad6f2bad7fb4c34", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/ihj/60/6/60_19-117/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "fa87f53c674ba1f1d431e41fadb0f9302ce6a076", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238249619
pes2o/s2orc
v3-fos-license
Polarization of ocean acoustic normal modes: In ocean acoustics, shallow water propagation is conveniently described using normal mode propagation. This article proposes a framework to describe the polarization of normal modes, as measured using a particle velocity sensor in the water column. To do so, the article introduces the Stokes parameters, a set of four real-valued quantities widely used to describe polarization properties in wave physics, notably for light. Stokes parameters of acoustic normal modes are theoretically derived, and a signal processing framework to estimate them is introduced. The concept of the polarization spectrogram, which enables the visualization of the Stokes parameters using data from a single vector sensor, is also introduced. The whole framework is illustrated on simulated data as well as on experimental data collected during the 2017 Seabed Characterization Experiment. By introducing the Stokes framework used in many other fields, the article opens the door to a large set of methods developed and used in other contexts but largely ignored in ocean acoustics. © 2021 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1121/10.0006108 I. INTRODUCTION Sound can be described with thermodynamic variables, the most common being pressure p, or kinematic flow variables such as particle acceleration a and velocity v. All of these continuous field variables represent, in the linear acoustic regime, small perturbations from their background values. For example, in a static fluid (a common approximation for sea water), p and v are related through the linearized Euler equation, ∂v/∂t = −(1/ρ)∇p, (1) with ρ the ocean density and t the time. Although the ocean acoustics literature tends to focus on the pressure p, which is clearly easier to measure and encumbered with fewer systematic errors, experimental techniques for inclusion of vector acoustic fields exist (e.g., Dahl and Dall'Osto, 2020a; D'Spain et al., 1991; Gray et al., 2016; Shchurov et al., 2011). Along with defining sound energy flow for which p and v are in phase (active intensity), particle velocity notably carries information about the source and the environment and thus can be used as input for source localization (Hawkes and Nehorai, 2003; Thode et al., 2010) and/or environmental estimation (Bonnel et al., 2019a; Dahl and Dall'Osto, 2020b; Dall'Osto et al., 2012; Ren and Hermand, 2012; Shi et al., 2019). Particle velocity also emerges as an important field for fishes and crustaceans, given their use of v either exclusively or possibly in addition to p in auditory and environmental sensing processes (Popper and Hawkins, 2018). The aim of the current article is to provide a physical and signal processing framework adapted to the study of particle velocity, with a specific focus on low-frequency (f < 500 Hz) propagation in shallow water (depth D < 200 m). In this context, the propagation is described by normal mode theory: the environment acts as a dispersive waveguide. The sound field is conveniently described as a sum of (dispersive) modes, each mode propagating with its own frequency-dependent group velocity [Jensen et al. (2011), Chap.
5].A common assumption associated with normal mode propagation is that the waveguide depends exclusively on range r and depth z (it is rotationally invariant around the source).As a result, the particle velocity can be seen as a bi-dimensional vector and can be expressed using its horizontal and vertical components: v ¼ ½v r ; v z T .Note that the vector field is usually measured by a three-dimensional (3D) sensor with a vertical (v z ) and two horizontal components (usually denoted v x and v y ).The latter (v x , v y ) can be projected onto the source/receiver axis to create v r . In this context, the particle velocity defines a bivariate signal (Flamant, 2018), where the interrelations between horizontal and vertical components encode relevant information about the oceanic medium.Unfortunately, conventional approaches to process bivariate signals such as rotary components (Gonella, 1972;Mooers, 1973) or classic bivariate analysis (Priestley, 1981) do not allow for straightforward interpretation of their geometric and physical properties.To circumvent this issue, a general framework for the analysis and processing of bivariate signals has been recently proposed (Flamant et al., 2017(Flamant et al., , 2019)).It unravels the key role played by the physical notion of polarization, usually encountered in optics (Born and Wolf, 1980) and radar (Lee and Pottier, 2017), to describe and understand the inner properties of bivariate signals.This framework also establishes the general relevance of Stokes parameters (Stokes, 1851)-a set of four real-valued observables widely used to describe polarization properties of light (Rubin et al., 2019;Schaefer et al., 2007;Stenflo, 2013)-for bivariate signals.Their extension to the case of elastic waves (denoted elastodynamic Stokes parameters) was proposed (Turner and Weaver, 1994) for the study of multiple scattering of ultrasounds in elastic media, but they were never used, to the best of our knowledge, for the description of waterborne sound particle velocity.Importantly, Stokes parameters are energetic quantities, meaning that they can be easily estimated from experimental bivariate signal measurements. In ocean acoustics, sound polarization has mostly been described through complex intensity I ¼ pv à (Dahl and Dall'Osto, 2020a;Dall'Osto et al., 2012;D'Spain et al., 1991).Here, we propose to study the polarization of normal modes using the Stokes framework.Stokes parameters are directly derived from v, such that experimental measures do not require a coherent and collocated measure of p and v. Insightful polarization properties of normal modes are introduced.It is notably demonstrated that the polarization of individual modes does not depend on source position.It is also shown that the angle of the main axis of the polarization ellipse is proportional to modal attenuation; in a lossless waveguide, the polarization ellipse is horizontal.Basic tools to assess modal polarization, both in the frequency and in the time-frequency (TF) domains, are introduced: Stokes parameters can be easily derived from standard signal processing quantities [Fourier transforms (FTs) and spectrograms].The proposed framework is illustrated on simulated as well as experimental particle velocity data, collected at sea during the 2017 Seabed Characterization Experiment (SBCEX17) (Wilson et al., 2020). 
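As noted above, a 3D vector sensor returns v_x, v_y, and v_z, and the two horizontal channels must be projected onto the source/receiver axis to obtain the radial component v_r used throughout this article. A minimal Python sketch of that projection is given below; it assumes the source bearing is known in the sensor's x–y frame and does not reproduce any particular instrument's axis convention.

```python
import numpy as np

def radial_component(vx, vy, bearing_rad):
    """Project two horizontal particle-velocity channels onto the
    source/receiver (radial) axis.

    bearing_rad is the angle of the source direction measured in the
    sensor's x-y frame (an assumed convention; a real deployment must
    account for the instrument's own axis orientation and any compass
    offset).
    """
    return np.cos(bearing_rad) * np.asarray(vx) + np.sin(bearing_rad) * np.asarray(vy)
```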
The remainder of the article is organized as follows.Section II presents the physical background required for the article.It overviews the basics of normal mode propagation, with the main focus on particle velocity that follows the development in Dahl and Dall'Osto (2020a) prior to introducing the concept of the bivariate signal.Section III covers the signal processing background required for this article.It presents the bivariate signal context, overviews polarization concepts including the polarization ellipse, and introduces the Stokes parameters.Section IV constitutes the heart of the article.It makes the link between Secs.II and III.Modal Stokes parameters are derived and illustrated on a simulated scenario that mimics SBCEX17.Section V introduces the concept of polarization spectrograms, which extend the Stokes parameter to the TF domain and enable the visualization of modal Stokes parameters using a single vector sensor and an impulsive source.Polarization spectrograms for modal propagation are illustrated on simulated data as well as on SBCEX17 experimental data in Sec.VI.The article is concluded in Sec.VII, which provides a summary and discusses future opportunities.Two appendixes supplement the article.Appendix A presents a comparison between the proposed Stokes framework and vector metrics recently defined in Dahl and Dall'Osto (2020a).Appendix B provides an experimental observation of the degree of polarization (i.e., polarization variability). II. PARTICLE VELOCITY AND NORMAL MODES Low-frequency acoustic propagation in shallow water is conveniently described using normal mode theory.Given a broadband source signal Xðf Þ emitted at depth z s in a rangeindependent waveguide, the pressure field p received at depth z after propagation of a range r is given by [Jensen et al. (2011) where M is the number of propagating modes, W n and k n are, respectively, the modal depth function and the horizontal wavenumber of mode n, Q ¼ e jp=4 = ffiffiffiffiffi ffi 8p p qðz s Þ is a factor depending only on the density qðz s Þ at depth z s , and j 2 ¼ À1.In general, the wavenumber k n is a complex number and can be written as k n ¼ k ðrÞ n À jb n , with k ðrÞ n and b n the real and imaginary parts of k n .In a lossless waveguide (i.e., with no attenuation), the modal wavenumber is real: n and b n ¼ 0. To simplify notations, we define so that the pressure associated with mode n is The modal particle velocity is obtained by combining Eqs.(1) and (4).The horizontal component of particle velocity for mode n is while its vertical component is In the frequency domain, the particle velocity vector associated with mode n is thus One obtains the modal particle velocity in the time domain, v n ðtÞ, using an inverse FT.Formally, where F is the FT operator. As stated previously, v(t)¼ Rv n ðtÞ is said to be a bivariate signal, with components made up as a linear combination of modes. 
Figure 1 illustrates the interpretation of the bivariate signal as a time-evolving vector.Figure 1(a) depicts a bivariate monochromatic signal, the elementary brick underpinning our analysis.Such a signal is said to be elliptically polarized, as its time-evolving trace in the two-dimensional (2D) (v r , v z ) plane is an ellipse, whose shape is governed by the interrelations between the amplitudes and phases of radial and vertical harmonic components.Figure 1(b) gives an example of a more sophisticated narrowband bivariate signal.Although the signal is still narrowband, its polarization is no longer constant: the signal's trace in the 2D (v r , v z ) plane is now an ellipse with changing shape, size, and orientation.This is due to complex relationships between the amplitudes and phases of the signal's components.The main goal of bivariate signal analysis is to extract polarization information from such signals.The Stokes parameter framework presented in this paper is a convenient and physics-based solution to this problem. III. BIVARIATE SIGNAL DESCRIPTION This section provides a short introduction to the framework for bivariate signal processing introduced and developed in Flamant (2018).Unlike existing approaches, this new framework enables straightforward interpretations of well-established signal processing tools in terms of the physical concept of (wave) polarization.Notably, Stokes parameters, a set of four real-valued energetic parameters widely used in polarization optics (Born and Wolf, 1980), lie at the core of the framework.The numerous physical interpretations enabled by the framework greatly ease the design, analysis, and processing of bivariate signals. Here, for brevity, we only review the necessary ingredients and results of the framework, and we refer to the original papers for further details (Flamant et al., 2017(Flamant et al., , 2018(Flamant et al., , 2019)). A. The geometry of monochromatic bivariate signals Let us consider first the most elementary bivariate signal, i.e., a monochromatic bivariate signal qðtÞ of frequency f.This vector signal writes the following way: with a x ; a y !0 and u x ; u y 2 ½0; 2pÞ the amplitude and phase of x and y, respectively.The two components x and y of q are univariate monochromatic signals with the same frequency f, but they may have different amplitude and phase.As a result, the full description of qðtÞ in (8) requires four parameters: the amplitudes a x , a y , and phases u x , u y of the two components.The interrelations between these quantities precisely govern the geometric properties of the signal qðtÞ. Figure 2(a) depicts the ellipse drawn over time by the bivariate signal q(t) in the x-y 2D plane.The ellipse trajectory is described by four real-valued parameters: • the amplitude or size j !0 of the ellipse; • the phase at origin u 2 ½0; 2pÞ; • the ellipse orientation h 2 ½Àp=2; p=2, representing the angle between the main axis of the ellipse and the horizontal axis; • the ellipticity angle v 2 ½Àp=4; p=4, which characterizes the shape of the ellipse as well as the rotation direction in the ellipse: counterclockwise if v !0 and clockwise when v < 0. 
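To make the geometry above concrete, the short Python sketch below builds a monochromatic bivariate signal from assumed component amplitudes and phases and traces it in the 2D plane; all numerical values are arbitrary and only illustrate how the amplitude ratio and phase difference set the ellipse.

```python
import numpy as np

# Assumed, arbitrary parameters of a monochromatic bivariate signal
f = 50.0                       # frequency (Hz)
a_x, a_y = 1.0, 0.4            # component amplitudes
phi_x, phi_y = 0.0, np.pi / 3  # component phases (rad)

t = np.linspace(0.0, 0.1, 2000)              # 100 ms of signal
x = a_x * np.cos(2 * np.pi * f * t + phi_x)  # horizontal component
y = a_y * np.cos(2 * np.pi * f * t + phi_y)  # vertical component

# The trace (x(t), y(t)) is an ellipse whose orientation and shape are set
# by the amplitude ratio a_y/a_x and the phase difference phi_y - phi_x:
# equal amplitudes with a 90-degree phase difference give a circle
# (circular polarization), a zero phase difference gives a line segment
# (linear polarization), and intermediate cases give a tilted ellipse.
```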
The first two parameters are classical as they correspond to the standard notion of amplitude and phase.The two remaining parameters are geometric and encode the polarization of the bivariate monochromatic signal.In particular, when v ¼ 0, the ellipse becomes a line segment: we say that qðtÞ is linearly polarized.For v ¼ 6p=4, the ellipse degenerates into a circle, so that qðtÞ is said to be circularly polarized.Moreover, the signal is said to be horizontally polarized when h ¼ 0, whereas if h ¼ 6p=2 it is said to be vertically polarized. The canonical parameters ½j; u; h; v neatly encode the trajectory of the bivariate monochromatic bivariate signal qðtÞ.In other terms, they allow for a joint description of the properties of the univariate components x and y.In particular, we can rewrite qðtÞ in Eq. ( 8) directly in terms of j; u; h; v as (Flamant et al., 2019), We can also express the canonical parameters in terms of amplitudes a x , a y and phases u x ; u y as when a x 6 ¼ a y .The case a x ¼ a y corresponds to circular polarization: here v ¼ 6p=4 and h is undefined. B. Stokes parameters and the Poincar e sphere To describe polarization, a popular alternative to the canonical geometric parameters introduced above consists of the Stokes parameters denoted by S 0 ; S 1 ; S 2 ; S 3 .These four real-valued parameters benefit from being easily measured experimentally from intensity measurement, which made them particularly convenient for applications in optics (Rubin et al., 2019;Schaefer et al., 2007;Stenflo, 2013). Considering again the bivariate monochromatic signal qðtÞ given in Eq. ( 8), Stokes parameters can be expressed in terms of the ellipse parameters as (Flamant, 2018). where U 2 ½0; 1 is the degree of polarization of the signal, further discussed below.First, note that expressions ( 14)-( 17) do not include the overall phase u, since Stokes parameters are energetic quantities, which, by definition, are insensitive to a global phase shift. For the sake of generality, Eqs. ( 15)-( 17) include the degree of polarization U, a statistical measure of the dispersion of the polarization ellipse across multiple realizations (Flamant et al., 2017) (see Fig. 3).If one assumes that the bivariate signal is generated by a stochastic (i.e., random) process, then the degree of polarization is used to quantify the stability of the polarization ellipse from one realization to another.When U ¼ 1, the polarization ellipse is always strictly the same, and the signal is said to be fully polarized.When U ¼ 0, the ellipse is fully random, and the signal is unpolarized.In intermediate cases (0 < U < 1), the signal is said to be partially polarized. Note that deterministic signals, such as qðtÞ given by ( 8), are fully polarized signals (U ¼ 1).Indeed, all the realizations of a deterministic signal are strictly identical; therefore, the polarization ellipse is the same for all the realizations.In this paper, we assume that bivariate signals are fully polarized unless stated otherwise.Section VII discusses briefly the case of partial polarization, and an example from field observations is given in Appendix B. Stokes parameters are homogeneous to intensities, such that S 0 describes pure energetic information while the three remaining parameters S 1 , S 2 , and S 3 give geometric polarization properties in intensity units.It is convenient to remove the intensity dependence in S 1 ; S 2 ; S 3 by defining normalized Stokes parameters as Thanks to the Poincar e sphere of polarization states shown in Fig. 
2(b), normalized Stokes parameters have a natural interpretation as the Cartesian coordinates of the vector described by spherical coordinates ðU; 2h; 2vÞ.Each point on the sphere is associated with a single polarization state.Thus, one can easily switch from one representation to another: for instance, the amplitude a, orientation h, ellipticity v, and degree of polarization U are obtained from the Stokes parameters using C. Spectral Stokes parameters for general bivariate signals Until now, we have only considered the definition of Stokes parameters for a single bivariate monochromatic signal of frequency f.To generalize the Stokes formalism to generic, broadband bivariate signals, one needs to define frequency-dependent Stokes parameters, as explained below. Spectral Stokes parameters simultaneously describe the energetic and polarization properties of a bivariate signal with respect to frequency.They are defined easily in terms of power spectral densities (PSDs).Let P xx ðf Þ be the PSD of x(t), P yy ðf Þ the PSD of y(t), and P xy ðf Þ the cross-spectral density (CSD) between x(t) and y(t).Note that in practice, for a deterministic signal with a finite support (length T), the PSDs are trivially obtained using the FT.Let Xðf Þ ¼ FxðtÞ and Yðf Þ ¼ FyðtÞ.the PSD and CSD can always be defined as the FT of the autocorrelations and cross-correlations of the univariate signals x(t) and y(t). 1 Spectral Stokes parameters are then defined as (Flamant et al., 2017;Schreier and Scharf, 2010) where <½: and =½: stand for real and imaginary parts.Just as before, for frequencies such that S 0 ðf Þ 6 ¼ 0, one defines the normalized Stokes parameters, with values between -1 and 1, as A. Theory This section makes a link between the polarization parameters (Sec.III) and the modal propagation (Sec.II).The propagation environment is assumed to be known and non-fluctuating, and the modes are noiseless.As a result, considered signals are deterministic: the modes are fully polarized (U ¼ 1).The context of partially polarized modes, which arises when the received signal is noisy and/or when the propagation environment is fluctuating, will be discussed in Sec.VII and in Appendix B. For a given mode n, we make a link between the particle velocity [Eq.( 7)] and the vector model for bivariate signals [Eq (8)].This link is done through xðtÞ ¼ v r n ðtÞ and yðtÞ ¼ v z n ðtÞ.In the frequency domain, assuming a particle velocity signal of length T, One can easily show that The normalized Stokes parameters are thus Equations ( 34)-( 36) are important results that characterize the modal polarization.First of all, it is reassuring to see that the (normalized) Stokes parameters are real numbers and that they do not depend on the signal length T. Also, they depend on the environment (through k n and W n ) and on the receiver depth z.However, they do not depend on the range r or on the source depth z s .In other words, for a given receiver position, they are fully independent from the source position.This important behavior is obtained thanks to the normalization process (division by S 0 ).Remember that the results presented here have been derived in a rangeindependent waveguide.The derivation of the Stokes parameters for range-dependent waveguides, particularly when mode coupling occurs, is an exciting research question, but it is beyond the scope of this paper. 
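Because the spectral Stokes parameters of Sec. III C are built from auto- and cross-spectra, they can be estimated with a few lines of standard code. The sketch below uses plain periodograms and one common sign convention (S0 = Pxx + Pyy, S1 = Pxx − Pyy, S2 = 2 Re Pxy, S3 = −2 Im Pxy); the exact convention, and the use of a better estimator such as multitaper averaging, should be checked against the reference equations.

```python
import numpy as np

def spectral_stokes(x, y, fs):
    """Spectral Stokes parameters of a bivariate signal (x, y).

    Illustrative sketch using plain periodograms; the sign convention for
    S2/S3 below is one common choice and may differ from the convention
    used in the text.
    """
    n = len(x)
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    Pxx = np.abs(X) ** 2 / (fs * n)     # auto-spectrum of x
    Pyy = np.abs(Y) ** 2 / (fs * n)     # auto-spectrum of y
    Pxy = X * np.conj(Y) / (fs * n)     # cross-spectrum

    S0 = Pxx + Pyy
    S1 = Pxx - Pyy
    S2 = 2.0 * np.real(Pxy)
    S3 = -2.0 * np.imag(Pxy)            # sign convention may differ

    # Normalized Stokes parameters (avoid division by near-zero bins)
    floor = 1e-3 * S0.max()
    S0f = np.maximum(S0, floor)
    return freqs, S0, S1 / S0f, S2 / S0f, S3 / S0f
```

For a deterministic, fully polarized signal, sqrt(s1^2 + s2^2 + s3^2) is close to 1 at the frequencies that carry energy, which is a useful sanity check on an implementation.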
Finally, remember that in the specific case of a lossless waveguide, k_n = k_n^(r), so that P_xy(f) is purely imaginary. This indicates that the spectral components of x and y are in phase quadrature (90° phase shift). Further, in the same scenario, β_m = 0 leads to s2 = 0. Using (20), this implies that θ = 0 and the major axis of the polarization ellipse is horizontal. Equivalently, this also means that its polarization state is located on the prime meridian of the Poincaré sphere (Fig. 2). In a lossless waveguide, the particle velocity of individual normal modes is polarized horizontally.² In a general waveguide with attenuation, s2 is directly proportional to β_m. B. Example This section illustrates the Stokes parameters on a simulated scenario that mimics SBCEX17 (Wilson et al., 2020). The experiment took place on the "New England Mud Patch," about 100 km south of Cape Cod, Massachusetts. A specificity of the environment is that the seabed features a first, thick layer of mud over more consolidated sediments. In this article, a notional geoacoustic model of the environment is considered. This specific simulated scenario has been chosen to reproduce the conditions of a particle velocity study by Dahl and Dall'Osto (2020a). In their article, Dahl and Dall'Osto derived four quantities from coherent combinations between pressure and particle velocity. Although they are based on the complex intensity I = pv* rather than on just v, those quantities are closely related to the modal polarization. A thorough comparison between these quantities and the Stokes parameters is presented in Appendix A. The simulated Stokes parameters for the first five modes are shown in Fig. 4 (continuous lines). The figure shows S0 and the three normalized Stokes parameters (s1, s2, s3) for frequencies between 0 and 200 Hz. One clearly sees the effect of modal dispersion on the Stokes parameters: they depend both on mode number and frequency. Remember that s2(f) ∝ β_m was an important theoretical result from Sec. IV A. Since β_m is typically a very small number, so is s2. Note here that the vertical scale for s2 is different from the one used for s1 and s3. To illustrate the polarization sensitivity to geoacoustic parameters, the Stokes parameters are also computed in an environment where the mud layer is replaced by sand, which follows an example given in Dahl and Dall'Osto (2020a). The only difference from the previous simulation is that c_mud^TOP = c_mud^BOT = 1600 m/s. The resulting data are also shown in Fig. 4 (dotted lines), and these differ markedly from the previous dataset. This suggests that the Stokes parameters have a high sensitivity to the environment and may be good inputs for geoacoustic inversion. This is confirmed by looking at the group speed associated with the two different environments, also plotted in Fig. 4. Group speeds are usually derived from pressure data and are known to be good input data for environmental inversion (e.g., Bonnel et al., 2021; Potty et al., 2000). However, Stokes parameters are clearly more sensitive to environmental changes. Provided they can be correctly estimated from particle velocity data, the Stokes parameters should become excellent input for geoacoustic inversion.
FIG. 4. (Color online) Stokes parameters and group velocities simulated in an environment with a mud layer (continuous lines) or a sand layer (dotted lines). Note that the vertical scale goes from −1 to 1 for s1 and s3, while it goes from −10^−3 to 10^−3 for s2.
V.
POLARIZATION SPECTROGRAMS Figure 4 shows that the modal polarization is frequency-dependent.This, obviously, is intrinsically related to modal dispersion.In the following, we use TF analysis to study modal dispersion.Indeed, spectrograms are commonly used to visualize and estimate the dispersion of individual modes with application for geoacoustic inversions (Ballard et al., 2014;Bonnel et al., 2019a;Potty and Miller, 2020) or marine mammal localization (Bonnel et al., 2014;Thode et al., 2017). One may thus wonder if the modal separation provided by TF analysis could be helpful to assess the modal polarization.The recent paper by Dahl and Dall'Osto (2020a) (mentioned in Sec.IV B and detailed in Appendix A) suggests that TF analysis is indeed a good tool to study polarizationrelated properties of modes.In this section, the concept of the polarization spectrogram, based on the Stokes framework presented above, is introduced and illustrated on simulated data.An experimental example will be provided in Sec.VI.The current section shows that the polarization spectrograms of particle velocity data, as measured by a single vector sensor, allow the visualization of modal polarization properties of individual modes. A. Theory The Stokes framework, presented above, has been derived for monochromatic signals.However, it can easily be extended to study non-stationary deterministic signals in the TF domain.Indeed, one can build polarization spectrograms showing the TF distribution of the (normalized) Stokes parameters. The polarization spectrogram theory is fully described in Flamant et al. (2019).In practice, polarization spectrograms can be computed very simply.The procedure is similar to what is presented in Sec.III C, except that the FT operator F needs to be replaced by a short-time FT operator.The TF Stokes parameter S 0 ðt; f Þ is thus obtained by summing the spectrograms of the vertical and horizontal particle velocity, while S 1 ðt; f Þ is obtained by computing their difference.The parameters S 2 ðt; f Þ and S 3 ðt; f Þ are obtained by evaluating the real and imaginary part of the crossspectrograms.Normalized TF Stokes parameters s 1 ðt; f Þ; s 2 ðt; f Þ and s 3 ðt; f Þ] are obtained through normalization by S 0 ðt; f Þ.The four TF Stokes parameters S 0 ðt; f Þ, s 1 ðt; f Þ, s 2 ðt; f Þ, and s 3 ðt; f Þ will now be called "polarization spectrograms."Polarization spectrograms can be trivially computed in most programming languages using any offthe-shelf TF toolbox.Note that the normalization by S 0 ðt; f Þ may be problematic if S 0 ðt; f Þ ' 0. This issue is easily solved by setting a lower threshold on S 0 ðt; f Þ.To do so, one finds all the TF points such that S 0 ðt; f Þ < and then sets S 0 ðt; f Þ ¼ for all those points.The parameter is typically fixed as a small percentage (typically 0.1% or less) of the total energy of the signal and/or the maximum value attained by S 0 in the TF plane.Note that a PYTHON toolbox dedicated to the TF analysis of bivariate signals is available with Flamant et al. (2019). 
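Since the text spells out the recipe (sum and difference of the two spectrograms, real and imaginary parts of the cross-spectrogram, normalization by a thresholded S0(t, f)), a minimal implementation is short. The Python sketch below uses SciPy's STFT; the window length and the threshold fraction are illustrative choices, and the S2/S3 sign convention may differ from the one used in the reference toolbox.

```python
import numpy as np
from scipy.signal import stft

def polarization_spectrograms(vr, vz, fs, nperseg=256, eps_frac=1e-3):
    """Time-frequency Stokes parameters ("polarization spectrograms").

    Sketch following the procedure described in the text: S0 is the sum of
    the two spectrograms, S1 their difference, and S2/S3 the real and
    imaginary parts of the cross-spectrogram (sign convention to be
    checked). eps_frac sets the lower threshold on S0 before normalization.
    """
    f, t, Zr = stft(vr, fs=fs, nperseg=nperseg)
    _, _, Zz = stft(vz, fs=fs, nperseg=nperseg)

    S0 = np.abs(Zr) ** 2 + np.abs(Zz) ** 2
    S1 = np.abs(Zr) ** 2 - np.abs(Zz) ** 2
    C = Zr * np.conj(Zz)                 # cross-spectrogram
    S2 = 2.0 * np.real(C)
    S3 = -2.0 * np.imag(C)               # sign convention may differ

    S0 = np.maximum(S0, eps_frac * S0.max())   # threshold to avoid division by ~0
    return f, t, S0, S1 / S0, S2 / S0, S3 / S0
```

Plotting S0(t, f) with a sequential colormap and s1, s2, s3 with a diverging one reproduces the layout of the polarization spectrogram figures.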
An important property of polarization spectrograms is that, for a single-component noise-free signal and no interference, the polarization spectrogram values at the signal's TF location (i.e., the ridge) exactly give the values of the underlying Stokes parameter (Flamant et al., 2019).In more complex scenarios, if the polarization spectrograms are not overly contaminated by interference and/or noise, this property is still approximately true.Note that this behavior is fully similar to that of traditional spectrograms, which give the signal's energy along the TF ridge (up to a constant multiplicative factor that depends on spectrogram parameters). As a reminder, in our modal propagation context, the theoretical TF location of mode m with frequency f is given by (Bonnel et al., 2020) with r the source/receiver range, v m ðf Þ the group speed of mode m, and t s ðf Þ the emission time of frequency f by the source.If the source is impulsive, all the frequencies are emitted at the same time, and t s ðf Þ ¼ t 0 is constant.Note that modal separation in the TF domain increases when range r increases. B. Example The simulated scenario from Sec. IV B is now used to illustrate polarization spectrograms.Simulated time series are computed using the first five propagating modes, assuming a perfectly impulsive source with frequencies between 0 and 200 Hz.Polarization spectrograms are computed using the procedure explained in Sec.V A. The threshold is chosen as 0.1% of the maximum of S 0 ðt; f Þ.The obtained polarization spectrograms are presented in Fig. 5.To facilitate reading, the modal TF locations are shown as black curves on all the spectrograms. The polarization spectrogram S 0 ðt; f Þ shows the TF distribution of the bivariate signal's energy.It can be interpreted as a traditional spectrogram for a univariate signal.Here, the simulated range is large enough for S 0 ðt; f Þ to show cleanly separated modes.The other polarization spectrograms s 1 ðt; f Þ, s 2 ðt; f Þ, and s 3 ðt; f Þ show the TF distribution of the normalized Stokes parameters.They enable the evaluation of individual mode polarization properties.From a signal processing point of view, the polarization properties fully encode the interdependencies between amplitudes and phases of the vector field, and the polarization spectrograms enable doing so for individual modes.The polarization spectrogram s 1 ðt; f Þ shows that s 1 is larger than 0.5 for all modes at all frequencies, with the notable exception of mode 5, for which s 1 ðt; f Þ drastically drops for frequencies below 150 Hz.This behavior is fully consistent with normalized Stokes parameters computed for individual modes, as shown in the top-right panel of Fig. 4. From Fig. 4, note that negative values of s 1 ðf Þ are predicted for mode 5 and at frequencies between ' 75 and 100 Hz; this is not visible on s 1 ðt; f Þ (Fig. 5) because this frequency band is highly attenuated. The polarization spectrogram s 2 ðt; f Þ is quite puzzling at first look.Although we know js 2 j ( 1 (see Fig. 
4), s 2 ðt; f Þ clearly shows high values between modes.This is due to interference resulting from the TF uncertainty, which spreads the modes outside of their theoretical locations (i.e., this interference is an artifact from the TF processing).However, for frequencies higher than ' 75 Hz, one sees that s 2 ðt; f Þ is indeed very small along the modal TF position.On the other hand, for frequencies less than ' 75 Hz, s 2 ðt; f Þ is relatively large even at the modal TF position.This is because, at those frequencies, actual interference between modes exists.This is shown in Fig. 5 by the black curves that cross each other.Physically, the Airy phase of mode m 0 interferes with modes m, with m < m 0 .Note that the acoustic energy associated with the modal Airy phase is highly attenuated so that this behavior is unlikely to be resolved on most noisy and/or experimental data. Finally, the polarization spectrogram s 3 ðt; f Þ shows clear polarization difference between modes.As an example, s 3 ðt; f Þ < 0 for mode 1 at all frequencies, while s 3 ðt; f Þ > 0 for modes 3-5.More interestingly, s 3 ðt; f Þ changes sign for mode 2 around 50 Hz.This detailed behavior is fully consistent with s 3 ðf Þ, as computed for individual modes (see the bottom-right panel of Fig. 4). VI. EXPERIMENTAL APPLICATION This section presents an experimental application of the Stokes parameter framework for data collected during SBCEX17. A. SBCEX17 SBCEX17 was a multi-institutional, multi-ship, multidisciplinary effort that took place on the New England Mud Patch, about 110 km south of Cape Cod, in March/April 2017.Its main objective was to advance understanding of the acoustic properties of fine-grained sediments with clay, i.e., mud.An overview of SBCEX17 is provided in Wilson et al. (2020). During SBCEX17, multiple acoustic receivers and sources were deployed, covering frequencies from about 10 Hz to 10 kHz.Of particular interest here is the Intensity Vector Autonomous Recorder (IVAR) deployed by the Applied Physics Laboratory, University of Washington (Dahl and Dall'Osto, 2020a) at location (40.48655 N;70.63831 W).IVAR is a bottom moored vector sensor: it records the 3D particle velocity vector field at a given location in the water column, $1 m above the seafloor.The blue/green/yellow color scale is valid only for S 0 , which has been arbitrarily normalized so that its maximum value is 1.The blue/red color scale is valid for the normalized Stokes parameters s 1 , s 2 , and s 3 .The time axis origin is arbitrary but is the same for all the panels.https://doi.org/10.1121/10.0006108 In this article, we consider an excerpt of IVAR data that contains the signal from a distant combustive sound source (CSS) deployed by the Applied Research Laboratory, University of Texas.The specific CSS transmission occurred on March 18 around 16:19 UTC at a way-point called "station 35," located at (40.49881 N; -70.45240W), $16 km away from IVAR.Three shots were successively emitted at depth z s ' 20 m; this paper considers only the first.The CSS signal is a low-frequency high-energy impulse, followed by several weaker replicas called "bubble pulses" (McNeese et al., 2014).The source signal was measured during the experiment using a hydrophone hard-mounted on the CSS deployment frame, $1 m away from the center of the source chamber.This signal will be used to perform source deconvolution. B. 
Data As stated in Sec.VI A, IVAR measures the 3D particle velocity vector field.More specifically, IVAR measures two horizontal velocity components and a vertical one.To build the bivariate signal vðtÞ ¼ ½v r ðtÞ; v z ðtÞ T , it is required to project the two IVAR horizontal components into a single v r ðtÞ, with the r axis pointing toward the source.This is done using simple geometrical rules, as explained in Eq. ( 12) of Dahl and Dall'Osto (2020a). The received signal is further processed using a band stop filter between 86 and 88 Hz to remove contamination from internal interference (note that a high-pass filter starting at 25 Hz is also embedded in the recording system to prevent signal saturation).Last, source deconvolution is performed using the simple "water level" method (Clayton and Wiggins, 1976).The method description and its application to SBCEX17 CSS data are presented in depth in Bonnel et al. (2020), along with data examples and companion MATLAB code. The preprocessing chain (horizontal projection, filtering, and source deconvolution) leads to an experimental bivariate signal vðtÞ ¼ ½v r ðtÞ; v z ðtÞ T that represents the waveguide impulse response [i.e., as if Xðf Þ ¼ 1].This signal is presented in Fig. 6.Since the signal contains several modes, its time-domain representation is not legible.Still, one notes that the peak value of v z ðtÞ is roughly an order of magnitude smaller than the peak value of v r ðtÞ.This leads to a 2D trace in the r-z domain that looks like a horizontal elongated ellipse.This will be further quantified, on an individual mode basis, using polarization spectrograms. C. Polarization spectrograms Experimental polarization spectrograms are shown in Fig. 7.The theoretical TF positions of the modes, computed using the environmental parameters from Sec. IV B, are also plotted as black curves.The polarization spectrograms were computed using an threshold equal to 1% of the maximum of S 0 ðt; f Þ.This value was empirically chosen to provide visually good results.It is notably higher than the one used in Sec.V B to cope with experimental noise. The experimental polarization spectrograms enable the visualization of the polarization properties of individual modes.These spectrograms are qualitatively similar to the simulated ones.Important features include high values of s 1 ðt; f Þ for modes 1-4 at all frequencies.Although s 3 ðt; f Þ is clearly contaminated by noise and interference, it shows a pattern similar to the one predicted in Fig. 5. Indeed, s 3 ðt; f Þ > 0:5 at most frequencies for modes 4 and 5, while it has smaller but positive values for mode 3. Modes 1 and 2 are not as legible, but mode 2 shows s 3 ðt; f Þ < 0 for frequencies between 50 and 100 Hz, as predicted by the simulation. Note that a clear mismatch exists between experiment and simulation for fine details of the polarization spectrograms.As an example, the experimental data do not show s 1 ðt; f Þ < 0 for mode 3 and f < 100 Hz.This is to be expected since the simulation has been performed using a notional environmental model.Obtaining a true match between simulation and experiment would require a dedicated environmental inversion, which is beyond the scope of this paper.Nonetheless, the qualitative agreement between simulation and experiment clearly demonstrates the relevance of the proposed method. VII. 
DISCUSSION The Stokes parameter framework presented in this article enables a full description of the polarization of underwater sound.It differs from existing underwater vector acoustics works mentioned previously in that it focuses exclusively on the particle velocity v rather than on the complex intensity I ¼ pv à that requires a concurrent and collocated measure of both pressure p and particle velocity v. Since p and v have to be measured with two different sensors (usually a hydrophone for p and an accelerometer for v), the proposed framework simplifies the experimental task associated with measuring underwater sound polarization. Restricting the scope of the study to modal propagation, Dahl and Dall'Osto recently defined a set of "modal vector metrics" that describes modal polarization (Dahl and Dall'Osto, 2020a).As stated above, the Dahl and Dall'Osto metrics are different from the Stokes parameters because they are derived from I ¼ pv à .Another important difference is that the Dahl and Dall'Osto metric definition requires some (limited) knowledge about the environment, notably the water sound speed and density at the receiver location.Last, these metrics are fully defined using physics-based arguments.While they are very informative about the modal polarization, there is no path to determine if those metrics form a complete description of the polarization.On the other hand, since the Stokes framework is based on signal processing arguments, it fully describes the polarized signal.We note that indeed the Dahl and Dall'Osto metrics are very similar to the Stokes parameters, and the formal link between the two is presented in Appendix A. Whether considering Dahl and Dall'Osto metrics or the Stokes parameters, the reader may wonder if modal polarization has practical applications.In Sec.IV B, we suggested that the Stokes parameters are highly sensitive to the environment (see Fig. 4) and thus may be used as input for geoacoustic inversion.This idea was explored for the Dahl and Dall'Osto metrics in Bonnel et al. (2019a).A similar study (not presented here for the sake of concision) was run for the Stokes parameters, and similar results were obtained.The modal polarization parameters (Stokes or Dahl and Dall'Osto metrics) are more sensitive to the seafloor properties than the modal group speeds, a classical input for geoacoustic inversion (Ballard et al., 2014;Bonnel et al., 2019b;Potty et al., 2000).As a result, Stokes parameters appear to be promising input data for upcoming inversion studies.An example is the recent study by Dahl and Dall'Osto (2021) involving what they refer to as circularity, or s 3 in this study, for geoacoustic inversion of underwater ship noise from SBCEX17. Also, it is important to come back to the notion of degree of polarization U, which was introduced in Sec.III B. 
The degree of polarization can be defined as a statistical measure of dispersion of the polarization ellipse across multiple realizations (Flamant et al., 2017;Schreier and Scharf, 2010).In the context considered in this article, the signal is noise-free and the environment is not fluctuating.As a result, the signal is fully deterministic and modal polarization does not change.Therefore, the signal is fully polarized and U ¼ 1.This, obviously, will never be true in an experimental context.Let us consider a classical tomographic setup with a fixed receiver and a fixed source.If the source emits recurrent signals, each received signal can be considered as a realization of the underlying stochastic oceanic process (Colosi, 2016).One can then estimate the Stokes parameters and derive the degree of polarization U using Eq. ( 22).It is expected that U would be representative The blue/green/yellow color scale is valid only for S 0 , which has been arbitrarily normalized so that its maximum value is 1.The blue/red color scale is valid for the normalized Stokes parameters s 1 , s 2 , and s 3 .The time axis origin is arbitrary but is the same for all the panels. of environmental fluctuations.The exact link between U and environmental fluctuation regimes (unsaturated, partially saturated, and fully saturated) needs to be determined.However, interesting perspectives arise in using the Stokes parameters to quantify environmental fluctuations.An important result from Dahl and Dall'Osto (2020a) is revisited in Appendix B to experimentally illustrate the concept of degree of polarization of individual modes.More broadly, the degree of polarization can be used to track fluctuations of either the whole signal or individual arrivals (modes or rays).This opens new research avenues, notably for long range propagation in deep water.Indeed, in this context, water column fluctuations highly impact the signal, and new methods are required to better link the physical oceanography and the acoustics. Finally, using the Stokes parameters for practical marine applications calls into question our ability to properly estimate them from a particle velocity signal.For generic bivariate signals, Eqs. ( 23)-( 26) show that spectral Stokes parameters can be efficiently obtained by combination of conventional non-parametric spectral density estimators (e.g., periodogram, multitaper) [see Flamant et al. (2017) for details].In the context of particle velocity signals, estimation methods for individual modes should further take into consideration the near-horizontal polarization, i.e., js 2 ðf Þj ( 1.Other physical constraints may be also included, such as parametric modeling of the spectral dispersion of Stokes parameters.Also, polarization properties of modal interference can be examined [see Dahl andDall'Osto (2020b, 2021) for examples with ship noise], which opens the door to interesting questions about the link between polarization and the waveguide invariant [Jensen et al. (2011), Chap. 
5].As a result, the practical estimation of Stokes parameters defines key challenges, in terms of both physical interpretability and robustness to noise.The development of dedicated estimation procedures is required to ensure the full exploitation of the information gathered in Stokes parameters.The proposed framework opens the door to novel physics-based bivariate signal processing methods for ocean acoustics.factors: the ambient noise and environmental fluctuations.These two factors contribute in making estimated polarization properties different when using different source signals. If one assumes ergodicity (source signals recorded at different times have the same underlying statistics) and spatial homogeneity (the environment is perfectly range-and azimuth-independent), this constitutes an experimental observation of the degree of polarization U. Formally, note that estimating U from the dataset used in Dahl and Dall'Osto (2020a) is not straightforward and is beyond the scope of this paper.In short, it would require assumptions about the variability of U. If U is assumed to be spatially homogeneous, then it could be estimated by averaging normalized Stokes parameters (further assuming ergodicity).On the other hand, if the environment is spatially variable, then it is likely that U depends on source position.In this case, it must be estimated using Stokes parameters rather than the normalized ones.In such a scenario, normalized Stokes parameters (as well as Dall and Dall'Osto metrics) would depend on source position. FIG. 1 FIG.1.(Color online) 3D representation of a bivariate signal v n ðtÞ, showing its dynamical evolution as a rotating 2D vector.(a) A monochromatic bivariate signal contains a single frequency and describes a fixed ellipse in the v r -v z plane.(b) A narrowband bivariate signal is characterized by instantaneous parameters (amplitude, frequency, orientation, and shape of the ellipse) that slowly evolve with time. FIG. 2. (a)The monochromatic bivariate signal describes an elliptical trajectory in the 2D plane.(b) Poincar e sphere of polarization states.For any point on the sphere there is an associated unique polarization state described by either spherical angular coordinates ð2h; 2vÞ or normalized Stokes parameters ðS 1 =S 0 ; S 2 =S 0 ; S 3 =S 0 Þ. Figure adapted from J. Flamant, "A general approach for the analysis and filtering of bivariate signals," Ph.D. thesis (Centrale Lille, 2018)(Flamant, 2018). FIG. 5 FIG. 5. (Color online)Polarization spectrograms for the simulated SBCEX17 data.The black curves show the theoretical TF position of the modes.All the spectrograms are plotted with linear scales.The blue/green/yellow color scale is valid only for S 0 , which has been arbitrarily normalized so that its maximum value is 1.The blue/red color scale is valid for the normalized Stokes parameters s 1 , s 2 , and s 3 .The time axis origin is arbitrary but is the same for all the panels. FIG. 6. (Color online) Experimental particle velocity data (SBCEX17) after projection in the rz domain, filtering, and source deconvolution.The time evolution of the bivariate signal (dark blue) shows a main acoustic arrival around t ¼ 0.5 s (time axis origin is arbitrary).Individual particle velocities v r and v z are shown explicitly (intermediate blue) in the bottom and right projection panels.The total time history of the 2D trace in the rz domain is shown at the far left (lightest blue). FIG. 7 FIG. 7. 
FIG. 7. (Color online) Polarization spectrograms for the experimental SBCEX17 data. The black curves show the theoretical TF position of the modes. All the spectrograms are plotted with linear scales. The blue/green/yellow color scale is valid only for S0, which has been arbitrarily normalized so that its maximum value is 1. The blue/red color scale is valid for the normalized Stokes parameters s1, s2, and s3. The time axis origin is arbitrary but is the same for all the panels.
2021-10-03T06:16:57.122Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "1c85b28685cde227f9211a28013c13b1c624794e", "oa_license": "CCBY", "oa_url": "https://darchive.mblwhoilibrary.org/bitstream/1912/27964/1/10.0006108.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "7e5d27940fc80004d8b981bb572261edeb7b1cb8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
1831746
pes2o/s2orc
v3-fos-license
An Extracellular Pathway for Dystroglycan Function in Acetylcholine Receptor Aggregation and Laminin Deposition in Skeletal Myotubes* The dystroglycan (DG) complex is involved in agrin-induced acetylcholine receptor clustering downstream of muscle-specific kinase where it regulates the stability of acetylcholine receptor aggregates as well as assembly of the synaptic basement membrane. We have previously proposed that this entails coordinate extracellular and intracellular interactions of its two subunits, α- and β-DG. To assess the contribution of the extracellular and intracellular portions of DG, we have used adenoviruses to express full-length and deletion mutants of β-DG in myotubes derived from wild-type embryonic stem cells or from cells null for DG. We show that α-DG is properly glycosylated and targeted to the myotube surface in the absence of β-DG. Extracellular interactions of DG modulate the size and the microcluster density of agrin-induced acetylcholine receptor aggregates and are responsible for targeting laminin to these clusters. Thus, the association of α- and β-DG in skeletal muscle may coordinate independent roles in signaling. We discuss how DG may regulate synapses through extracellular signaling functions of its α subunit. Dystroglycan (DG)2 is a member of the dystrophin-associated glycoprotein complex (DGC). It is encoded by a single gene (dag1) and is expressed initially as a propeptide that gets cleaved to generate two distinct subunits as follows: α-dystroglycan (α-DG), a peripheral mucin-like protein, and β-dystroglycan (β-DG), a transmembrane protein (1,2). In mature skeletal muscle, both subunits associate noncovalently and localize to the plasma membrane where they form the functional core of the DGC. Dystroglycan is crucial for muscle fiber integrity and survival and is widely thought to act as a structural element of the cell surface by linking the extracellular matrix (ECM) to the cytoskeleton to protect the cell against the stress of contractions (3,4). Other studies suggest that DG mediates intracellular signaling in noncontractile cells (5,6) and can regulate cell death (7,8), replication (9), and polarity (10,11). In skeletal muscle, the absence of DG leads to muscular dystrophy (12,13).
At neuromuscular junctions (NMJs), DG associates directly with rapsyn (14) and is colocalized with acetylcholine receptors (AChRs) (15) by their movement to nascent synapses (16). DG-deficient muscle fibers in vivo and in vitro show diffuse and unstable aggregates of AChRs (12,17,18), indicating a role for DG in the condensation and stabilization of AChR microclusters. In addition, laminin, perlecan, and acetylcholinesterase (AChE) are greatly reduced at the NMJs of DG-deficient muscle (12,18). The NMJ represents a highly specialized compartment within the multinucleated myotube that extends from the nerve terminal through the intervening synaptic basement membrane and plasma membrane to the underlying cytoskeleton and subsynaptic nucleus (19). Formation of this synaptic compartment is associated with expression, highly restricted localization, and stabilization of molecules. The mechanisms involved are fundamental to our understanding of structural and functional mosaicism within the cells. For example, one of the hallmarks of mature NMJs is the presence of a high density of AChRs (10^4 receptors/µm^2) at the tops of folds in the postsynaptic membrane (20). These receptors are trapped in a region of the plasma membrane directly apposed to the nerve terminal, degrade very slowly (21), and are unable to diffuse within the plasma membrane. Initially, AChR aggregation is a muscle-intrinsic process that does not require innervation (22,23), but reorganization of the AChRs within the membrane is regulated by secretion from the motor nerve of agrin, a heparan-sulfate proteoglycan (24). Agrin acts via MuSK, a muscle-specific receptor tyrosine kinase, together with a myotube-associated specific coreceptor within the postsynaptic membrane (25). Phosphorylation of MuSK recruits rapsyn, which is closely associated with the major cytoplasmic loop of the AChRs (26-31) and can self-associate to form microclusters of AChRs (27,32). Activation of MuSK leads to the phosphorylation of the β subunit of the AChR (33), an important requirement for enhanced linkage of AChRs to the cytoskeleton (34). As a specialized domain of skeletal myofibers, the NMJ offers insights into DG functions in fundamental cellular processes. For example, DG functions in basement membrane assembly by anchoring AChE via interactions with perlecan (18,35,36). In previous studies, we have shown that the glycosylation of α-DG is regulated by the nerve and that it functions in NMJ formation (37). Also, we proposed that α- and β-DG function coordinately to assemble an extracellular and intracellular matrix of proteins (38), which is consistent with a recent report suggesting novel extracellular interactions for DG (39). In this study, we have taken advantage of the well-defined role of DG at NMJs to determine whether the extracellular domains of DG function independently in AChR aggregation. We describe here that the functions of DG in AChR aggregation and basement membrane assembly in myotubes can be rescued by viral expression of full-length DG in DG−/− myotube cultures. Interestingly, DG constructs consisting of α-DG alone or of α-DG and the extracellular regions of β-DG can regulate aspects of AChR aggregation, namely the size of AChR aggregates, the distribution of microclusters within them, and the assembly of laminin at these aggregates. The implications of these observations for synapse formation and signaling via DG are discussed. EXPERIMENTAL PROCEDURES
Generation of Dystroglycan cDNA Constructs-Dystroglycan cDNA constructs (Fig. 1A) were generated by PCR amplification using specific oligonucleotides for mouse dag1. PCR products were subcloned in-frame into the peGFP-N1 expression vector (Clontech) to generate eGFP fusion transcripts. The CD4 ectodomain/transmembrane sequence expression plasmid was obtained from Dr. R. Dunn (McGill University). Constructs α-DG and α/ΔectoβDG were obtained from Dr. S. Winder (University of Sheffield, UK). All constructs were sequenced for validation (Genome Center, McGill University). Generation of Recombinant Adenoviruses-Replication-defective adenoviruses were generated using the AdEasy system (Qbiogene) as described by the manufacturer. In brief, eGFP, CD4-eGFP, DG, α-DG, ΔcytoDG, and α/ΔectoβDG cDNA fusion constructs were subcloned into the multiple cloning site of pShuttle-CMV. All vectors were linearized using PmeI and cotransfected with pAdEasy-1 into the BJ5183 electrocompetent E. coli strain. Recombinants were screened by colony size and by restriction digests as described (40). Recombinants were linearized using PacI, purified, and transfected into HEK293 cells using Lipofectamine (Invitrogen). Adenovirus production was observed by plaque formation, and particles were isolated from cellular lysates and amplified through several rounds of infections in HEK293 cells. Titers were determined by TCID50 infection test as described by the manufacturer (Qbiogene). Quantification of AChR Aggregation-Soluble recombinant rat agrin was a generous gift from Dr. M. Ferns (University of California, Davis). To assay AChR clustering in ES cells, differentiated cultures (20-25 days old) were treated with 0.5 nM agrin for 16 h. Cells were washed twice in Dulbecco's phosphate-buffered saline (DPBS) (Invitrogen) and incubated with 2 µg/ml rhodamine-conjugated α-bungarotoxin (Molecular Probes) for 20 min at 25°C. Cells were washed twice with DPBS and fixed with 2% paraformaldehyde/PBS at 37°C for 20 min. Cultures were mounted on glass coverslips using ImmunoFloure mounting medium (ICN) and observed with a Zeiss Axioskop fluorescence microscope. Digital images observed with a ×63 Zeiss objective were captured with a QImaging Retiga 1300 10-bit digital camera under normalized exposures, and quantifications of cluster number, size, and density were generated using Northern Eclipse software (Empix). For confocal images, cultures were observed using a Leica DM LFSA microscope equipped with an Ultraview confocal scanner (PerkinElmer Life Sciences), and images were captured and processed using Metamorph software (Universal Imaging). For cells infected with adenovirus, AChR aggregate size was determined by circling each aggregate with a trace from the freehand tool and by measuring total area as calibrated for the objective used (Fig. 2B). AChR aggregate density was determined as a percentage of total aggregate area (18,41), where the area of each aggregate that was labeled with α-bungarotoxin was highlighted using the threshold function (values of 18-255 pixels), and the highlighted area was determined within the traced area (Fig. 2B). AChR aggregate numbers were determined by counting the number of aggregates present in each microscopic field observed. At least 30 nonoverlapping fields were visualized for each set of cultures, and data were collected from at least three separate experiments.
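As an illustration of this density measurement, the following minimal sketch reproduces the thresholding logic in Python with scikit-image rather than in the Northern Eclipse software actually used: the fraction of a manually outlined aggregate covered by bungarotoxin-positive pixels is computed from a fixed intensity window. The image file name, the outline coordinates, the pixel calibration, and the use of a polygon mask are illustrative assumptions; only the 18-255 threshold window is taken from the text.

```python
# Minimal sketch (not the authors' pipeline): aggregate size and microcluster
# packing density from a single-channel fluorescence image, given the vertices
# of a manually traced aggregate outline. File name and coordinates are hypothetical.
import numpy as np
from skimage import io, draw

def aggregate_size_and_density(image, outline_rc, pixel_area_um2, thresh=(18, 255)):
    """Return (total aggregate area in um^2, fraction of that area above threshold)."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    rr, cc = draw.polygon(outline_rc[:, 0], outline_rc[:, 1], shape=mask.shape)
    mask[rr, cc] = True                          # pixels inside the traced aggregate
    inside = image[mask]
    positive = (inside >= thresh[0]) & (inside <= thresh[1])   # bungarotoxin-labeled pixels
    total_area = mask.sum() * pixel_area_um2
    density = positive.sum() / mask.sum()        # relative area occupied by microclusters
    return total_area, density

img = io.imread("aggregate_field.tif")           # 8-bit rhodamine-bungarotoxin channel (hypothetical)
outline = np.array([[120, 80], [130, 220], [260, 230], [250, 90]])  # traced vertices (row, col)
area_um2, density = aggregate_size_and_density(img, outline, pixel_area_um2=0.01)
print(f"aggregate area = {area_um2:.1f} um^2, microcluster density = {100 * density:.1f}%")
```

Averaging this fraction over many aggregates and fields is what yields the packing densities discussed below (around 70% for DG+/+ cultures versus roughly 45% for DG−/− cultures).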
For statistical analyses, we normalized the data obtained as described above for AChR aggregate size and density by a logarithmic transformation, and we then performed multiple comparisons using unbalanced one-way analysis of variance followed by pairwise comparisons (Tukey-Kramer and Fisher's tests) on the normalized data. All probability (p) values stated in the text are for analysis on normalized values. All analyses and percentile plots were generated using StatView 4.5 package (Abacus) or SAS (SPSS). All images and figures were prepared using Adobe Photoshop and Illustrator. Immunocytochemistry-Cultures were stained with ␣-bungarotoxin as described above and then fixed in 2% paraformaldehyde/DPBS at 37°C for 20 min. If applicable, cells were permeabilized in 0.5% Triton X-100 in DPBS for 10 min at 25°C. Cells were blocked with 10% donkey or horse serum at 25°C for 1 h. This was followed by incubation with either monoclonal antibody IIH6 to ␣-DG (1:50) (Upstate Biotechnologies, Inc.), rabbit anti-sera to laminin (homemade, 1:100), anti-GFP antibody (Living Colors; 1:50) (Clontech), or rabbit anti-sera to ␤-DG (homemade, 1:100) at 25°C for 1 h. Cells were washed extensively with DPBS and incubated with rhodamine-conjugated goat anti-mouse IgM (1:100), aminomethylcoumarin acetate-conjugated donkey anti-rabbit (1:100), or rhodamine-conjugated donkey anti-rabbit IgG (1:100) secondary antibodies (Jackson ImmunoResearch), at 25°C for 1 h. Following a second series of washing, cells were mounted on glass coverslips and analyzed as described above. For laminin-AChR overlap observations, we scored positive only the aggregates that discretely overlapped with laminin immunoreactivity. We normalized to the total number of clusters observed in each case. A minimum of 20 nonoverlapping fields was scored in three experiments for each set of cultures, and differences were analyzed by analysis of variance followed by Fisher's test. RESULTS A simple model for DG function predicts that extracellular interactions mediated by ␣-DG would in turn signal intracellularly via ␤-DG. This model is complicated by the fact that most ligands for ␣-DG bind to an extended carbohydrate side chain (for review see Ref. 42), and transmembrane signaling would require the unprecedented translation of this interaction into conformational changes in both ␣and ␤-DG. Considering that interactions of the ␣ subunit are extracellular and of the ␤ subunit are intracellular, we suggest to determine whether ␣and ␤-DG might have independent functions in organizing cell surface proteins by generating a series of DG cDNA constructs bearing deletions of ␤-DG (Fig. 1). We fused these cDNA constructs at their C terminus to the enhanced green fluorescent protein (eGFP; see "Experimental Procedures"). In this study, we focused on constructs that reveal distinct functions of ␣-DG versus full-length DG (Fig. 1). Construct DG encodes full-length ␣and ␤-DG subunits; construct ⌬cytoDG lacks all cytoplasmic domains of ␤-DG; construct ␣-DG encodes the ␣ subunit of DG; construct ⌬ecto␤DG lacks the extracellular domains of ␤-DG leaving its transmembrane region following the C-terminal end of ␣-DG. All constructs were transiently expressed by infecting muscle cells derived from DGϪ/Ϫ (dag1) ES cells (12,18) using serotype Ad5 replicationdefective adenoviruses under the CMV promoter (see "Experimental Procedures"). DGϪ/Ϫ cells form diffuse aggregates of AChRs in response to agrin ( Fig. 
2A), and the area occupied by the AChR microclusters as well as their density were quantified by computer-assisted image analysis ( Fig. 2B) (18). Expression of DG Constructs in Myotubes Derived from DGϪ/Ϫ ES Cells-Infection of agrin-treated myotubes with DG-expressing adenoviruses resulted in a diffuse intracellular accumulation of the eGFPtagged constructs (Fig. 3, A-L), a consequence of the constitutive nature of the promoter. To assess the expression of these constructs at the cell surface in infected myotubes, we immunolabeled cultures using three antibodies as follows: monoclonal antibody (mAb) IIH6 against glycosylated ␣-DG on nonpermeabilized cultures (43) to visualize cell surface expression; a rabbit antiserum to the last 15 amino acids of ␤-DG (38) on permeabilized cultures; and an anti-serum to eGFP. DGϩ/ϩ cells infected with a control eGFP-expressing adenovirus (Fig. 3, A and B) showed regions of aggregated surface labeling of ␣-DG (Fig. 3AЈ) and of ␤-DG (Fig. 3BЈ) in a distribution that is identical to uninfected DGϩ/ϩ cells (data not shown). As expected, DGϪ/Ϫ cells infected with eGFP ( Fig. 3, C and D) expressed neither ␣-DG (Fig. 3CЈ) nor ␤-DG (Fig. 3DЈ). Expression of full-length DG in DGϪ/Ϫ cells (Fig. 3, E and F) showed aggregates of ␣-DG (Fig. 3EЈ) and ␤-DG (Fig. 3FЈ) similar to that in DGϩ/ϩ cells (Fig. 3, AЈ and BЈ). Our immunofluorescence analysis indicated approximately equivalent levels of expression of DG constructs in DGϪ/Ϫ cells when compared with endogenous levels in DGϩ/ϩ cells. Expression of ⌬cytoDG (Fig. 3, G and H) resulted in surface expression of this construct which encompasses both ␣-DG (Fig. 3G') and the extracellular and transmembrane regions of ␤-DG as detected by anti-eGFP (Fig. 3HЈ). In this instance, labeling with mAb IIH6, which recognizes a functional carbohydrate epitope on ␣-DG, indicated that proper glycosylation of ␣-DG does not require ␤-DG cytoplasmic domains. Expression of ␣-DG lacking any anchor to the cell surface via ␤-DG (Fig. 3, I and J) also resulted in its surface deposition on myotubes as detected by both IIH6 and anti-GFP antibodies in nonpermeabilized cells (Fig. 3, IЈ and JЈ). The distribution of this ␣-DG construct on DGϪ/Ϫ cells was similar to the endogenous ␣-DG expression observed on DGϩ/ϩ cells (Fig. 3AЈ). ␣-DG was not readily detected on the surface of infected non-muscle cells (data not shown), suggesting that ␣-DG, when expressed alone, preferably bound to muscle cells. The heterogeneity of ES-derived cell cultures and their low abundance in myotubes prevented us from assessing glycosylation or expression of ␣-DG by Western blot analysis. Nonetheless, in another study, we determined that expression of ␣-DG alone in adenovirus-infected COS cells resulted in its secretion into the culture medium (data not shown), and it could be identified by Western blotting using mAb IIH6 (data not shown). Finally, expression of construct ␣/⌬ecto␤DG (Fig. 3, K and L) showed surface labeling of ␣-DG (Fig. 3KЈ) and ␤-DG (Fig. 3LЈ) as observed for endogenous or ectopic full-length DG (Fig. 3, EЈ and FЈ). Taken together, these observations show that DG cDNA constructs could be expressed in myotubes derived from DGϪ/Ϫ ES cells using adenoviruses and that both ␣and ␤-DG were normally expressed at the cell surface. Proper surface targeting and glycosylation of ␣-DG could occur in the absence of ␤-DG expression. 
Moreover, the association of Expression of a membrane-bound eGFP (CD4eGFP) in DGϪ/Ϫ myotubes does not alter the distribution of AChR microclusters (right panel). Pictures show a single aggregate for each myotube. B, methods for measuring size and density of AChR aggregates on myotubes. Using Northern Eclipse software (version 6.0, Empix), captured images of bungarotoxin-labeled AChR aggregates were inverted and calibrated for the ϫ63 Zeiss objective used; using the freehand tool, aggregates were encircled, and the area was measured from the "measure" command to evaluate the total aggregate area (upper right panel). The same aggregate was processed using the threshold tool to highlight the AChR microclusters labeled within each aggregate (lower left panel). The area from the highlighted AChRs was measured. To express the relative distribution of AChRs within the aggregate (density), we used the ratio between the measured value for area of AChR microclusters within aggregates to the measured value for total aggregate size (lower right panel). Bar, 15 m. a form of ␣-DG lacking any anchorage to ␤-DG still was able to interact with cell surface proteins either in the ECM or in the plasma membrane possibly affecting interactions and functions of the DG complex. Expression of Dystroglycan in DGϪ/Ϫ Myotubes Restores the Size of AChR Aggregates-Myotubes derived from both DGϩ/ϩ and DGϪ/Ϫ ES cells express AChRs and respond to soluble agrin by forming aggregates of receptors on their surfaces (17,18), indicating that DG is downstream of the agrin/MuSK signaling pathway (44). However, in the absence of DG, these agrin-induced aggregates are an abnormally large, diffuse, and unstable collection of microclusters of AChRs ( Fig. 2A) (18). Furthermore, these DG-deficient aggregates lack laminin, merosin, perlecan, and AChE (18). To ascertain whether ␣and ␤-DG act only as a complex in response to MuSK activation, we infected DGϪ/Ϫ myotubes with adenoviruses expressing either eGFP or DG constructs (Fig. 1), and 5-7-day post-infection, treated the cultures with a saturating concentration (0.5 nM) of soluble agrin to induce AChR aggregation at the surface of myotubes (45), and labeled AChRs using rhodamineconjugated ␣-bungarotoxin. As controls for the assays, we used soluble cytoplasmic eGFP or submembrane eGFP (CD4-eGFP). This latter construct allowed us to control for any effect due to membrane expression of eGFP on AChR aggregation. Expression of either control constructs in DGϩ/ϩ or DGϪ/Ϫ cells did not affect the aggregation of AChRs after agrin treatment ( Figs. 2A and 4). To assess the relative contribution of extracellular and intracellular interactions of DG to AChR aggregation, DGϪ/Ϫ cells were infected with adenoviruses expressing either ⌬cytoDG, ␣-DG, or ␣/⌬ecto␤DG (Figs. 3 and 4) and treated with soluble agrin. Expression of ⌬cytoDG, which includes full-length ␣-DG and the extracellular domains of ␤-DG to which it binds, in DGϪ/Ϫ cells (Fig. 4) resulted in a highly significant but partial reduction in aggregate size (mean, 96.9 Ϯ 6.3 m 2 ) when compared with DGϪ/Ϫ:eGFP (p Ͻ 0.0005; see Fig. 5). Expression of ␣-DG also resulted in reduction of aggregate size (mean, 77.6 Ϯ 4.2 m 2 ) when compared with DGϪ/Ϫ:eGFP (p Ͻ 0.0001; see Fig. 5). Interestingly, expression of ␣/⌬ecto␤DG resulted in full recovery of AChR aggregate size (mean size, 47.8 Ϯ 2.9 m 2 ) when compared with DGϪ/Ϫ:eGFP (p Ͻ 0.0001) and similar to the expression of endogenous or ectopic full-length DG. 
We did not observe any increase in numbers of AChR aggregates per myotube after expression of the different DG constructs in DGϪ/Ϫ cells (data not shown). These observations suggest that interactions mediated via extracellular regions of DG can function to regulate AChR aggregate size. Moreover, expression of ␣-DG and of its ␤-DG-binding sites can function as well as the noncovalently associated complex of ␣and ␤-DG. Dystroglycan Regulates the Distribution of AChR Microclusters within Aggregates-In cultured myotubes, agrin in solution (46,47) or released by the nerve (48) stimulates the formation of microclusters of AChRs, which subsequently condense into a mature postsynaptic plaque. Myotubes deficient for DG fail to form such plaques and as a result the area of an aggregate occupied by AChRs is greater and the microclusters remain distinct puncta ( Fig. 2A). Because full-length or extracellular DG regulate aggregate size (see above), we asked whether extracellular interactions of DG were also capable of controlling AChR packing density of microclusters within agrin-induced AChR aggregates. In this case, we defined packing density as the relative area of an aggregate covered by AChR microclusters labeled with fluorescent ␣-bungarotoxin (Fig. 2B) (18,41). When quantified in myotubes derived from DGϩ/ϩ cells treated with agrin, this packing density yielded a frequency histogram with a normal distribution and a mean of ϳ70%. Uninfected or eGFP-expressing DGϩ/ϩ cells showed the same mean (66.7 Ϯ 2.0%; see Fig. 6), but uninfected or eGFP-expressing DGϪ/Ϫ cells had a significantly lower relative area occupied by microclusters (mean, 45.1 Ϯ 2.8%; p Ͻ 0.0001). Expressing full-length DG in DGϪ/Ϫ cells clearly increased the relative area occupied by microclusters with a mean of 56.7 Ϯ 1.9% when compared with DGϪ/Ϫ:eGFP (p ϭ 0.022), but it remained somewhat lower than in DGϩ/ϩ:eGFP cells (p ϭ 0.0474; see Fig. 6). Thus, expression of full-length DG significantly but not completely rescued the normal distribution of microclusters within aggregates on myotubes derived from DGϪ/Ϫ ES cells to produce a density of AChRs approximately that in DGϩ/ϩ cells. Interestingly, expression of ⌬cytoDG in DGϪ/Ϫ cells also significantly increased the area occupied by microclusters (mean, 60.2 Ϯ 1.5%) when compared with DGϪ/Ϫ:eGFP (p ϭ 0.005; see Fig. 6), suggesting that the extracellular domains of DG are responsible for most if not all of the activity of full-length DG in regulating AChR density. We observed a similar significant increase when either ␣-DG (mean, 60.8 Ϯ 2.2%; p Ͻ 0.0001) or ␣/⌬ecto␤DG was expressed (mean, 65.7 Ϯ 2.0%; p Ͻ 0.0001). When the populations are ranked by percentiles (Fig. 6B), the distribution plot for DGϪ/Ϫ:⌬cytoDG cells looked similar to that for DGϩ/ϩ:eGFP or DGϪ/Ϫ:DG cells but was distinct from control DGϪ/Ϫ:eGFP cells. In contrast, the plots for DGϪ/Ϫ:␣-DG and DGϪ/Ϫ:␣/⌬ecto␤DG cells showed a greater variation across the percentiles. Together, these data suggest that DG regulates the distribution of microclusters of AChRs within aggregates induced by agrin, and extracellular interactions via ␣-DG function to achieve this regulation. Localization of Dystroglycan to AChR Aggregates-Dystroglycan interacts with rapsyn (14) and localizes precisely to nascent aggregates of agrin-induced AChRs on skeletal myofibers in vivo (16). However, in C2C12 myotubes treated with agrin, DG accumulates in most but not all AChR aggregates (8,43). 
Furthermore, in the same myotubes treated with laminin-1 that can also stimulate AChR aggregation via DG, DG is not colocalized with AChR aggregates (49). Because DG regulates AChR aggregation, we asked whether minimal constructs of DG are localized to these aggregates in DGϩ/ϩ and DGϪ/Ϫ myotubes. To assess this, we immunolabeled DG using mAb IIH6 (see Fig. 3) to detect an overlap between surface ␣-DG and AChR aggregates (Fig. 7). In DGϩ/ϩ cells infected with eGFP, DG was expressed along the myofiber surface and accumulated with several but not all AChR aggregates (Fig. 7, DGϩ/ϩ:eGFP, arrows). As expected, in DGϪ/Ϫ cells infected with eGFP, there was no DG detected (Fig. 7, DGϪ/Ϫ: eGFP, arrowheads). Only full-length DG expressed in DGϪ/Ϫ cells localized to most AChR aggregates (Fig. 7, DGϪ/Ϫ:DG, arrows) but not all (arrowheads). Expression of ⌬cytoDG, ␣DG, or ␣/⌬ecto␤DG in DGϪ/Ϫ cells was detected along the myotube surface (Fig. 3) but was seldom found colocalized with AChR aggregates (data not shown). That full-length DG is found more frequently at AChR aggregation may reflect targeting to clusters via interactions with rapsyn. Constructs lacking these domains are not similarly targeted but nevertheless can function in regulating aggregate size and density. Possibly this is mediated by participation of DG in ECM assembly, which at sites of AChR aggregation, even distant ones, can affect AChR aggregation by interactions with MuSK (50) or some proteins essential to the primary AChR scaffold (discussed below). DISCUSSION We have demonstrated previously that ␣and ␤-DG are recruited to developing NMJs (16) where they regulate the size and the stability of postsynaptic densities of AChRs and the assembly of a synaptic basement membrane (12,18). We hypothesized that DG functions via an extracellular as well as an intracellular matrix of proteins (38); consistent with this, a recent report suggests novel extracellular interactions of ␣-DG in AChR aggregation (39). Here we show that ␣-DG is glycosylated and targeted to the cell surface in absence of ␤-DG (Fig. 3). We also show that the DG complex can function extracellularly through ␣-DG to regulate the size of AChR aggregates (Figs. 4 and 5) and the density of AChR microclusters within them (Figs. 4 and 6). Moreover, DG constructs lacking either the extracellular or cytoplasmic domains of ␤-DG can mediate laminin assembly at agrin-induced AChR aggregates (Fig. 8). We conclude that the ␣ and ␤ subunits of the DG complex, which have been widely viewed as two interacting and inter-dependent subunits in skeletal muscle, can function independently (discussed below). Processing and Targeting of Dystroglycan in Muscle-Surface expression and glycosylation of DG are crucial for proper function of the DGC in skeletal muscle and in the brain (60,61). Aberrant glycosylation of ␣-DG, due to mutations in glycosylating intermediates, or reduced surface expression of ␣-DG, due to lack of dystrophin, leads to the development of muscular dystrophies (60 -63). Previous studies from Campbell and co-workers (64) have identified a site for the interaction of Large, an important step in the glycosylation pathway of DG within its ␣ subunit. Our results extend this by showing that not only is ␤-DG unnecessary for this glycosylation but ␣-DG can be targeted to the cell surface in the absence of ␤-DG (Fig. 3). Indeed, expression of constructs encoding ␣-DG and ⌬cytoDG are detected by mAb IIH6 (Fig. 
3), which recognizes an O-linked carbohydrate side chain on ␣-DG at the myotube surface that interacts with laminin G domain-containing ligands (65,66). This indicates that the DG cytoplasmic domain is not necessary for the functional glycosylation of the ␣ subunit. In fact, this glycosylation may be important in targeting ␣-DG to the cell surface, which could be facilitated by the association with basement membrane components, most likely perlecan, agrin, or laminin. ␣-DG contains at its C-terminal end a region highly homologous to a sperm protein, enterokinase, agrin (SEA) module (67) that is found in membrane-tethered mucin proteins (68). In human MUC-1, the SEA module has been shown to undergo autoproteolysis. Point mutations of a conserved serine (Ser-53) residue within the SEA module abrogate this cleavage activity (69). Similarly, in vertebrates, cleavage of DG into ␣and ␤-DG depends upon a serine (Ser-655) found in this putative SEA module region (67). However, inhibition of cleavage of DG subunits in point mutants does not prevent the cell surface localization of DG in heterologous cells. These observations raise the question of why vertebrate DG evolved into two subunits. One possibility is to generate distinct functions for the two subunits cleaved from the DG precursor peptide. Noncovalent association between ␣and ␤-DG may allow the two subunits to dissociate and possibly interact independently with other partners in the cell or in the ECM. Consistent with this, there are several reports of disjunction in ␣and ␤-DG localization in non-muscle tissue (70 -72). 3 ␣-Dystroglycan in AChR Aggregation-Here we have utilized AChR aggregation as a relatively simple and well characterized cellular event that requires DG (12,17,18,44,73) to study its function in muscle where it is essential for cell integrity in vivo (12) as well as synapse formation. We show that expression of full-length DG or of DG lacking the ␤ subunit in DGϪ/Ϫ myotubes alters AChR aggregate morphology. Both the size of AChR aggregates and the density of the microclusters within aggregates are the same as those in DGϩ/ϩ cells following the expression of DG in DGϪ/Ϫ myotubes (Figs. 4 -6). Expression of DG constructs lacking the ␤ subunit or its cytoplasmic regions was also sufficient to recover the wild-type morphology of DG-null AChR aggregates (Figs. 4 -6). These observations suggest that ␣-DG can act independently from ␤-DG in regulating AChR aggregation. The initial interactions of ␣-DG must be extracellular, distinguishing it from classical transmembrane signaling. By extension, these data indicate that DG, and more broadly the DGC, may function extracellularly in formation of NMJs. Recent studies have reported that inactive muscle agrin can be potentiated to stimulate AChR aggregation when complexed with laminin into an ECM (57,74). The association of ␣-DG with laminin, agrin, and perlecan may similarly facilitate the assembly of endogenous ligands to stimulate AChR aggregation. More generally, extracellular interactions appear critical for NMJ formation. For example, in Caenorhabditis elegans, LEV-10, a type I transmembrane protein containing Clr/Cls, Uegf, and bone morphogenic protein and low density lipoprotein a domains (75), and CAM-1 (Canal-associated neurons abnormal migration protein 1), a retinoic acid-related orphan receptor tyrosine kinase resembling MuSK (76), are necessary for the aggregation of AChRs at nascent NMJs in a process involving only extracellular interactions. 
This extracellular signaling may serve to coordinate multiple transmembrane pathways during the assembly of the extremely complex structure of the postsynaptic membrane. The mechanism of action of DG in AChR aggregation is dependent upon its levels of expression. The absence (12,18) and excessive levels (77, 78) 4 of full-length DG or ␤-DG alone in skeletal myotubes lead to aberrant postsynaptic densities of AChRs. In our study, extracellular portions of DG (␣-DG and ⌬cytoDG) seem to largely dictate AChR aggregate morphology (Figs. 5 and 6). The size of these aggregates on ␣-DGor ⌬cytoDG-expressing DGϪ/Ϫ cells, although significantly smaller than control DGϪ/Ϫ cells, are larger than in DGϪ/Ϫ cells expressing full-length DG or ␣/⌬ecto␤DG. The microcluster distribution within these aggregates is the same in these former cell populations. Thus, the total number of microclusters within aggregates on myotubes expressing ␣-DG and ⌬cytoDG is apparently greater than on those expressing full-length DG or ␣/⌬ecto␤DG. This effect of DG expression could be explained by an overaccumulation of AChR microclusters within aggregates resulting from an altered metabolic turnover (decrease) of AChRs mediated by the extracellular portions of DG. Related to this, Xu and Salpeter (79) have reported that AChR turnover in vivo is increased in mouse muscle lacking dystrophin. Dystroglycan in Extracellular Signaling-DG is necessary for the formation of some (58) but not all (7) basement membranes. For example, DG is necessary in development for the maintenance of the extra-embryonic Reichert's membrane (80). Deletion of DG from the brain leads to aberrant basement membranes at astroglial end feet on blood vessels and meninges (60). On the other hand, DG-deficient skeletal muscle fibers assemble an ultrastructurally normal basement membrane containing laminin, collagen, fibronectin, and perlecan (12). In our myotube cultures, DG is necessary for the synaptic localization of laminin (Fig. 8) (18). The constructs ␣-DG and ⌬cytoDG clearly target laminin to AChR aggregates (Fig. 8). This suggests that the ␣-DG subunit may function as an ECM protein to assemble a basement membrane involved in the regulation of AChR aggregation (Figs. 4 -6) (74) and in the accumulation of laminin at AChR aggregates (Fig. 8). Extracellular interactions via ␣-DG appear to be essential in DG function at NMJs because the disruption of ␣-DG interactions with laminin by blocking antibodies (43), with laminin-2 by deletions (57), or by hypoglycosylation of ␣-DG, as observed in myd mice (81), leads to aberrant NMJs with disrupted AChR aggregates at the end plate. Also, the idea that the extracellular domains of DG appear sufficient to target and maintain laminin to AChR aggregates (Fig. 8) and to contribute to basement membrane assembly is consistent with observations that laminin binding to sulfatides (6) can seed basement membrane assembly via DG and utrophin and emphasizes the contribution of extracellular interac-tions of the DGC to this process. In our study, full-length DG and ␣/⌬ecto␤DG contain the cytoplasmic regions of ␤-DG responsible for interaction with Grb2 (growth factor receptor-bound protein 2), rapsyn, caveolin 3, and dystrophin/utrophin (15,(82)(83)(84). These regions appear to functionally complement those of the extracellular portion of the DG complex by regulating AChR aggregate morphology (Figs. 4 -6) and basement membrane assembly (Fig. 8). 
Our results raise the question of how extracellular domains of DG function in skeletal muscle in AChR aggregation in ways often attributed largely to direct or indirect interactions of AChRs with cytoskeletal proteins. Interactions of matrix proteins have the potential to introduce a level of extracellular protein-protein interactions that eventually signals into the cell interior via several mechanisms. These may include functions of glycosylated ␣-DG in signaling cross-talk with integrins; laminin bound to DG could bind via distinct sites to ␣6 or ␣7 integrins (7,57) in muscle to signal intracellularly. Other mechanisms could involve perlecan, a ligand for ␣-DG and a well known coreceptor for FGF2 (85,86); neuregulin-1, which regulates AChR synthesis and is a growth factor/ECM protein that is bound to heparan sulfate proteoglycans (87); or AChE, which binds to ␣-DG via perlecan (35,36) and interacts directly with MuSK to alter its distribution (50). Similar extracellular interactions may occur in central nervous system synapses where DG is expressed in neurons of the cerebral cortex and hippocampus (88) and has been implicated in long term potentiation (61) and in synaptic transmission in the retina (89). Conceivably, DG and the DGC may regulate the aggregation of neurotransmitter receptors by virtue of their stabilization in the plasma membrane in a fashion similar to that at NMJs. Further studies to elucidate the mechanisms of DG function in cell survival and synapse formation may contribute to our understanding of the muscle wasting and mental retardation associated with muscular dystrophies.
2018-04-03T03:14:23.493Z
2006-05-12T00:00:00.000
{ "year": 2006, "sha1": "df6ee382bd9b3d4e54cf7978d9a57981162af9a0", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/281/19/13365.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "75c2586d8e468038d115a2949826bc1b598e9982", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
55651922
pes2o/s2orc
v3-fos-license
The high-mass disk candidates NGC7538IRS1 and NGC7538S Context: The nature of embedded accretion disks around forming high-mass stars is one of the missing puzzle pieces for a general understanding of the formation of the most massive and luminous stars. Methods: Using the Plateau de Bure Interferometer at 1.36 mm wavelengths in its most extended configuration we probe the dust and gas emission at ~0.3", corresponding to linear resolution elements of ~800 AU. Results: NGC7538IRS1 remains a single compact and massive gas core with extraordinarily high column densities, corresponding to visual extinctions on the order of 10^5 mag, and average densities within the central 2000 AU of ~2.1x10^9 cm^-3 that have not been measured before. We identify a velocity gradient across the core in northeast-southwest direction that is consistent with the mid-infrared emission, but we do not find a gradient that corresponds to the proposed CH3OH maser disk. The spectral line data toward NGC7538IRS1 reveal strong blue- and red-shifted absorption toward the mm continuum peak position. The red-shifted absorption allows us to estimate high infall rates on the order of 10^-2 Msun/yr. Although we cannot prove that the gas will be accreted in the end, the data are consistent with ongoing star formation activity in a scaled-up low-mass star formation scenario. Compared to that, NGC7538S fragments in a hierarchical fashion into several sub-sources. While the kinematics of the main mm peak are dominated by the accompanying jet, we find rotational signatures from a secondary peak. Furthermore, strong spectral line differences exist between the sub-sources, which is indicative of different evolutionary stages within the same large-scale gas clump. Introduction The characterization of accretion disks around young high-mass protostars is one of the main unsolved questions in massive star formation research (e.g., Beuther et al. 2007a, 2009; Cesaroni et al. 2007; Kraus et al. 2010). The controversy arises around the difficulty to accumulate mass onto a massive protostar when it gets more massive than 8 Msun because the radiation pressure of the growing protostar may be strong enough to revert the gas inflow in spherical accretion scenarios (e.g., Kahn 1974; Wolfire & Cassinelli 1987). Different ways to circumvent this problem are proposed, the main two being (a) scaled-up disk accretion (e.g., Yorke & Sonnhalter 2002; Krumholz et al. 2009; Kuiper et al. 2010), partially requiring initial turbulent gas and dust cores (e.g., McKee & Tan 2003) and including ionizing radiation (e.g., Keto 2002) and magnetic fields (e.g., Peters et al. 2011), and (b) competitive accretion and potential (proto)stellar mergers at the dense centers of evolving massive (proto)clusters (e.g., Bonnell et al. 2004, 2007; Bally & Zinnecker 2005). Based on observations carried out with the IRAM Plateau de Bure Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). The data are available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/ Over recent years, much indirect evidence has been accumulated that massive accretion disks do exist. The main argument stems from massive molecular outflow observations that identify collimated and energetic outflows from high-mass protostars, resembling the properties of known low-mass star formation sites (e.g., Henning et al. 2000; Beuther et al. 2002; Wu et al. 2004; Zhang et al. 2005; Arce et al.
2007;López-Sepulcre et al. 2009). Such collimated jet-like outflow structures are only explainable with an underlying massive accretion disk driving the outflows via magneto-centrifugal acceleration. From a modeling approach, numeric simulations and analytic calculations of massive collapsing gas cores result in the formation of massive accretion disks as well (Yorke & Sonnhalter, 2002;Kratter & Matzner, 2006;Krumholz et al., 2009;Kuiper et al., 2010;Peters et al., 2011). Although alternative formation scenarios are proposed, there is a growing consensus in the massive star formation community that accretion disks should also exist in highmass star formation. However, it is still poorly known whether such massive disks are similar to their low-mass counterparts, hence dominated by the central protostar and in Keplerian rotation, or whether they are perhaps self-gravitating non-Keplerian entities. While indirect evidence for massive disks is steadily increasing, direct observational studies are largely missing. This dis-crepancy can mainly be attributed to the clustered mode of massive star formation, the typically large distances and the high extinction. Hence, spatially disentangling such structures is a difficult task. While several disk candidates exist around early B-stars (e.g., Cesaroni et al. 1997Cesaroni et al. , 2005Schreyer et al. 2002;Shepherd et al. 2001;Zhang et al. 2002;Chini et al. 2004;Kraus et al. 2010;Keto & Zhang 2010;Fallscheer et al. 2011), more massive O-star like systems rather show larger-scale toroid-like structures not consistent with classical Keplerian accretion disks (e.g., Beltrán et al. 2004;Beltrán et al. 2011;Beuther et al. 2005;Beuther & Walsh 2008;Sollins et al. 2005;Keto & Wood 2006;Cesaroni et al. 2007;Fallscheer et al. 2009). However, the nondetection of Keplerian structures around O-stars does not imply that they do not exist, it rather indicates that they are likely on smaller spatial scales hidden by the toroids. Therefore, penetrating more deeply into the central structures at the highest possible spatial resolution is the next step to go. The high-mass accretion disk candidates NGC7538IRS1 and NGC7538S: The source selection for such a project is driven by the scientific goals and the technical feasibility. The two disk candidates NGC7538IRS1 and NGC7538S combine the best of both worlds: On the one hand, they are two already wellstudied massive accretion disk candidates in different evolutionary stages at a still modest distance of ∼2.7 kpc (e.g., Sandell et al. 2003;Sandell & Wright 2010;Pestalozzi et al. 2004Pestalozzi et al. , 2006Moscadelli et al. 2009;Puga et al. 2010), and they are easily observable in a mosaic mode since their spatial separation is only of order ∼80 . On the other hand, with a R.A. of 23 hours and a Decl. of 61 degrees they are ideal Plateau de Bure Interferometer (PdBI) targets to be observed in long tracks resulting in the best achievable synthesized beams. Figure 1 presents a large-scale 1.2 mm continuum map on top of near-infrared K-band data (Sandell & Sievers, 2004;Puga et al., 2010) (see also Reid & Wilson 2005). NGC7538IRS1 has been extensively studied, and the central source is estimated to a mass of ∼30 M and a luminosity of ∼ 8 × 10 4 L (e.g., Willner 1976;Campbell & Thompson 1984;Pestalozzi et al. 2004). While Campbell (1984) and Sandell et al. (2009) report an ionized jet in north-south direction, Minier et al. (2000) and Pestalozzi et al. (2004Pestalozzi et al. 
( , 2009 present CH 3 OH maser observations indicative of an accretion disk almost perpendicular to the outflow. Partly different interpretations arise from mid-infrared continuum imaging (De Buizer & Minier, 2005): They detected elongated mid-infrared emission in northwestsoutheast direction aligned with a bipolar outflow reported in CO by Davis et al. (1998), and even earlier in NH 3 by Keto (1991) who also performed radiation transfer calculations for the region. This large-scale mid-infrared emission may stem from the inner walls of the outflow. The jet and outflow emission is interpreted from speckle data as due to a precessing jet (Kraus et al., 2006). On smaller scales, the mid-infrared emission appears elongated almost perpendicular to that outflow axis which De Buizer & Minier (2005) interprete as an inner accretion disk of approximate size of ∼ 900 AU. Klaassen et al. (2009) also identify a velocity gradient in dense gas in northeastsouthwest direction. Furthermore, the rare H 2 CO maser emission from this source (Forster et al., 1985;Pratap et al., 1992;Hoffman et al., 2003) is also consistent with a very young disk candidate. Surcis et al. (2011) present new CH 3 OH and H 2 O maser observations, and their data are also consistent with a rotating structure in northeast-southwest direction and an outflow opposite to that. Hutawarakorn & Cohen (2003) show the OH maser emission in this region. Figure 2 sketches the different axis and other features found in the literature. While maser, ionized gas and warm dust are well studied for this source, a good characterization of the dust and thermal gas emission was lacking. Recently, Qiu et al. (2011) observed the region with the Submillimeter Array (SMA) at 3 × 2 resolution in mm continuum and line emission, and they revealed 9 mm sources within a projected area of 0.35 pc. Compared to the blue-shifted absorption observed by Keto (1991) and Zheng et al. (2001) in the lowexcitation NH 3 lines that is indicative of outflowing gas motions, Qiu et al. (2011) detected first red-shifted absorption toward the main mm core which they interprete as signature of ongoing infall. Furthermore, the general structure of their proposed multiple outflows is consistent with the northwest-southeast outflow previously reported by, e.g., Keto (1991); Davis et al. (1998); Klaassen et al. (2011). Our data now resolve the region at again an order of magnitude higher spatial resolution, allowing us to study the central core in unprecedented detail. Fig. 1. Overview of the NGC7538 complex. The grey-scale presents the K-band image from Puga et al. (2010), and the contours show the single-dish 1.2 mm continuum data from Sandell & Sievers (2004). The colored stars and circles show the central positions and FWHM of the primary beam of the PdBI at 1.3 mm wavelength for NGC7538IRS1 and NGC7538S in the north and south, respectively. The contour levels are from 250 mJy beam −1 to 5.25 Jy beam −1 in steps of 500 mJy beam −1 . The beam of the 30 m observations and a scale-bar are shown at the bottom-left and right, respectively. NGC7538S is supposed to be younger than NGC7538IRS1 but also hosts CH 3 OH Class ii, H 2 O and OH maser emission (Kameya et al., 1990;Argon et al., 2000;Pestalozzi et al., 2006). 2. PdBI 1.36 mm continuum images toward NGC7538IRS1 and NGC7538S in the left and right panel, respectively. The contour levels start at 4σ values and continue in 8σ and 4σ steps for NGC7538IRS1 and NGC7538S (1σ values are 29 mJy beam −1 and 0.7 mJy beam −1 , respectively). 
Several potential disk and outflow axis reported in the literature are presented (Davis et al. 1998;De Buizer & Minier 2005;Sandell et al. 2009;Pestalozzi et al. 2004Pestalozzi et al. , 2009Sandell & Wright 2010, see Introduction for more details). The open stars, square, triangle and six-pointed star mark the positions of the OH, H 2 O, class ii CH 3 OH and H 2 CO masers (Argon et al., 2000;Kameya et al., 1990;Pestalozzi et al., 2006;Hoffman et al., 2003). A scale-bar and the synthesized beam are shown in both panels. The box zooms into the region around mm1 in more detail. The coordinates are relative to the phase centers given in sec. 2. About ∼ 80 south of NGC7538IRS1 ( Fig. 1), it roughly coincides with a far-infrared source with luminosity ∼ 1.5 × 10 4 L (Werner et al., 1979;Thronson & Harper, 1979) corresponding to an early B-star. Recently, Sandell & Wright (2010) and Wright et al. (2012) also report the detection of NGC7538S in deep mid-infrared IRAC Spitzer observations at wavelengths between 4.5 and 8 µm, as well as a weak cm continuum source likely stemming from a thermal jet. Sandell et al. (2003) resolved a 30000 AU rotating structure with a bipolar outflow emanating perpendicular to that by means of BIMA interferometric mm observations . Wright et al. (2012) resolved that structure into three separate mm sources. Recent interferometric observations (between ∼ 3 and 8 resolution) in different molecular line tracers largely confirm this picture (Sandell & Wright, 2010). However, their work indicated that many of the molecules are affected by the jet/outflow, and clear rotational signatures were hard to isolate. While the previous data are consistent with a rotating structure, the spatial resolution was not sufficient to analyze the disk candidate in detail. Figure 2 again sketches the main features found in the literature. The different ages of the two targets make them ideal candidates to investigate also disk evolutionary properties within the same observations. Observations The two sources were observed in two tracks -A and B configuration on January 26th, 2011, and February 10th, 2011, respectively -in two fields where each field was centered on NGC7538IRS1 and NGC7538S (see Fig. 1). The phase centers for the two fields were R.A (J2000.) 23h 13m 45. The continuum emission was extracted from broad band data obtained with the WIDEX correlator with four units and two polarizations covering the frequency range from 217.167 to 220.836 GHz. NGC7538IRS1 is such an extremely linerich source that barely any line-free region exists in the spectrum. Therefore, extracting a line-free continuum is difficult, and we produced our continuum map from the whole bandpass. Although there obviously is some line contamination in the final continuum image, the continuum level itself (> 1.8 Jy beam −1 ) is so extraordinarily high that the relative contribution from the lines is negligible. To get a quantitative estimate of the line contamination, we also produced a continuum image from only a very small line-free region (∼ 71 MHz) between the CH 3 CN lines. The peak flux in this image is only 1.1% below the peak flux in the continuum map produced from the whole bandpass. Obviously the line contamination is negligible and we use the map based on the full bandpass because of the lower rms noise. NGC7538S is less line-rich, and therefore, we could produce the continuum image from the line-free parts of the spectrum. 
A comparison of the continuum images for NGC7538S with and without line contamination shows that they are almost identical. Although NGC7538IRS1 exhibits more line emission, the previous comparison also indicates that including the lines in the NGC7538IRS1 continuum bandpath only marginally changes the real fluxes. The full bandpath spectral line data with a chemical analysis will be presented in a forthcoming paper. The 1σ continuum rms for NGC7538IRS1 and NGC7538S are 29 mJy beam −1 and 0.7 mJy beam −1 . The difference in rms can be explained by the fact that for sparse antenna interferometers like the PdBI, the rms is usually not the thermal rms but it is dominated by the side-lobes of the strongest source in the field. And since NGC7538IRS1 is far brighter than NGC7538S, also the rms for NGC7538I is significantly higher. To extract kinematic information, we put several highspectral resolution units with a nominal resolution of 0.312 MHz or 0.42 km s −1 into the bandpass covering the spectral lines listed in Table 1. The spectral line rms for 0.5 km s −1 wide spectral channels measured in emission-free channels is 8 mJy beam −1 and 7 mJy beam −1 for NGC7538IRS1 and NGC7538S, respectively. The v lsr for NGC7538IRS1 and NGC7538S are ∼ −57.3 km s −1 and ∼ −56.4 km s−1, respectively (Gerner et al. in prep., van der Tak et al. 2000;Sandell & Wright 2010). The data were inverted with a "robust" weighting scheme and cleaned with the clark algorithm. The synthesized beam of the final continuum and line data is ∼ 0.31 × 0.29 (PA 110 • ). Continuum emission Zooming in from the large-scale emission ( Fig. 1), Figure 2 presents the small-scale structure of the region in the 1.36 mm continuum emission at a spatial resolution of ∼ 0.3 or ∼ 800 AU. While NGC7538IRS1 remains a single source with peak flux in excess of 1.8 Jy beam −1 , NGC7538S is resolved into several sub-sources labeled mm1 to mm3. NGC7538S mm1 shows even additional fragmented substructure which we label as mm1a and mm1b. The sub-source mm1a is elongated in approximately the north-south direction, and it is likely that this elongation corresponds to an unresolved substructure again. Comparing the high-resolution mm continuum data with the near-infrared image by Puga et al. (2010), the mm peak in NGC7538IRS1 is clearly associated with the main infrared source IRS1, whereas the three mm peaks in NGC7538S have no near-infrared counterpart. However, NGC7538S mm1 has recently been detected by Spitzer at wavelengths between 4.5 and 8 µm (Sandell & Wright, 2010;Wright et al., 2012). It should also be noted that the 8 additional sources reported by Qiu et al. (2011) within an area of 0.35 pc are not detected by our higherresolution PdBI observations. This differences can be attributed to our smaller primary beam (FWHM of ∼ 22 ) as well as the lower brightness sensitivity one automatically achieves when going to higher spatial resolution (our 3σ continuum rms of 87 mJy beam −1 corresponds to an approximate brightness sensitivity of ∼24 K). Table 2 presents the measured peak and integrated fluxes of the sub-sources shown in Fig. 2. The integrated fluxes are measured within the 4σ contours. The single-dish 1.2 mm MAMBO data shown in Figure 1 exhibit peak fluxes of 5702 and 2872 mJy beam −1 for NGC7538IRS1 and NGC7538S, respectively. Comparing these numbers to the integrated fluxes we measure with the PdBI (Table 2), we find that toward NGC7538IRS1 only ∼ 48% of the flux is filtered out with the interferometer. Qiu et al. 
(2011) also measure with the SMA, at about an order of magnitude lower spatial resolution (3" x 2"), an integrated flux of 3.6 Jy, only ~20% higher than our fluxes measured at ~0.3" resolution. In comparison to that, toward NGC7538S approximately 90% of the single-dish flux is missing in the PdBI data. This large difference indicates that NGC7538IRS1 is extremely concentrated toward the central mm continuum peak whereas NGC7538S exhibits emission on much larger scales. Since NGC7538IRS1 has significant amounts of free-free emission (e.g., Pratap et al. 1992; Keto et al. 2008; Sandell et al. 2009), we correct the fluxes for that contribution in Table 2. As shown by Keto et al. (2008), several H II region models can fit the data, and the exact free-free contribution is hard to isolate. Here, we assume ~1000 mJy to be produced by the free-free emission; the rest is attributed to the dust emission. Recently, Wright et al. (2012) report the cm free-free fluxes from NGC7538S mm1a, and following their approach we correct for an 8 mJy free-free flux contribution in Table 2 as well. Assuming optically thin emission from dust following the standard approach by Hildebrand (1983), we can estimate gas masses and column densities at an assumed temperature. Because NGC7538IRS1 is a strong infrared source and hot core, following Qiu et al. (2011) we assume a dust temperature of 245 K for that source. NGC7538S is supposedly younger and colder, and we assume a dust temperature of 50 K for the corresponding sub-sources. Regarding the dust properties, we calculate the masses and column densities following Hildebrand (1983) (H83) on the one hand, and Ossenkopf & Henning (1994) (OH94) for thin ice mantles at densities of 10^6 cm^-3 on the other hand. The gas-to-dust ratio is taken as 186 following Draine et al. (2007) and Jenkins (2004). Depending on the dust properties, toward NGC7538S we find core masses between 4 and 38 Msun and column densities between 0.7 x 10^25 and 1.8 x 10^25 cm^-2, corresponding to visual extinctions on the order of 10^4 mag. While such extinctions are very high, similar values have been reported in the past at correspondingly high spatial resolution (e.g., Beuther et al. 2007c; Rodón et al. 2008). One should keep in mind that such high extinction values are only found at the highest spatial resolution achievable with interferometers. At lower resolution, the emission smears out and lower values are found. Regarding the core masses in NGC7538S, at first sight they do not appear extraordinarily high; however, considering that approximately 90% of the gas is filtered out on larger scales, we only observe the densest structure that is embedded in a much larger gas reservoir. The situation is considerably different for NGC7538IRS1 where excessively high column densities on the order of 10^26 cm^-2 are found (corresponding to visual extinctions above 10^5 mag), as well as core masses between 43 and 115 Msun (depending on the dust properties) within a projected size of ~2000 AU. To the authors' knowledge this is an extraordinary concentration of mass within small spatial scales and will be discussed in more detail in section 4.1.1. For comparison, we can also calculate the total gas masses of the two regions based on the single-dish data. As approximate clump sizes, we integrate the flux in the area within the 750 mJy beam^-1 contour in Figure 1. For NGC7538IRS1 and NGC7538S, we get integrated 1.2 mm fluxes of 20.9 and 6.2 Jy, respectively.
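As a rough illustration of the flux-to-mass conversion applied throughout this section, the sketch below implements the standard optically thin dust relation M_gas = R_gd S_nu d^2 / (kappa_nu B_nu(T_dust)). The distance of 2.7 kpc, the dust temperatures of 245 K and 50 K, and the gas-to-dust ratio of 186 are taken from the text; the 1.3 mm dust opacity and the example flux densities are round illustrative assumptions (the H83 and OH94 opacities used in the paper differ), so the printed numbers indicate only the order of magnitude, not the Table 2 values.

```python
# Minimal sketch (not the authors' code) of the optically thin dust mass estimate.
# kappa = 0.9 cm^2 per gram of dust at 1.3 mm is an assumed, OH94-like value.
import numpy as np

H = 6.626e-27      # erg s (Planck constant)
C = 2.998e10       # cm s^-1
K_B = 1.381e-16    # erg K^-1
PC = 3.086e18      # cm per parsec
M_SUN = 1.989e33   # g
JY = 1.0e-23       # erg s^-1 cm^-2 Hz^-1

def planck(nu_hz, t_k):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu_hz**3 / C**2 / (np.exp(H * nu_hz / (K_B * t_k)) - 1.0)

def gas_mass_msun(flux_jy, dist_pc, t_dust_k, nu_ghz=220.0, kappa=0.9, gas_to_dust=186.0):
    """Gas mass (in solar masses) from an optically thin (sub)mm flux density."""
    nu = nu_ghz * 1.0e9
    d_cm = dist_pc * PC
    m_dust = flux_jy * JY * d_cm**2 / (kappa * planck(nu, t_dust_k))   # grams of dust
    return gas_to_dust * m_dust / M_SUN

# Illustrative round-number fluxes at d = 2.7 kpc (not the measured values):
print(gas_mass_msun(2.0, 2700.0, 245.0))   # a ~2 Jy core at 245 K -> tens of Msun, like IRS1
print(gas_mass_msun(0.05, 2700.0, 50.0))   # a ~50 mJy sub-source at 50 K -> a few Msun, like NGC7538S
```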
Following the same approach as above, we can calculate the gas masses for the two dust models H83 and OH94. On these large scale we use the temperature estimates from Sandell & Sievers (2004) who estimate 75 and 35 for NGC7538IRS1 and NGC7538S. With these numbers we get total gas masses for NGC7538IRS1 and NGC7538S of 2512 and 1757 M (H83) or 1011 and 706 M (OH94), respectively. Spectral line emission All spectral lines listed in Table 1 were detected toward NGC7538IRS1 and most of them also toward NGC7538S. While also velocity gradients are identified in both regions, the detailed spectral line signatures between NGC7538IRS1 and NGC7538S are considerably different, in particular we detect strong absorption signatures toward NGC7538IRS1 but not toward NGC7538S. NGC7538IRS1 Figure 3 presents the CH 3 CN(12 k − 11 k ) (0 ≤ k ≤ 3) spectra toward the mm continuum peak as well as toward a position approximately 1.1 offset south of the continuum peak. While the offset spectrum is a typical CH 3 CN emission spectrum, the spectrum toward the continuum peak is dominated by absorption features. While absorption features in interferometric data should always be taken with caution because missing flux problems can also artificially produce such features, the fact that we see the absorption only toward the continuum peak but not to-ward an offset of only ∼ 1.1 is a strong indicator for the absorption being a real feature. While Qiu et al. (2011) reported only redshifted absorption in dense gas tracers at a spatial resolution of ∼ 3 × 2 , and Keto (1991) and Zheng et al. (2001) found only blue-shifted absorption in lower-density NH 3 lines, Figure 3 already shows that at the high-spatial resolution of our observations, the dense gas shows blue-and red-shifted components simultaneously. To identify potential rotational symmetries in these data, Figure 4 presents the 1st and 2nd moment maps (intensityweighted peak velocities and line-widths) of our highest excited CH 3 CN(12 6 − 11 6 ) line (E u /k = 326 K). Although these moment maps are affected by the absorption close to the continuum peak, both maps clearly identify a velocity gradient in northeast-southwest direction, consistent with the previous lower-resolution Submillimeter Array observations in OCS and SO 2 by Klaassen et al. (2009). While the 1st moment map exhibits a blue-red velocity gradient extending about 10 km s −1 which is perpendicular to the northwest-southeast outflow structure reported by Davis et al. (1998) and Qiu et al. (2011), it is interesting that also the line-width map shows a significant line- width increase close to approximately this axis while the linewidths are considerably smaller northwest and southeast of that. The 2nd moment map in Figure 4 gives visually the impression of a disk-like structure, however, again this needs to be taken with caution because that signature can be affected by the absorption of the gas against the strong continuum. Figures 5 and 6 now show the position-velocity diagrams along the northeast-southwest cut outlined in Figure 4. The typical hot molecular core and high-density gas tracers CH 3 CN and HCOOCH 3 exhibit absorption signatures that are dominated by a red-shifted component but show some blue-shifted absorption as well. This signature appears rather independent of the excitation temperature because the shown k = 2 and k = 6 CH 3 CN(12 k − 11 k ) components cover a range in excitation temperatures E u /k of 250 K (see Table 1). 
As expected, the more optically thin isotopologues CH 13 3 CN does not exhibit such absorption features. While the red-shifted emission part of these spectra is consistent with a typical Keplerian rotation structure, the blue-shifted part of the emission spectrum does not show such a signature. Figure 5 also presents a Keplerian curve for a 30 M central object (see section 1), again showing the reasonable agreement on the red part of the spectrum but not on the blue side. It is also interesting to note that the higher excited CH 3 CN(12 6 −11 6 ) shows on average a better agreement with the Keplerian curve than the lower excited CH 3 CN(12 2 − 11 2 ) line. This is likely due to the fact that the higher excited line traces gas closer to the star which hence exhibits higher velocities. In contrast to that, one can also argue that the non-correspondence of the blue part of the spectrum with Keplerian rotation is not much of a surprise because Keplerian rotation implies that the whole structure is dominated by the central object. This is clearly not the case considering that the central star should have a mass of ∼30 M (see Introduction) and the gas mass of the central struc-ture traced by the mm continuum emission is of that order or even higher as well ( Table 2). The position-velocity diagrams presented in Figure 6 show slightly different absorption signatures. While almost all of them show red-shifted absorption as well (except CH 2 CO), the blueshifted absorption is at least as strong as that, if not stronger. From a blue-shifted perspective, the lowest-excited H 2 CO line shows a particularly interesting feature because in addition to the blue-shifted component at around -59 km s −1 , it exhibits another blue-shifted component at even more negative velocities around -65 km s −1 . A similar feature was recently also reported in the 1 and 2 cm H 2 CO lines (AAS poster by Yuan et al. 2011). A different way to investigate the various absorption features is presented in Figure 7 where we show the spectra of the various lines extracted directly toward the mm continuum peak position. The absorption features discussed in the paragraphs above are exactly recovered there. Many of the spectra clearly show a double-dibbed signature red-and blue-shifted around the v lsr . To check whether the additional higher-velocity absorption component in the H 2 CO line is real absorption against the continuum or rather due to missing flux on larger scales, similarly as shown for CH 3 CN in Figure 3, we also extracted the H 2 CO spectrum toward the position 0.2 /−1.1 to the south. And like for CH 3 CN, the H 2 CO spectrum exhibits a pure and "normal" emission spectrum at that position. This implies that the additional absorption at ∼ −65 km s −1 should be real. Implications of the observed redand blue-shifted absorption toward NGC7538IRS1 will be discussed in section 4.2.1. NGC7538S Toward the second region NGC7538S we clearly detect all CH 3 CN lines, as well as the spectral lines from OCS, HC 3 N, H 2 CO and CH 3 OH. In contrast to that, HCOOCH 3 , NH 2 CHO and CH 2 CO are barely detected. There is only a tentative detection of the latter two molecules toward mm2. Regarding the clearly detected molecules and spectral lines, it is interesting that all of them are detected toward the two mm sub-peaks mm1 and mm2 but none of them toward the third mm peak mm3. This already indicates peculiar chemical and evolutionary differences between mm1 and mm2 on the one side and mm3 on the other side. 
Furthermore, within mm1 we always detect mm1a in the spectral line emission but no molecular line is found toward mm1b. A detailed spectral and chemical analysis of all the other broadband line data we observed simultaneously will be presented in a forthcoming paper. Here we concentrate on the kinematics of the mm peaks mm1 and mm2. Figure 8 presents the 1st and 2nd moment maps (intensity weighted peak velocities and line widths) of respective lines toward that region. While the lower excited lines like H 2 CO(3 2,2 − 2 2,1 ) or CH 3 OH(4 2,2 − 3 1,2 ) show also a bit more extended emission, molecular emission from high-density tracers like CH 3 CN or HC 3 N are largely confined to mm1, mm2 and their close environment. While there is no obvious velocity difference between mm1 and mm2, toward both sub-peaks we detect in all lines velocity gradients across the mm continuum peaks, for mm1 almost north-south and for mm2 in northeast-southwest direction. Toward mm1 it is interesting to note that the molecular emission to the north is confined almost to the same region as the mm continuum emission whereas the molecular line data extend significantly outside the southern 4σ contours of the mm continuum emission. Since the main mm1a peak is close to the northern edge of that structure, the velocity structure of the gas is rather asymmetric with respect to that peak. The most blue-shifted gas almost peaks toward mm1a and the further one goes south, the more redshifted the gas gets. For the higher density lines of CH 3 CN(12 k − 11 k ), OCS(18-17) and HC 3 N(24-23), the line-width or 2nd moment peak is also not exactly toward the main mm continuum peak but a little bit to the south, almost at the tip of the central, north-south elongated mm continuum contour. As discussed in section 3.1, we cannot properly resolve a secondary component there, nevertheless, it appears likely that this slightly elongated structure will resolve into a binary system at even higher spatial resolution. Figure 9 presents position velocity cuts of selected lines along the axis marked for mm1 in Figure 8. While the emission appears relatively symmetric around the v lsr , as already mentioned above, the velocity is not symmetric around the main mm peak mm1a which is put at offset 0. Even if one shifts the center by ∼ 0.25 south toward the peak of the 2nd moment maps, it still does not appear as a symmetric position velocity cut. The data clearly confirm that the most blue-shifted gas is centered on mm1a and the red-shifted emission continuously moves to the south. Furthermore, the pv-diagrams do not exhibit any signature of Keplerian rotation. These signatures indicate that the observed velocity structure from mm1 unlikely stems from rotation. Since the jet axis is aligned approximately in northwestsoutheast direction (Figure 2), not much offset from the main velocity gradient observed here, it may well be that the velocity gradient is strongly influenced by the central jet and outflow. In comparison to mm1, Figure 10 shows the positionvelocity cuts through mm2 along the axis shown in Figure 8. Since mm2 is much smaller in spatial extend and only barely resolved by our observations, the pv-diagrams also exhibit less prominent signatures of velocity gradients. Nevertheless, a velocity gradient is identifiable, most prominently in the OCS(18-17) line. 
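As an aside on methodology, the 1st and 2nd moment maps on which these velocity gradients and line-width peaks are identified reduce to intensity-weighted averages along the spectral axis. The sketch below shows the basic operation on a generic (velocity, y, x) cube with a simple intensity threshold; it is a schematic illustration and not the actual reduction used for Figures 4 and 8 — the channel width, clipping level and synthetic test line are arbitrary choices.

```python
import numpy as np

def moment_maps(cube, velocities, clip=0.0):
    """
    cube:       spectral cube of shape (n_chan, ny, nx), e.g. in Jy/beam
    velocities: channel velocities of shape (n_chan,), e.g. in km/s
    clip:       intensities below this value are ignored (simple threshold)

    Returns (mom0, mom1, mom2): integrated intensity, intensity-weighted
    velocity and intensity-weighted line width (dispersion).
    """
    data = np.where(cube > clip, cube, 0.0)
    dv = np.abs(np.gradient(velocities))              # channel widths
    w = data * dv[:, None, None]

    mom0 = w.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        mom1 = (w * velocities[:, None, None]).sum(axis=0) / mom0
        var = (w * (velocities[:, None, None] - mom1) ** 2).sum(axis=0) / mom0
    mom2 = np.sqrt(var)
    return mom0, mom1, mom2

# Minimal usage example with a synthetic Gaussian line centred at -56.4 km/s
v = np.arange(-70.0, -45.0, 0.5)                      # 0.5 km/s channels, as in the data
cube = np.exp(-0.5 * ((v[:, None, None] + 56.4) / 1.5) ** 2) * np.ones((1, 4, 4))
m0, m1, m2 = moment_maps(cube, v, clip=0.05)
print(m1[0, 0], m2[0, 0])                             # ~ -56.4 km/s and ~1.4 km/s
```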
While the structure is too small for a more detailed analysis, it is interesting to note that the OCS(18-17) data are at least consistent with Keplerian rotation around a ∼1 M central object, whereas the expected rotation curve of a more massive object represents the data significantly worse. While one should not take these masses at face-value, they nevertheless indicate that the central mass in NGC7538S mm2 is significantly lower than that in NGC7538IRS1. A different way to investigate the spectral structure of the two sources is again via directly looking at the spectra toward the peak positions (Figures 11 and 12). While the spectra toward mm2 exhibit more or less Gaussian shapes around the v lsr , this is not the case for the spectra extracted toward mm1a. The mm1a spectra are strongly dominated by the blue-shifted gas which was already identified in pv-diagrams, but we see an additional red-shifted component that is separated by a little fluxdepression (not absorption) around the v lsr . This will be discussed in more detail in section 4.2.2. Fig. 11. CH 3 CN(12 k − 11 k ) spectra (0 ≤ k ≤ 3) toward NGC7538S mm1a (black) and mm2 (red). NGC7538IRS1 While fragmented sub-sources like those in NGC7538S have been observed regularly with interferometers toward highmass star-forming regions (e.g., Beuther et al. 2007b;Zhang et al. 2009), the very massive, centrally condensed and unresolved source NGC7538IRS1 is peculiar and less typical (e.g., Bontemps et al. 2010). In particular, the fact that one observes a near-infrared source toward that position but at the same time has column densities in excess of 10 26 cm −2 (corresponding to visual extinctions in excess of 10 5 mag) is surprising. This seems to indicate that the outflow/jet from the source must be very close to the line of sight allowing the infrared radiation to escape through the outflow cavity. This is reminiscent to similar sources like W3IRS5 or G9.62+0.19 where also the infrared sources are detected in spite of high column densities because of the alignment of the outflow close to the line of sight (e.g., Rodón et al. 2008;Hofner et al. 2001;Linz et al. 2005). Furthermore, it is astonishing how much mass is concentrated in a very small area in NGC7538IRS1. Table 2 and Figure 2 show that at least more than 40 and potentially even more than 100 M are concentrated within a source with projected diameter of approximately 2000 AU. To the authors knowledge, there is no other region known with so much mass in such a small area. Assuming 50 M within a sphere of diameter of ∼2000 AU, this corresponds to a mean H 2 density of ∼ 2.1 × 10 9 cm −3 which is also high compared to other star-forming regions. It is also about 2 orders of magnitude larger than the average densities derived by Qiu et al. (2011). This difference can largely be attributed to the much smaller size of the core in the PdBI data (a factor 3 in radius) as well as different assumptions in the mass calculations (we adopted a slightly different free-free contribution and gas-to-dust ratio). Puga et al. (2010) report an age spread between 0.5 and 2.2 Myrs for the surrounding infrared cluster whereas the various signatures of ongoing star formation toward the central mm and infrared source (e.g., maser emission, outflows) indicate that the central and most massive object is still in an active star formation process. It appears that in that region the most massive star forms last compared to the lower-mass population, similar to other studies like, e.g., Kumar et al. 
(2006) or Wang et al. (2011). Furthermore, the strong concentration of mass within a single object without much further fragmentation (except those on larger scales as reported by Qiu et al. 2011, see also Bontemps et al. 2010) is consistent with a scaled-up low-mass star forma- Fig. 12. CH 3 CN(12 5 − 11 5 ) and HC 3 N(24 − 23) spectra toward NGC7538S mm1a and mm2. tion scenario for the formation of high-mass stars (e.g., McKee & Tan 2002, 2003Krumholz et al. 2007Krumholz et al. , 2009Kuiper et al. 2010). NGC7538S The substructure one finds in NGC7538S is indicative of hierarchical fragmentation on different scales. While the singledish mm continuum data show only one large-scale gas clump with a projected diameter of ∼0.5 pc (e.g., Figure 1, Sandell & Sievers 2004;Reid & Wilson 2005), first interferometric observations already revealed an elongated gas clump with an extend of ∼30000 AU∼0.15 pc that showed velocity signatures indicating rotation. Recently, Wright et al. (2012) resolved that elon-gated structure into three mm continuum sources which we confirm here at even higher spatial resolution. Our new data now also indicate that mm1 splits up likely in ≥3 sources forming a trapezium-like system (e.g., Ambartsumian 1955;Megeath et al. 2005;Rodón et al. 2008). The hierarchical fragmentation observed in NGC7538S resembles the structures recently discussed by Zhang et al. (2009) and Wang et al. (2011) for the infrared dark cloud G28.34, although on projected smaller spatial scales in NGC7538S. Following the simple toy-model outlined in Beuther et al. (2012), in a cluster-forming scenario with a typical Kroupa (2001) initial mass function and a star formation efficiency of ∼30%, one needs approximately a 1000 M initial gas clump to form a cluster with at least one 20 M high-mass stars. As estimated in section 3.1, NGC7538S fulfills that criterium (and NGC7538IRS1 even more), and while mm2 supposedly does not form a high-mass star but rather a low-to intermediate-mass object (see section 3.2.2), the higher gas mass (Table 2) and the other high-mass star formation indicators discussed in the introduction and outlined in Figure 2 indicate that mm1 likely forms a high-mass star within this still very young cluster-forming region. NGC7538IRS1 The exceptional red-and blue-shifted absorption features presented in section 3.2.1 give various insights into the physical processes around that source. As discussed in section 4.1.1, the jet/outflow from this source has to be oriented approximately along the line of sight. Therefore, blue-shifted absorption features against the continuum peak position should be associated with expanding motions from the jet/outflow. Also the fact that we observe in the line with the lowest excitation temperature (H 2 CO(3 0,3 − 2 0,1 ) with E u /k ∼ 21 K, see also Table 1) an additional absorption feature at even more blue-shifted velocities is consistent with this picture. The lower-excited line traces colder gas further away from the source, and many outflows are known to exhibit Hubble-like velocity structure where the velocity increases with distance from the source (e.g., Arce et al. 2007). Hence colder gas further outside should show absorption at velocities further blue-shifted than closer to the source. In contrast to that, the red-shifted absorption is indicative of infalling gas. Following the approach outlined in Qiu et al. 
(2011), assuming a spherical infall geometry one can estimate mass infall rates 1 according toṀ in = 4πr 2 ρv in whereṀ in and v in are the infall rate and infall velocity, and r and ρ the core radius and density. The latter two values are ∼ 1000 AU and ∼ 2.1 × 10 9 cm −3 , respectively (see section 4.1.1). As infall velocity we use 2.3 km s −1 which corresponds to the difference between the peak of the red-shifted absorption at ∼ −55 km s −1 and the v lsr at around ∼ −57.3 km s −1 (Gerner et al. in prep, van der Tak et al. 2000). With these numbers, we derive an infall rate estimate ofṀ in ∼ 7 × 10 −2 M yr −1 . This is approximately a factor 20 larger than the rate estimated by Qiu et al. (2011). This difference is due to the higher spatial resolution of our data where we resolve the core on smaller scales (radius of 1000 AU here compared to 3000 AU in Qiu et al. 2011) which additionally results in higher average densities of the central core (see section 4.1.1). Considering that the accretion does not occur in a spheri-cal mode over 4π but rather along a flattened disk structure with a solid angle of Ω, the actual disk infall ratesṀ disk,in should scale likeṀ disk,in = Ω 4π ×Ṁ in . Based on the simulations by Kuiper et al. (2012) and R. Kuiper (priv. comm.), such outflow covers approximately 120 degree opening angle and the disk 60 degree (to be doubled for the north-south symmetry). Since the opening angle does not scale linearly with the surface element, full integration results in ∼50% or ∼ 2π of the sphere being covered by the disk. This results in disk infall rates ofṀ disk,in ∼ 3.5 × 10 −2 M yr −1 , still very high and in the regime of accretion rates required to form high-mass stars (e.g., Wolfire & Cassinelli 1987;McKee & Tan 2003). Although we cannot prove that the gas falls in that far that it can be accreted onto the star (and does not get reverted by the innermost radiation and outflow pressure), such high infall rates should be a pre-requisite to allow accretion even when the central high-mass star has ignited already (e.g., Keto 2003;Kuiper et al. 2010Kuiper et al. , 2011). An additional caveat arises from the potential contribution of the accretion luminosity to the total luminosity of the region. If one used the infall rates as actual accretion ratesṀ acc and estimated the accretion luminosity L acc via the classical L acc = GM * Ṁacc R * (with G the gravitational constant, M * and R * the estimated stellar mass of 30 M and a stellar radius following Hosokawa & Omukai 2009), one would derive unreasonably high accretion luminosities in excess of the measured luminosity. Therefore, some parameters in this equation have to be different. Most likely this is the accretion rate because not all gas will fall on the star but a large fraction will likely be expelled again by the energetic outflow. Estimating that ratio is out of the scope of this paper. Nevertheless, the data indicate that a significant fraction of the measured luminosity may still stem from the accretion processes. As outlined in the introduction, the disk orientation in this region has been subject to intense discussion. Although the 1st moment map of NGC7538IRS1 is distorted from the absorption toward the peak, our data clearly support the orientation of the disk along a northeast-southwest orientation that was also proposed by De Buizer & Minier (2005), Klaassen et al. (2009) or Surcis et al. (2011). 
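As a numerical cross-check of the infall-rate estimate above, the spherical expression Ṁ_in = 4πr²ρv_in can be evaluated directly from the quoted numbers (r ≈ 1000 AU, n(H2) ≈ 2.1 × 10^9 cm^-3, v_in ≈ 2.3 km s^-1). A minimal sketch follows, assuming the mass density is simply ρ = 2 m_H n(H2); with that assumption it recovers the quoted ~7 × 10^-2 M⊙ yr^-1, and the ~3.5 × 10^-2 M⊙ yr^-1 disk value when only half the solid angle is used.

```python
import numpy as np

# cgs constants and unit conversions
mH, Msun = 1.6726e-24, 1.989e33          # g
AU, km, yr = 1.496e13, 1.0e5, 3.156e7    # cm, cm, s

def infall_rate(r_au, n_h2, v_kms, mu=2.0, solid_angle_fraction=1.0):
    """
    Infall rate  Mdot = (Omega/4pi) * 4 pi r^2 rho v_in  in Msun/yr.

    r_au  : core radius in AU
    n_h2  : H2 number density in cm^-3
    v_kms : infall velocity in km/s
    mu    : mass per H2 molecule in units of m_H (2.0 = H2 only, assumed here)
    solid_angle_fraction : Omega/(4 pi); 1.0 for spherical infall, ~0.5 for a disk
    """
    rho = mu * mH * n_h2
    mdot = solid_angle_fraction * 4.0 * np.pi * (r_au * AU) ** 2 * rho * (v_kms * km)
    return mdot * yr / Msun

print(infall_rate(1000.0, 2.1e9, 2.3))                             # ~7e-2 Msun/yr
print(infall_rate(1000.0, 2.1e9, 2.3, solid_angle_fraction=0.5))   # ~3.5e-2 Msun/yr
```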
We do not find signatures of rotational motion along the more east-west oriented structure that was proposed as a disk from CH 3 OH maser observations (Pestalozzi et al., 2004(Pestalozzi et al., , 2009). However, we cannot exclude that the dense gas northeastsouthwest oriented emission and the east-west CH 3 OH masers belong to the same torus-disk structure as suggested by Surcis et al. (2011). NGC7538S The spectral line signatures in NGC7538S vary considerably among the three main sub-sources. While mm1 and mm2 are strong line emitters, mm3 shows no line emission in any of the discussed lines of this project. Contrary to that discrepancy, the continuum emission from mm2 and mm3 is very similar in size, column density and mass. Therefore, it is most likely that the spectral line differences are real chemical differences among the two sub-sources mm2 and mm3. It is tempting to interprete that these differences are due to different evolutionary stages. While mm2 shows also rotational signatures and is already a star-forming core, mm3 may well still be in a younger and starless phase. In this picture, sources that are separated by less than 10000 AU and that are embedded within the same large-scale gas clump may not evolve coeval at all. What are the physical rea-sons for this evolutionary differences? Unfortunately, our data do not allow us to draw conclusions on that point. The additional blue-redshifted gas components visible in the spectra toward mm1a (Figures 11 and 12) indicate that while the pv-diagrams (Figure 9) are dominated by the larger structure encompassing also the dust elongation toward the south, there exist additional velocity structure toward the peak mm1a. Although we do not spatially resolve that substructure, it may well stem from a smaller embedded disk centered on mm1a which is also the likely driving source of the jet in that region (Fig. 2). Since the continuum source mm1 is already resolved in at least two sub-sources (mm1a and mm1b), and mm1a is elongated indicating the existence of an additional source, it is likely that mm1 hosts a multiple system where large-scale kinematic structures are present (best visible in the pv-diagram in Figure 9) as well as potential small-scale rotational structure around individual sub-sources only identified in the spectra (Figures 11 and 12). Comparing the larger almost north-south velocity gradient in mm1 with the orientation of the jet in approximately northwest-southeast direction, it appears that the dense gas kinematics in mm1 are strongly influenced by the jet. This makes the identification of rotational signatures even harder. It is likely that higher spatial resolution as well as spectral lines sensitive only to the innermost region around the central protostar are needed to disentangle the rotational structure around mm1a from the kinematic signatures caused by the jet. Going to larger spatial scales, Sandell et al. (2003) already identified a velocity gradient across the whole 30000 AU structure that encompass the three regions mm1 to mm3 approximately along the connecting axis of the sub-sources. Interestingly, Sandell et al. (2003) find that the large-scale rotation is consistent with Keplerian motion. Since the kinematics around mm1 appear to be dominated by the jet, Keplerian signatures cannot be expected for that subregion. The situation is less obvious for mm2 ( Figure 10) where the velocity structure at least does not disagree with Keplerian rotation. 
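For reference, the Keplerian comparisons invoked here — a 30 M⊙ central object for NGC7538IRS1 and ~1 M⊙ for mm2 — amount to overplotting v = v_lsr ± sqrt(GM/r) on the position-velocity diagrams. A minimal sketch of such a curve is given below; the inclination factor sin i is an assumed parameter (set to 1 here), so the values should be read as schematic and not as the specific curves overplotted in the figures.

```python
import numpy as np

G = 6.674e-8                                   # cgs
Msun, AU, km = 1.989e33, 1.496e13, 1.0e5

def keplerian_pv(offsets_au, m_star_msun, v_lsr_kms, sin_i=1.0):
    """
    Line-of-sight Keplerian velocity at projected offsets from the central
    object (positive offsets taken as red-shifted, negative as blue-shifted).
    """
    r = np.abs(offsets_au) * AU
    v_kep = np.sqrt(G * m_star_msun * Msun / r) / km    # km/s
    return v_lsr_kms + np.sign(offsets_au) * v_kep * sin_i

offsets = np.array([-2000.0, -1000.0, -500.0, 500.0, 1000.0, 2000.0])  # AU
print(keplerian_pv(offsets, 30.0, -57.3))   # NGC7538IRS1, 30 Msun central object
print(keplerian_pv(offsets,  1.0, -56.4))   # NGC7538S mm2, ~1 Msun central object
```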
Regarding the alignment of axis, the proposed rotational axis of the large-scale toroid (Sandell et al., 2003) and the structure around mm2 are approximately aligned whereas the jet-axis dominating mm1 is almost perpendicular to that. Unfortunately, such low-number statistics do not allow us to derive further conclusions from that. Conclusions Very-high-resolution mm continuum and spectral line observations of the two high-mass disk candidates NGC7538IRS1 and NGC7538S reveal intriguing information about the small-scale morphology and kinematics of these two regions. NGC7538S appears as a relatively typical source that fragments down to the smallest resolvable scales. The large-scale single-dish gas clump forms an elongated torus of ∼30000 AU (Sandell et al., 2003) that fragments into three cores with separations on the order of 5000 AU. At even higher spatial resolution, these cores show additional substructure and the most massive one fragments even further. These data are consistent with hierarchical fragmentation. While the kinematics of the main mm peak mm1 appears to be strongly influenced by the jet/outflow emanating from the source, a spectrum extracted toward the central peak mm1a is indicative of additional unresolved rotational motions. Higher-resolution data are needed to resolve that. The spectral lines toward mm2 also exhibit a velocity gradient, and although barely resolved, the data are consistent with Keplerian rotation around a low-to intermediate-mass object. Therefore, in NGC7538S we are witnessing the formation of a very young cluster where the sources within mm1 have the potential to form a high-mass star at the end of the evolution. An additional interesting feature is that mm1 and mm2 are strong spectral line emitters whereas mm3 is not. While mm2 and mm3 appear very similar in the continuum emission, the strong diversity in the spectral lines indicate different evolutionary stages. Hence, even within areas of ∼10000 AU diameter, we find cores that are likely not evolving coeval. Determining physical reasons for such discrepancies is beyond the scope of this paper. NGC7538IRS1 remains a single source even at ∼800 AU spatial resolution (∼ 0.3 ). This is even more surprising considering that the source is embedded in an already existing nearinfrared cluster. NGC7538IRS1 has extremely large gas and dust column densities corresponding to visual extinction values on the order of ∼ 10 5 mag. The fact that we still see the central source in the infrared implies that the jet/outflow from that region should be aligned closely to the line of sight allowing us to glimpse through the outflow cavity close onto the central source. The central 2000 AU around the source contain a large gas mass on the order of 50 M , implying central average densities in the regime of 10 9 cm −3 . Since the position-velocity diagrams of NGC7538IRS1 are distorted by the absorption, interpretation of kinematic signatures are more difficult. Nevertheless, we clearly identify a velocity gradient in northeast-southwestern direction, consistent with the proposed mid-infrared disk emission orientation (De Buizer & Minier, 2005) and perpendicular to the outflow axis. Our data do not support the proposed rotational axis based on CH 3 OH maser emission (Pestalozzi et al., 2004(Pestalozzi et al., , 2009) that is inclined to our observed axis by approximately 60 deg. 
At ∼0.3 spatial resolution, almost all observed spectral lines reveal strong absorption signatures toward the peak of the mm continuum emission (that coincides within the errors with IRS1) in NGC7538IRS1. While some lines, in particular the lower excitation temperature lines like those of H 2 CO appear to be dominated by blue-shifted absorption indicative of outflowing gas, the higher-excitation and higher-density lines exhibit clear redshifted absorption that has to be due to infalling gas. Since the jet/outflow is supposed to be aligned along the line-of-sight, it is no surprise that infalling and outflowing gas are observed at the same spatial position. Estimated mass infall rates are very high, on the order of 10 −2 M yr −1 . Although we cannot proof that the gas will continue to be accreted by the central star, nevertheless, the conditions are sufficient to allow accretion still during that already luminous and active phase of the protostellar evolution. Combining the large infall rates with the fact of barely any fragmentation of the gas and dust core, these data are consistent with high-mass star formation proceeding in a scaled-up version of low-mass star formation. While the presented data already reveal many new insights for both regions, significant information is still missing. In particular, the proposed accretion disk signatures for both sources -NGC7538IRS1 and NGC7538S mm1 -are "contaminated" by absorption and jet signatures, respectively. To overcome these issues, one likely needs to resort to even higher excited lines that are neither absorbed by the envelope nor emitted by the outflowing gas.
2012-05-24T14:18:09.000Z
2012-05-24T00:00:00.000
{ "year": 2012, "sha1": "aa499cacd8ff53275254ff82d519f844b1a06595", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2012/07/aa19128-12.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "aa499cacd8ff53275254ff82d519f844b1a06595", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221887666
pes2o/s2orc
v3-fos-license
Acrofacial purpura and necrotic ulcerations in COVID‐19: a case series from New York City Acrofacial purpura and necrotic ulcerations in COVID-19: a case series from New York City Dear Editor, As COVID-19 continues to spread worldwide, the prognostic significance of its cutaneous manifestations has been increasingly scrutinized, including that of retiform purpura. Additionally, an increased incidence of thromboembolism has been seen in COVID-19, and histopathologic evaluation of retiform purpura in COVID-19 patients demonstrated thrombotic vasculopathy suggestive of a hypercoagulable state. To better characterize purpura and necrotic ulcerations in hospitalized COVID-19 patients and examine incidence of systemic coagulopathy in this population, we performed a retrospective review of patients seen within a tertiary care center during peak incidence of COVID-19 in New York City. After IRB approval, we reviewed patient charts for whom dermatology and wound care were consulted at NYU Tisch and Bellevue Hospital from March 1 to May 1, 2020. Over 3,800 patients were hospitalized for COVID-19 during this time. Inclusion criteria consisted of positive SARS-CoV-2 PCR (severe acute respiratory syndrome coronavirus polymerase chain reaction) and presence of purpura and/or necrotic ulceration. Salient laboratory values and clinical outcomes were documented (Table 1). In an attempt to exclude typical hospital-acquired sacral pressure ulcers, patients solely with sacral purpura/necrosis were excluded. We identified 21 PCR-positive COVID-19 patients with purpuric and/or necrotic ulcerations on the ears, face, distal extremities, and/or genitalia (Fig. 1). Fourteen of 21 patients had multiple sites of involvement including eight patients who also had sacral ulcers. In 17/21 patients, sites in direct contact with medical devices including nasal cannula, endotracheal tube, urinary catheter, or pulse oximeter were involved; devices were in place for a range of 2–30 days (median 11 days) at the time of dermatologic evaluation, and time between hospitalization and first identification of skin manifestations ranged from 2 to 33 days (median 19 days). Case age varied greatly and was younger overall (25–88 years, median age 56) than in prior reports of retiform purpura in COVID19. Only 3/21 patients were female. Most patients were critically ill; 19/21 required invasive mechanical ventilation and 18/21 required vasopressors within 2 weeks of lesion onset. All patients were intermittently proned. In terms of systemic hypercoaguability, five patients developed deep vein thromboses and one experienced myocardial infarction. Sixteen developed acute kidney injury, possibly related to renal microthrombosis. Therapeutic anticoagulation was initiated in 16/21 (76%) for a thrombotic event or elevated D-dimer: 13 prior to the recognition of cutaneous findings, while the remainder were transitioned from prophylactic to therapeutic doses of anticoagulation after cutaneous eruptions were noted. Recent reports document a high incidence of coagulopathic events in COVID-19. While the exact pathomechanism remains unclear, direct invasion of endothelial cells by SARSCoV-2 virus and complement-mediated endothelial injury may promote a microthrombotic syndrome with potential for cutaneous involvement. 
In our review of 21 patients, we demonstrate a propensity for acrofacial purpura and necrotic ulceration in COVID-19, often associated with minor pressure (including intermittent proning or contact with medical devices) and occurring on nonsacral sites. Moreover, we identify a 29% rate of detectable thromboembolic events, 76% incidence of acute renal injury possibly related to microthrombosis, and 90% incidence of severe COVID-19 pneumonia in this cohort, despite a younger median age than previously reported. The majority of patients were men, likely reflective of increased COVID-19 disease severity as described in men compared with women. While sacral ulcerations are frequently seen in critically ill patients, acrofacial purpura and necrosis are less common. We posit that a microthrombotic syndrome associated with COVID-19 may result in acrofacial cutaneous purpura/necrosis and that pressure-associated tissue hypoxemia is an inciting factor in areas not typically prone to pressure-induced injury. We highlight these cases to suggest increased vigilance for pressure-related cutaneous injury in severely ill COVID-19 patients. Further, observation of necrotic ulcerations may warrant heightened clinical suspicion for a procoagulant state and/or signs of other end-organ damage. These cutaneous findings may have implications regarding appropriate therapeutic anticoagulation targets, although additional prospective studies are needed.
2020-09-25T13:01:40.951Z
2020-09-23T00:00:00.000
{ "year": 2020, "sha1": "c79eb6813eb8056bc081b4354f8c53bd4efd5f04", "oa_license": null, "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7537226", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8b724f4cf7ad1ad77d02899db24e2c271328c99d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119231490
pes2o/s2orc
v3-fos-license
Precession-driven dynamos in a full sphere and the role of large scale cyclonic vortices Precession has been proposed as an alternative power source for planetary dynamos. Previous hydrodynamic simulations suggested that precession can generate very complex flows in planetary liquid cores [Y. Lin, P. Marti, and J. Noir,"Shear-driven parametric instability in a precessing sphere,"Physics of Fluids 27, 046601 (2015)]. In the present study, we numerically investigate the magnetohydrodynamics of a precessing sphere. We demonstrate precession driven dynamos in different flow regimes, from laminar to turbulent flows. In particular, we highlight the magnetic field generation by large scale cyclonic vortices, which has not been explored previously. In this regime, dynamos can be sustained at relatively low Ekman numbers and magnetic Prandtl numbers, which paves the way for planetary applications. I. INTRODUCTION The Earth's magnetic field, which is believed to be generated by fluid motions in the outer core through the so-called geodynamo mechanism, has been in existence for at least 3 billion years according to paleomagnetic records. 1,2 It is thought that the geodynamo is powered by compositional and thermal convection in the outer core. 1 However, this conventional view of the geodynamo is called into question because of a tight energy budget. 3 In particular, recently revised estimates of thermal conductivity that are higher than previously thought have placed the convection geodynamo in a more restricted position. 4 Alternatively, Bullard 5 first proposed that precession, a change of the orientation of the rotation axis, is a potential power source to generate the Earth's magnetic field. From an energetic point of view, precession driven laminar flow cannot extract sufficient energy to maintain the Earth's magnetic field. 6,7 In contrast, turbulent flows driven by precession can dissipate much more energy and thus are possible to sustain the geomagnetic field. 8,9 In addition to the geodynamo, it has been proposed that the ancient lunar dynamo may be sustained by precession. 10,11 However, these studies did not take into account the constraints of realistic magnetohydrodynamics (MHD) driven by the precessional forcing. Gans 12 first experimentally studied MHD in a precessing cylinder filled with liquid sodium, showing the signature of amplified magnetic fields but ultimately no self-sustained dynamo action. In the last decade, several numerical simulations have demonstrated that precession-driven flows can sustain magnetic fields through dynamo action in spherical, 13,14 spheroidal 15,16 and cylindrical 17,18 geometries. In contrast with laboratory experiments and planetary cores, numerical simulations usually adopted a much higher (P m 1) magnetic Prandtl number P m (the ratio of the kinematic viscosity to the magnetic diffusivity) than that is appropriate for liquid metals. Therefore, the simulations are generally dominated by viscous dissipation rather than the Ohmic dissipation. In addition, due to limited computational resources, numerical models use a relatively large Ekman number (E 10 −4 ) which measures the typical ratio between the viscous force and the Coriolis force. The present numerical study aims at shedding light on precession driven dynamos at relatively low Ekman numbers and magnetic Prandtl numbers. We work in a full sphere geometry. 
In this geometry only viscous coupling to the boundary is possible, and topographic couples that are effective in cylindrical and spheroidal geometries are entirely absent. Based on our hydrodynamic simulations, 19 we investigate precession driven dynamos in different flow regimes. It is of particular interest that the nonlinear evolution of precessional instabilities can lead to a few dominant cyclonic vortices (i.e. rotating in the same direction as the background rotation), which are elongated along the rotation axis of the fluid. 19,20 Hereafter we refer to these vortices as large scale cyclonic vortices (LSCV). These large scale vortices are thought to be a favourable flow structure for magnetic field generation. 21 Indeed, our numerical simulations suggest that precessiondriven LSCV can sustain dynamos at relatively low magnetic Prandtl numbers (P m < 1). This allows us to investigate precession driven dynamos in the parameter regime in which the diffusivities have the correct hierarchy for planetary applications. The plan of this paper is as follows. Sec. II introduces the governing equations and numerical models, while Sec. III presents numerical results. The paper closes with a summary and discussion in Sec. IV. II. NUMERICAL MODELS We consider a sphere of radius R filled with a homogeneous, incompressible and electrically conducting fluid of density ρ, kinematic viscosity ν, electrical conductivity σ and magnetic permeability µ 0 (equal to the vacuum magnetic permeability). The sphere rotates at Ω o = Ω ok and precesses at Ω p = Ω pkp , wherek andk p are unit vectors along the spin and precession axes, respectively ( Figure 1). Using the radius R as the length scale, Ω −1 o as the time scale and Ω o R √ ρµ 0 as the unit of magnetic field B, the dimensionless MHD equations governing the fluid velocity u and the magnetic field B in the mantle frame (attached to the container) can be written as, 13,22 ∂u ∂t where the precession vectork p is given bŷ Here (î,,k) are unit vectors in Cartesian coordinates (x, y, z) whose z-axis is along the rotation vectork and α p is the angle between the rotation axis and the precession axis. We set α p = 60 • in all simulations unless otherwise specified. The system is controlled by three dimensionless parameters: the Ekman number E which measures the ratio between the viscous force and the Coriolis force, the Poincaré number P o which measures the dimensionless precession rate and the magnetic Prandtl number P m which is the ratio between the kinematic viscosity and the magnetic diffusivity. These parameters are defined as follows: where η = (σµ 0 ) −1 is the magnetic diffusivity of the fluid. Negative (positive) values of P o correspond to retrograde (prograde) precession. We consider only retrograde precession (negative P o ) in the present study. Equations (1)(2) are numerically solved by a fully spectral code. 23,24 The velocity field u and magnetic field B are decomposed into toroidal and poloidal fields in a spherical coordinate system (r, θ, φ): which automatically satisfy ∇ · u = 0 and ∇ · B = 0. The scalar fields are then expanded as where Y m l (θ, φ) are the spherical harmonics of degree l and order m. Note that we may use M < L as a restricted truncation in azimuth, and thus l * =min(l, M ). The radial dependences of the scalar fields are expanded in the so-called Worland polynomials W l n (r), i.e. W l n (r) = r l P −1/2,l−1/2 n (2r 2 − 1), which are combinations of a prefactor r l and the onesided Jacobi polynomials. 
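For completeness, the radial basis just quoted, W_n^l(r) = r^l P_n^(−1/2, l−1/2)(2r² − 1), is straightforward to evaluate with standard libraries. The short sketch below does so and illustrates the regular r^l behaviour at the origin; it is only an illustration of the basis functions, not of the full spectral solver.

```python
import numpy as np
from scipy.special import eval_jacobi

def worland(n, l, r):
    """One-sided Jacobi (Worland) polynomial W_n^l(r) on 0 <= r <= 1."""
    return r**l * eval_jacobi(n, -0.5, l - 0.5, 2.0 * r**2 - 1.0)

r = np.linspace(0.0, 1.0, 5)
for l in (1, 2):
    for n in (0, 1, 2):
        print(l, n, worland(n, l, r))

# Regularity at the origin: W_n^l(r) ~ r^l as r -> 0, so every basis function
# with l >= 1 vanishes at r = 0 and the expansion stays regular there.
print(worland(2, 3, np.array([0.0, 1e-3])))
```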
The Worland polynomials exactly satisfy the parity and regularity at the origin of the sphere. 25 We use a total of N polynomials for each l. Some of our more intensive calculations require truncations in (N, L, M ) as high as (127,255,127). The no-slip boundary condition for the velocity u is adopted and given by 23 For the magnetic field B, we use an insulating boundary condition which leads to a vanishing toroidal component and the poloidal component matching the potential magnetic field outside the sphere 23 on the boundary r = 1. The numerical code has been benchmarked in several contexts including that of precession driven flows and MHD calculations. 24,26,27 III. RESULTS The simulations performed for this work are detailed in Table I and the magnetic energy E m in the fluid volume The kinetic energy can be decomposed into its symmetric part and anti-symmetric part as we have done in hydrodynamic simulations. 19 where The basic flow driven by precession is symmetric around the origin and any anti-symmetric flows must be due to instabilities. Therefore, the anti-symmetric kinetic energy is an indicator of instabilities. All results are presented in the mantle frame in order to have the same view as that from which we observe the Earth's magnetic field. A. Laminar dynamos It has been shown that precession driven laminar flows can sustain dynamos due to the Ekman pumping/suction at large Ekman numbers. 13 In this section, we gradually reduce the At a given Ekman number, we choose a suitable Poincaré number such that the flow is in the stable regime while containing as much kinetic energy as possible based on the hydrodynamic simulations (see Fig. 11 in Ref 19). For each combination of E and P o , we vary the magnetic Prandtl number P m in order to find a critical value. The velocity field starts from a steady state of the hydrodynamic simulation while the magnetic field starts from a saturated dynamo state at higher E or P m . If the magnetic energy can be sustained for around one magnetic diffusion time τ : then we say it is a successful dynamo. Note that τ is around 10 times the e−folding time of the slowest decaying dipole field. 28 Failed dynamos show exponential decay of the magnetic energy. Figure where E kp is the poloidal kinetic energy and V is the volume of the sphere. Figure 2 shows the regime diagram in the plane of (R m , E). The critical magnetic Reynolds number is around 700 for E > 10 −3 , which is very close to the critical value 770 obtained by Tilgner 13 at E = 1.4 × 10 −3 (there is a small solid inner core with radius of 0.1R in his study). At smaller Ekman numbers (E 7 × 10 −4 ), it seems that the critical magnetic Reynolds number increased as the Ekman number is decreased, at least for the R m defined above. One should bear in mind that we also adjust the Poincaré number as the Ekman number is decreased in order to keep the flow laminar in Figure 2. The flow would become unstable if we decrease the Ekman number but fix the Poincaré number. In the unstable regime, the critical magnetic Prandtl number can be smaller than in the laminar regime. 13 We will focus on dynamos driven by unstable flows in Section III B. Although the large scale magnetic fields are generated due to the LSCV in the bulk, the fields below the boundary (r = 1− √ E) are characterized by small scales in Figure 9(a). The small scale fields in the boundary layer are much weaker than the large scale field associated with the LSCV in the bulk. 
We believe that the small scale fields are related with viscous boundary layer instabilities. 34,35 The magnetic fields are extended upward to outside of the fluid domain. Since the magnetic potential decays as (1/r) l+1 outside the fluid domain, the field outside is mostly dipolar or quadrupolar. For example, Figure 9(b) shows contours of B r on the surface r = 2 (roughly corresponding to the Earth's surface if we assume that the boundary r = 1 represents the core-mantle boundary). We observe a weak dipole field whose moment lies in the equatorial plane in this snapshot. However, the field structure varies in time (see Movie 2 in supplemental material 47 ). The exterior field can be either The maximum of the magnetic energy is at l = 4 but the spectrum is almost flat for l < 10. We note that the toroidal component of the magnetic energy is dominant compared to the poloidal component. In Figure 11, IV. DISCUSSION Based on previous hydrodynamic simulations, we have shown precession driven dynamos in different flow regimes. In the laminar regime, dynamo action operates mainly in a thin layer beneath the boundary since the bulk fluid is nearly a solid body rotation. Our simulations at lower Ekman numbers than previous studies clearly show that precession driven laminar dynamos are more difficult to obtain at low Ekman numbers, as has been pointed out previously. 13 The main result of the present study is that we have demonstrated magnetic field generation by large scale vortices in a precessing sphere, which has not been explored (Table I) However, there are still several uncertainties concerning this conclusion due to the possible effects of the solid inner core 41 and interactions between precession and convection. 42 For the case of the Moon, we have estimated that the growth rate of precessional instability due to the conical shear layer is two orders of magnitude larger than the viscous decay rate, 19 suggesting very complex flows in the lunar core driven by the 18.6 yr precession of the moon. Therefore, a lunar dynamo driven by precession is possible during the evolution history of the Moon, 10 particularly if the large scale vortices are formed due to the precessional instability. However, the Moon does not have an observable internal magnetic field generated by dynamo action at present time. Note that not all of the power deposited by the precession is available to sustain a lunar dynamo. 10 There is also a threshold power requirement to maintain the lunar core in a well-mixed adiabatic state, which is not matched at present day. 10,43 Although we have made great efforts to push towards the parameter regime of planets, our simulations are still far away from the realistic parameter regime, and there is little prospect of approaching the realistic parameters which require considerable computational resources. Therefore, it would be helpful in future to extract some generic scaling laws from numerical models as in the studies of convection driven dynamos. 44,45 On the other hand, laboratory experiments with liquid metals can reach more extreme parameters, which would significantly compensate for the limitations of numerical models. A liquid sodium experiment of a precessing cylinder with the height and diameter of 2 meters is under construction in Dresden, Germany, 46 which is expected to provide new insights on precession driven dynamos.
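As a practical footnote to the nondimensionalization of Sec. II, the control parameters can be collected in a small helper function. The definitions below are the standard ones implied by the chosen scales (length R, time Ω_o⁻¹): E = ν/(Ω_o R²), Po = Ω_p/Ω_o and Pm = ν/η, with the magnetic diffusion time R²/η equal to Pm/E in units of Ω_o⁻¹. The numerical values in the example are purely illustrative and do not correspond to any particular simulation, experiment or planetary core.

```python
def control_parameters(nu, eta, omega_o, omega_p, radius):
    """
    Nondimensional numbers for a precessing, conducting full sphere:
      E   = nu / (Omega_o R^2)   Ekman number
      Po  = Omega_p / Omega_o    Poincare number (sign sets pro-/retrograde)
      Pm  = nu / eta             magnetic Prandtl number
    Also returns the magnetic diffusion time R^2/eta in units of 1/Omega_o,
    which equals Pm/E under this scaling.
    """
    E = nu / (omega_o * radius**2)
    Po = omega_p / omega_o
    Pm = nu / eta
    return {"E": E, "Po": Po, "Pm": Pm, "tau_eta": Pm / E}

# Purely illustrative, hypothetical laboratory-scale values (SI units):
print(control_parameters(nu=7.0e-7, eta=0.08, omega_o=60.0, omega_p=-0.6, radius=1.0))
```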
2016-06-10T08:44:22.000Z
2016-06-10T00:00:00.000
{ "year": 2016, "sha1": "9d30d58b574c057c609ac1ab654723656236ea9d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1606.03230", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9d30d58b574c057c609ac1ab654723656236ea9d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
41351945
pes2o/s2orc
v3-fos-license
Testing drug combinations to slow aging Aging is a complex multifactorial process, meaning that multiple pathways need to be targeted to effectively prevent or slow aging [1]. A number of molecular pathways are well known for influencing aging, but only a few have been successfully targeted with individual drugs, and these drugs do not individually target all aging pathways. However, combinations of these drugs might have the potential of effectively broadening the scope of aging targets. There are a number of drug combinations that could be combined based on different but overlapping pharmacological activities. Since the number one criterion for selecting drugs should be based on known anti-aging effects, for example, in preclinical mouse studies, the number of drugs available to consider is markedly reduced. Three drugs with well-validated anti-aging effects in laboratory animals, rapamycin [2,3], acarbose [4], and SS31 [5,6], are well suited to therapeutic multiplexing as a way to enhance healthy aging and stop the development of lesions associated with aging and physiological dysfunction based on interactive cellular mechanisms of each drug. The inter-drug relationship of these three drugs can easily be perceived by explicitly defining the mechanism of action of each drug and how it overlaps and extends the mechanism of action of each of the other drugs in the complex. A plausible explanation then of how they could act as a multiplex in targeting molecular pathways in aged mice is as follows: (1) Acarbose given orally blocks intestinal alpha glucosidase so that carbohydrates are not broken down and absorbed. This results in lower blood glucose levels and prevents postprandial insulin spikes. The lower blood glucose and decreased need for insulin activate adenosine monophosphate-activated protein kinase, which tends to block mtorc1, the drug target for rapamycin.
The lowered blood glucose provides less available substrate for mitochondrial metabolism thereby sensitizing mitochondria to increased electron transport chain (ETC) efficiency induced by SS31bound cardiolipin. The decreased need for insulin helps alleviate insulin resistance induced by rapamycin-suppressed mtorc2. (2) Rapamycin blocks mtorc1 resulting in suppression of protein synthesis, suppression of mitochondrial metabolism, and activation of autophagy. The suppressed mitochondrial metabolic activity enhances the binding of SS31 peptide to cardiolipin thereby increasing ETC efficiency for vital ATP production, and decreases the generation of mitochondrial reactive oxygen species thought to play a role in progressive aging and age-related diseases. The suppression of protein synthesis helps conserve valuable cellular resources. Activation of autophagy helps eliminate dysfunctional and senescent cells thereby preventing additional cellular burden. (3) SS31 peptide binds to the phospholipase cardiolipin exclusively at the inner mitochondrial membrane [7]. This binding increases the hydrophobic interaction between cardiolipin and cytochrome c, thereby enhancing the function of cytochrome c as an electron carrier from complex III to complex IV on the ETC. This also results in a decrease in the peroxidase activities of cytochrome c. Thus, cardiolipin serves as a base for SS31 to optimize oxidative phosphorylation. Activation of autophagy helps eliminate dysfunctional and senescent cells thereby preventing additional cellular burden. The decreased substrate conditions triggered by acarbose and the downregulation of mitochondrial proteasome by rapamycin provide a highly sensitive environment in the mitochondria for the efficiencyenhancing effects of SS31 peptide. The concept of drug multiplexing to slow aging looks good on paper, but drug combinations have yet to be tested in any meaningful way. Historically, testing single drugs in mouse lifespan studies has provided useful information, but it is costly and time consuming. More importantly, lifespan studies are difficult to recapitulate in humans, making translation of the preclinical information challenging. And especially relevant is the fact that lifespan studies in mice are not well-suited to testing drug combinations that could more effectively target multiple factors involved in aging. Thus, new paradigms for testing therapeutics aimed at slowing aging are needed. The Geropathology Grading Platform (GGP) was developed by the Geropathology Research Network to provide a grading system for lesions associated with aging [8]. One of the advantages of the GGP as a drug testing paradigm is that middle-age mice can be treated for as little as two months and see differences in lesion scores. For example, the platform was used to compare lesion grades in 26-month-old C57BL/6 mice treated with rapamycin for two months, starting at 24 months. Rapamycin-treated mice had significantly lower lesion scores compared to placebo treated mice [9]. The GGP measures biological aging aligned with mouse lifespan studies and physiological findings, and helps provide predictive power for antiaging effects that drug combinations might have in clinical trials. Thus, the GGP would be a critical tool in preclinical studies to determine if drug combinations slow aging (Figure 1). However, just like measuring lifespan in mice, the GGP is not something directly used in people because autopsies are not commonly done in clinical trials. 
Therefore, aging biomarkers are needed to determine if fundamental mechanisms of aging are being targeted. Surrogate aging biomarkers offer the potential to predict a subject's outcome, such as improved function, extended survival, or arrest of age-related disease. Preliminary observations suggest that proteins in the serum or peripheral blood cells correlate with the GGP [10]. The focus is on blood and serum because these are readily available from patients and can be serially sampled with minimal risk. The translational impact emphasizes the importance of molecular markers as clinical end points, suggesting the feasibility of identifying serum peptides and other molecular end points that reflect biological rather than chronological age. While the future for expanded use of drug combinations in treating various diseases and conditions, including aging, is highly promising, the path toward eventual regulatory approval can be challenging and should be considered in any preclinical studies undertaken. The potential beneficial functional synergy gained from the logical and judicious use of rational drug combinations, such as rapamycin, acarbose and SS31, is obviously complicated by the fact that different drugs with different metabolic, pharmacokinetic, and toxicity profiles are being superimposed on top of one another. This may be of no consequence, but on the other hand could result in combinatorial enhancement of negative outcomes due to such things as imposition of conflicting metabolic pathways, altered absorption profiles, or additive toxicities. To address these concerns, the Food and Drug Administration provides a guidance document, 'Co-development of two or more new investigational drugs for use in combination' [11]. This document provides an understanding of outcomes that investigators should be aware of during the preclinical development process. Focusing not just on the benefits of combination products but also on the potential liabilities early on can speed the development process. It is thus reassuring that the GGP provides an informative system that enhances the combination drug development process by empowering concurrent assessment of both positive and negative effects.
Figure 1. Slowing aging might best be achieved by drug combinations that target multiple interactive molecular pathways. However, very little testing of drug combinations has been done because of a lack of preclinical screening paradigms. This figure demonstrates how the interaction of a prototype drug combination, rapamycin, acarbose, and SS31, can be tested for anti-aging effects in a preclinical setting using the aged mouse and a geropathology grading system. Panel annotations: rapamycin blocks mTOR, slowing cellular activity; SS31 binds cardiolipin, enhancing mitochondrial bioenergetics; acarbose blocks intestinal alpha-glucosidase, suppressing glucose absorption and insulin spikes. Rapamycin and acarbose are given orally in the mouse chow, while SS31 is given by parenteral injection, so the three drugs are delivered concurrently on a daily basis to middle-aged mice for 2 to 4 months. The expectation is that combination-treated mice will have decreased incidence and severity of age-related lesions and lack any unexpected lesions associated with toxicity, compared to mice treated with placebo and individual drugs. Additional molecular and metabolic endpoints will be used to correlate with lesion scores for assessing anti-aging effects.
In summary, the concept of drug multiplexing as a powerful platform to slow aging is promising but has not yet entered the mainstream of aging research. The combination of rapamycin, acarbose, and SS31 peptide, three drugs with individually documented anti-aging effects, is a logical approach in which the mechanisms of action at their molecular targets complement one another, producing a delay of aging and age-related disease not seen with mono-therapeutic approaches. Support for the preclinical investigation of this drug combination, as well as other drug combinations, is urgently needed to determine dosages, frequency of administration, and criteria for when to start administering the drugs, i.e., a focus on treatment at older ages or prevention at younger ages. These parameters are essential for correlating translational molecular end points from mouse to human for drug testing in clinical trials. Disclosure statement No potential conflict of interest was reported by the authors.
TYPOLOGY OF DYSGRAPHIA ERRORS IN PERFORMING THE WRITING TASK 'PRODUCING YOUR OWN TEXT' BY BILINGUAL PRIMARY-SCHOOL STUDENTS
The article presents the results of a field logopedic study of third- and fourth-grade bilingual students and the research material created by them while carrying out a writing task of producing a text of their own. The aim of the experiment is to analyze the quantitative and qualitative parameters of the dysgraphia errors in bilingual students (ethnic Roma and Turks) at the text level and, on this basis, to identify their typology and dependence on some linguistic and social factors. For the purposes of the research, a toolkit has been developed that includes groups of methods for the study of the following: the psychosomatic and academic status; the elementary graphic habits; the phonemic gnosis; writing in different situations; and the identification and typologization of dysgraphia errors through a criterion system. The results clearly show that the prevailing errors made by the bilinguals are analytic-synthetic. Regarding the other types of errors, significant differences have not been registered. The analysis of the results showed significant differences in terms of gender, ethnicity/language status and type of settlement. Introduction The language development of the child is an integral part of its social development and one of the components of its socialization as a person. Socialization, in turn, is a process of mastering a complex of social roles. Each role corresponds to certain norms of social and linguistic behavior, depending on the situation and conditions of communication. The process begins in kindergarten and continues in primary school, where children learn not only correct speech but also the peculiarities of written communication. The problems of children from ethno-cultural and ethno-linguistic minorities are already evident in kindergarten; they deepen and become more complicated in primary school. These problems are related not so much to the influence of ethno-cultural and socio-cultural factors as to the linguistic peculiarities of the environment in which the children live. The linguistic situation in which the children of bilinguals (Turks and Roma) are raised and brought up differs depending on the degree of mastery of the Bulgarian language as a second language. As a science that takes as its subject matter language and speech disorders, as well as their rehabilitation and prevention, speech therapy has a special status in the study of bilingualism and in solving the problems related to the education, upbringing and socialization of bilinguals. Literature Review The concepts discussed in the field of theoretical interpretations of bilingualism point to the finding that there are no significant differences in the definition of the phenomenon of bilingualism. Many authors' research reflects a wide range of issues related to the education, literacy and social adaptation of bilingual children (Simeonova, 2007; Stamov, 1989; Kyuchukov, 2002; Weekes, 2005; Sebastian et al. 2011; Houghton & Zorzi, 2003; Koleva, 1994). Researchers agree on the notion that bilingualism is a linguistic phenomenon with a socio-psychological basis and is not associated with neuropsychological or general somatic pathogenesis in these children. Bilingualism is a psychological mechanism (knowledge, skills and habits) that allows a person to move freely from one language to another, depending on the linguistic situation.
There are also differences in the positions of individual authors, but they are manifested in the definition of the types of bilingualism and are by no means contradictory to each other. Rather, they are a consequence of the aspect of definition: the criteria by which the types of bilingualism are grouped and differentiated. The coexistence of two languages in an individual is a complex phenomenon. The bilinguals' use of language, as pointed out by Wei (2002), involves such factors as degree (the proficiency level of the language that an individual has), function (what an individual uses his languages for, and the roles his languages play in his total pattern of behavior), alternation (the extent to which one alternates between one's languages, how one changes from one language to another, and under what conditions) and interference (how well the bilingual keeps his languages apart, the extent to which he fuses them, and how one of his languages influences the use of another). Particular difficulties in mastering a second language and achieving complete bilingualism are created by the phenomenon of interference, which is by consensus defined as a change in the structure or the elements of one linguistic system under the influence of another. According to the researchers, interference is widespread at all levels of the language sign system in second language acquisition (Kyuchukov, 2002; Rowse & Wilshire, 2007). The penetration of phonetic, grammatical and lexical elements of the mother tongue into the second language (the one being studied) occurs initially at the level of oral expression, since oral speech is formed in the process of natural development. Written communication is the result of specialized training. Written speech is a form of verbal speech realized through graphically represented language means. Here, sounds are replaced by graphic linguistic characters. The foundation of writing is language and speech as a universal medium of communication. The analysis of the literary sources on the problems of mastery of writing and interference in bilingual children shows that, as a whole, interference has a negative impact on the acquisition of the second language, at the level of both oral and written production (Chen et al. 2010; Abu-Rabia & Siegel, 2002). At the same time, all studies provide support for Cummins' "threshold hypothesis", which holds that "a threshold level of linguistic competence must be attained both in order to avoid cognitive disadvantages and allow the potentially beneficial aspects of bilingualism to influence his cognitive and academic functioning" (Cummins, 1976, p. 3). Figueredo (2006) carried out a review of 27 studies examining the development of spelling ability in bilingual children. The review supported the notion that positive or negative transfer will take place depending on individual language characteristics. Positive transfer will occur when commonalities exist among orthographies (such as common letters) or strategies used, e.g., phonological or visual skills. On the other hand, negative transfer will occur when, due to a lack of competent L2 awareness, rules specific to the first language (L1) are generalized to the second one (L2). In the studies examined in the review, eight found positive transfer effects, three found negative transfer effects, and one study did not find any cross-linguistic effects.
In the course of the study of the literary sources, the following conceptual conclusions underlying the empirical statements and the solution of the research problem were formed. Firstly, in the process of active interaction with the child's social and physical environment, cognitive processes develop, and sustainable communication and social-behavioral models are mastered. Secondly, the psychological and pedagogical analysis of the development of bilingual children shows serious difficulties in their inclusion in social microstructures and in the acquisition of the necessary educational content. Thirdly, language interference in bilingual children influences their overall verbal communication, both oral and written. As written speech has more stringent requirements for its implementation, it takes more time and is more difficult to master. Therefore, the risk of experiencing difficulties in the process of its acquisition is higher. The bilingual child requires longer and more focused efforts to learn a particular volume of vocabulary, as well as to learn the stylistic and grammar rules of the respective language and adhere to them in written practice. As a result, a conflict is generated between the requirements of the comprehensive school, on the one hand, and the objective capabilities of the child, on the other. Difficulties arise in literacy mastering and in the acquisition of written communication competences. The main aim of this study is to analyze the quantitative and qualitative parameters of the dysgraphia errors in bilingual students (ethnic Roma and Turks) at the text level and, on this basis, to identify their typology and dependence on some linguistic and social factors. Data and Methodology The null hypotheses presented correspond to the main purpose of the research and the conceptual conclusions from the analysis of the scientific literature. They are: ▪ H01: There is no significant difference between the number of analytical-synthetic spelling errors (ASDE), a consequence of the influence of language interference, and the other two types of dysgraphia errors (VSCPL, WL) made in the written text by students in the formed groups (monolinguals and bilinguals). ▪ H02: There is no significant difference between the number of types of dysgraphia errors observed in students speaking the Bulgarian verbal language (monolinguals) and students (Roma and Turks) using other linguistic codes in communication alongside the Bulgarian language. ▪ H03: There is no significant difference between the number of types of dysgraphia errors recorded in the boys and in the girls participating in the experiment. ▪ H04: There is no significant difference between the number of types of dysgraphia errors registered in the students living in different demographic areas: big city, small town, village. The experiment has the character of a field logopedic study. The subject area defines the research field. The object of the study is the written communication of third- and fourth-grade bilingual students (of Turkish and Roma ethnic origin). The subject of the study is the typology of the dysgraphia errors made by the students in performing the written task of "producing their own text". The linguistic task is accomplished through a written retelling of a text whose content and volume are tailored to the students' age and curriculum. In accordance with the research aim, purposive sampling was used. The sample was formed from students from the third and fourth grades of primary schools in different localities in Bulgaria.
548 students from 27 classes of 10 general education schools in six populated areas were covered. The latter are divided according to their types: type I (big city), type II (small town), and type III (village). Two large research groups are identified: bilinguals (Roma and Turks) and monolinguals (Bulgarians), who represent the control group. The groups are differentiated on the basis of certain criteria: ethnicity, gender, and type of settlement. The total number of Roma bilingual students is 161 (73 girls and 88 boys), the ethnic Turks are 146 (70 girls and 76 boys) and the Bulgarians are 241 (138 girls and 103 boys). A typical feature of the settlements covered is that they are home to compact masses of bilinguals (of Roma and Turkish ethnic origin). Outside of the Bulgarian general education school, they only communicate in the language (dialect) of their ethnic community. The control group consists of students of Bulgarian ethnic origin (monolinguals), from the same classes in the described settlements. They speak the Bulgarian language. The experimental study consists of verbal stimuli in Bulgarian. The linguistic test covers all levels of the hierarchical structure of the language sign system. The verbal material (texts) is in accordance with the age and cognitive characteristics of the students. The identification and typologization of the dysgraphia errors was performed according to the following classification criteria: spatial-coordination spelling errors (VSCPL); misspelling errors (WL); analytical-synthetic spelling errors (ASDE). The evaluation of the results is carried out according to the quantitative and qualitative criteria of the diagnostic procedure. The statistical procedure is performed by formulating the necessary statistical hypotheses. The obtained research data were subjected to processing and statistical analysis using the built-in ANOVA: Single Factor function in Microsoft Office 2010. Results The analysis of the spelling errors was performed on the basis of the completed language task of retelling a text in written form, with the stimulus presented twice. The analysis was conducted in two directions: the distribution of the errors by type, and the distribution of students who made dysgraphia errors according to ethnicity, gender, and type of settlement. When examining the results of this study, four separate hypotheses were tested. The first null hypothesis suggested that the number of ASDE (a consequence of the influence of language interference) manifested by the monolinguals and bilinguals would not be significantly different from the other two types of dysgraphia errors (VSCPL, WL) made in the written text. The next three null hypotheses proposed that the number and quality of dysgraphia errors demonstrated by the observed students would not be significantly different depending on their ethnicity, gender, and the type of settlement they inhabited. The differences are statistically significant when F > Fcrit, in which case the alternative hypothesis is accepted. The first null hypothesis aimed to determine the difference between the total number of analytical-synthetic spelling errors (ASDE) and the other two types of dysgraphia errors (VSCPL, WL) made in the written text by students in the formed groups. Review of the data indicated the predominance of the analytical-synthetic spelling errors over the errors related to space-coordinate letter placement and the misspelling errors (Table 1).
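For readers who want to reproduce this decision rule outside a spreadsheet, the sketch below runs a single-factor ANOVA of the kind described above in Python with SciPy. It is not the authors' workflow (they used the built-in function in Microsoft Office 2010), and the per-group error counts are invented placeholders used only to illustrate the F versus Fcrit comparison.

# Illustrative sketch, not the authors' analysis: single-factor ANOVA with the
# F > Fcrit decision rule described above. The per-student error counts for
# the three error types are hypothetical placeholders.
from scipy import stats

asde  = [12, 9, 15, 11, 14, 10]   # analytical-synthetic errors (placeholder data)
vscpl = [4, 6, 3, 5, 4, 6]        # spatial-coordination errors (placeholder data)
wl    = [5, 4, 6, 5, 7, 4]        # misspelling errors (placeholder data)

# Observed F statistic and p-value for the three groups.
f_stat, p_value = stats.f_oneway(asde, vscpl, wl)

# Critical F at alpha = 0.05 with (k - 1, N - k) degrees of freedom.
k = 3
n = len(asde) + len(vscpl) + len(wl)
f_crit = stats.f.ppf(0.95, dfn=k - 1, dfd=n - k)

print(f"F = {f_stat:.3f}, F_crit = {f_crit:.3f}, p = {p_value:.4f}")
if f_stat > f_crit:
    print("Reject H0: the error types differ significantly.")
else:
    print("Fail to reject H0.")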
The values of the test statistic confirm the statistical significance of the observed difference (F = 3.982298, F crit = 1.42667). The obtained results give a reason for rejecting the null hypothesis and accepting the alternative hypothesis. The purpose of the second null hypothesis was to ascertain the difference between the number of types of dysgraphia errors observed in students speaking the Bulgarian verbal language (monolinguals) and students (Roma and Turks) using other linguistic codes in communication alongside the Bulgarian language. Analysis of the presented results showed that the bilingual participants performed significantly worse on the linguistic task than the monolingual control students (F = 2.088929, F crit = 1.234529). This can be explained by the fact that written language is considerably more abstract than oral language. The written versions of the narrative contain syntactic complexity. The bilinguals are at risk of writing impairment due to their poor comprehension and metalinguistic skills and specific linguistic processing. They often struggle with the planning, organization, and revision needed to write, and frequently devote a disproportionate amount of cognitive energy to mechanics such as spelling, handwriting, and punctuation. These results give support to Bishop and Clarkson's (2003) view that writing is non-spontaneous and demands greater explicitness, greater elaboration, and greater formality. The written retelling requires skills for analysis and synthesis of linguistic and graphic constructions, transformation of linguistic and speech components into graphic patterns, and increased attention. The limitation of these skills underlies the dominant amount of analytical-synthetic errors (F = 2.657197, F crit = 1.549975). The registered results (Table 2) reflected a statistically significant difference, as a result of which the second null hypothesis was rejected and H1 was accepted. The third null hypothesis aimed to determine the difference between the total number of types of dysgraphia errors recorded in the boys and in the girls participating in the experiment. After the applied one-factor analysis of variance, it was found that between the boys and girls of the Bulgarian general education school there are statistically significant differences in the number of dysgraphia errors, which is proven by the value of the test statistic F (F = 5.323297, F crit = 2.424364). As a result, this null hypothesis was rejected and the alternative H1 accepted. This difference is most pronounced in ASDE errors (Table 3), given the value of F (F = 26.3774166, F crit = 3.500464), which is a major indicator of the impact of linguistic interference on males. The fourth null hypothesis aimed to establish the difference between the total number of types of dysgraphia errors registered in the students living in different demographic areas: big city, small town, village. It was proven that the type of settlement is also a statistically significant factor for the relative share of dysgraphia errors (F = 2.570829, F crit = 2.088929). The smallest number of errors was registered in the students from the big cities (623). The number of errors made by students in the small towns (1373) and students living in rural municipalities (1459) is more than twice as high. The dominant position of significance (Table 4) again belongs to the analytical-synthetic errors (F = 5.323297367, F crit = 2.657197), which are abundant in students living in the smaller settlements.
Consequently, this null hypothesis was also rejected and the alternative H1 accepted. Of course, differences that are not statistically significant (VSCPL and WL) should not be underestimated; they can be a sign of certain trends and are also indicative. Discussion The present study examined the typology of dysgraphia errors in the production of written text by bilingual primary-school students. Written retelling is a complex task to accomplish in a linguistic and generally cognitive aspect. It is extremely indicative of the level of mastery of a language and its graphical system. The students had to retell a text read by the researcher based on the fairy tale genre. The writing activity was given as seatwork in the classroom. A large number of errors were made in the implementation of the writing sample by the students in both groups. Fourteen types of errors, distributed in three groups, were found in the texts written by Bulgarian (monolingual), Turkish and Roma (bilingual) students. The analytical-synthetic errors prevailed over the spelling errors and the errors related to violation of the coordinate spacing of letters. At the same time, although the monolinguals tend to fare better on literacy measures at school age when compared to students with bilingualism, they still struggle with the correct use of written codes and spelling, most likely due to some phonological deficits. The latter, however, are strongly expressed in bilinguals. The information gained from the written materials of the students, as well as the related literature, revealed that the major source of most errors is interlingual interference. This is because the students always thought in their first language when they produced written Bulgarian language elements. The difference in language patterns is a major difficulty for bilinguals. Interlingual interference is also the main cause of errors found in other research (Kaweera, 2013). Intralingual interference is another crucial source of the bilinguals' errors. It is expressed in the students' confusion in using the target language. Their knowledge of the target language is incomplete, so they combined the knowledge of Turkish and Roma with that of Bulgarian. Very limited knowledge of Bulgarian grammar and vocabulary leads the students to commit errors. The data from the study confirm the need to improve language skills at the lexical and grammatical level. Apart from the different phonetic systems and the linguistic interference that has proven to be a fundamental factor determining the dominant number of dysgraphia errors in the students from the Turkish and Roma ethnic communities, their presentation is largely related to the social context. Some authors even remark that social factors are a more important cause of the low level of bilingualism than linguistic factors (Jones, 2004). This study identified some factors that affected the performance of text-based writing in both groups of students, monolinguals and bilinguals. One of the most important factors was ethnicity. It is directly related to bilingualism and manifests itself as ethno-cultural features of the studied groups of individuals. While the Bulgarian students did much better in performing the linguistic task, the same cannot be said of their Turkish and Roma peers. The sociocultural status of the Turkish and Roma language communities affects the linguistic environment and the input bilinguals receive.
The main reasons for the strong influence of this factor are the presence of a high degree of segregation, the lack of or insufficient linguistic contacts in the integrating phonetic system, and poor verbal communication outside their own ethnic group. As Moll (1992) states, for such students the only input is teachers or classmates. The students are exposed to the second language only in the classroom, where they spend less time in contact with the language and cover a smaller range of discourse types. The limited exposure to the target language and the lack of opportunities to practice speaking do not allow their L2 communicative abilities to develop fully. The social distance between interlocutors can have a considerable influence on second language proficiency. Obviously, the specificity of the value system of the respective ethnic group, which does not embrace the goals of higher education, is demotivating and neglectful. The other, no less important, factor was gender. The study revealed that the boys tend to have a much worse score in comprehension and written coding than the girls, and this difficulty may be exacerbated somewhat with development. The results of this study are consistent with the results of other studies which found that gender is also associated with writing difficulties (Kingdon, Serbin and Stack, 2017). Yoshimasu and colleagues (Yoshimasu et al. 2010) also report a higher prevalence of spelling difficulties in boys as compared to girls, though Moll & Landerl (2009) did not find a gender difference in writing difficulties. It is worth noting that different cultures sometimes define learning difficulties differently, and the ways of identifying writing difficulties can affect gender prevalence. Yet in most studies there tend to be more boys than girls. The results obtained regarding the lower success of the boys in the written task confirm the opinion in the research literature that girls develop linguistically better and at a faster pace than boys. The main reason is the biological differences between the genders, the slower maturation of the central nervous system in boys compared to girls, which is also manifested in the period of literacy. Some characteristic features of the genders probably have an additional influence: girls are generally more diligent in performing school tasks. The type of settlement also has a statistically significant influence on the performance of the linguistic task. The results confirmed twice as many errors among the participants living in the smaller regions of the country. In small settlements, and especially in villages, students (mostly Roma) are often absent from school, which generates a negative effect on the process of mastering graphic communication. The delays and unjustified absences of students from classes can be considered as an indicator of a specific attitude of their families to regular school attendance, which directly affects educational outcomes and leads to erosion of long-term motivation. In support of the above findings are data from specific studies (Jones, 2004). In addition, the present study supports the idea that the settlement is an expression of the inevitable influence of the characteristic patterns of thought and behavior of a particular local community. Conclusion The results of the empirical study show that the linguistic situation in which the children of bilinguals are raised and educated is crucial for the degree of mastery of the Bulgarian language as a second (non-native) language.
The main reason is the influence of language interference in children who are mastering more than one language system in their communication. Its effect on second language learning is reflected in the mental attitudes and motivation of the bilingual child in the learning process, and in its socialization and integration in general. The high requirements of the general education school call for more attention from the specialists (speech therapists, pedagogues) and 'a unique pedagogical style' (Shivacheva-Pineda, 2019; Teneva, 2018) in teaching the children to read and write, as well as to master written communication. The results of the children who were covered in the survey revealed significant differences depending on gender, ethnicity and the type of settlement. The dominant number of errors was made by the boys of all groups, which is a significant predictor for the identification of dysgraphia. Significant differences in the dysgraphia errors were also found according to the type of settlement. The presence of a high degree of segregation, the lack of or insufficient language contacts in an integrative environment, and the highly restricted verbal communication outside of the socio-ethnic/linguistic group are among the leading determinants of the low levels of bilingualism.
Anthranilamide-based Short Peptides Self-Assembled Hydrogels as Antibacterial Agents
In this study, we describe the synthesis and molecular properties of anthranilamide-based short peptides which were synthesised via ring opening of isatoic anhydride in excellent yields. These short peptides were incorporated as low molecular weight gelators (LMWG), bola amphiphiles, and C3-symmetric molecules to form hydrogels at low concentrations (0.07–0.30% (w/v)). The critical gel concentration (CGC), viscoelastic properties, secondary structure, and fibre morphology of these short peptides were influenced by the aromaticity of the capping group or by the presence of an electronegative substituent (namely fluoro) or a hydrophobic substituent (such as methyl) in the short peptides. In addition, the hydrogels showed antibacterial activity against S. aureus 38 and moderate toxicity against HEK cells in vitro. Results and Discussion Synthesis of anthranilamide-based short peptides. The library of anthranilamide-based short peptides was designed based on three main modifications: modifying the acyl group R1 (modification A), introducing substituents at the 5-position of the capping group (modification B), and incorporating anthranilamide-based short peptides into the bola amphiphile (BA) scaffold (modification C) (Fig. 2). The capping group of a short peptide-based hydrogelator can govern the stiffness and stability of the resulting hydrogels 18,26. Therefore, hydrogels 1-4, bearing capping groups of different aromaticity, are envisaged to exhibit different properties. Fluorine, a small electronegative atom, and methyl were introduced (modification B) to investigate the effect of an electron-withdrawing or electron-donating group on the subsequent scaffolds, yielding compounds 5 and 6, respectively. Aside from conventional, linear low molecular weight gelators (LMWG), BA and C3-symmetric molecules comprise rather different molecular structures and are reported to have self-assembly properties [27][28][29][30][31]. Therefore, the anthranilamide-based short peptides were incorporated into BA and C3-symmetric scaffolds. A bola amphiphile is an amphiphile which contains two hydrophilic ends that are connected via a hydrophobic spacer and which can cooperatively induce fibre formation [32][33][34]. The anthranilamide-based short peptides 7-9 were incorporated at different positions (ortho-, meta-, and para-) of benzene dicarboxamide to examine the effect of molecular structure on the properties of the corresponding hydrogels. In addition, hydrogelator 10, linked via an oxalyl spacer, was also synthesised to investigate the effect of using a shorter hydrophobic spacer. Further building on the bola amphiphile scaffolds 7-10, which contain two hydrophilic ends connected by a hydrophobic spacer, the anthranilamide-based peptides were incorporated into the C3-symmetric benzene-1,3,5-triscarboxamide (BTA) scaffold. BTA derivatives have been reported to form helical columnar stacks as the result of cooperative π-π stacking, which led to excellent hydrogelation 35. Hence, the BTA-bridged anthranilamide-based short peptide 11 was expected to form a supramolecular hydrogel. Initially, to obtain the anthranilamide-based short peptides, isatoic anhydride was ring-opened using a modified known procedure 36.
Isatoic anhydride was heated under reflux with methyl L-phenylalanyl-L-phenylalaninate hydrochloride in the presence of an inorganic base to provide compounds 12a-c as pure products after purification by column chromatography. Their structures were confirmed by 1H NMR, with characteristic singlets around δ = 6.3 ppm and δ = 8.8 ppm corresponding to the aniline and amide protons, respectively. Compounds 12a-c were treated with the respective acyl or sulfonyl chloride, followed by an ester hydrolysis reaction, to give the carboxylic acids 1-6 in excellent yield (70-83%) (Scheme 1). Self-assembly of anthranilamide-based short peptides. Molecular self-assembly can be defined as a spontaneous process in which disordered molecules or systems form more defined structures as a result of intermolecular interactions 37. In order to form a self-assembled gel, intermolecular interactions and the balance between hydrophobicity and hydrophilicity of short peptide-based gelators often play a significant role. The aromatic capping group of a short peptide-based hydrogelator is usually designed in such a way that it provides not only sources of intermolecular interaction, but also a balance between hydrophobicity and hydrophilicity in the molecule. The partition coefficient, the log P value, is a universal tool used to predict the hydrophobicity of a molecule. Having log P values between 2.53 and 5.25, which are considered to be ideal 38, the anthranilamide-based hydrogelators 1-11 are expected to form stable hydrogels. Additionally, the anthranilamide-based hydrogelator also provides a carbonyl group which might act as a hydrogen bond acceptor. Several triggers, such as physical stimuli, pH switch, solvent switch, or a combination of these, were employed to assess the self-assembly of compounds 1-11 in water. Initially, hydrogel formation of compounds 1-11, at 1% (w/v), was investigated using a combination of pH switch and temperature switch. The compounds were heated in dilute sodium hydroxide (NaOH) in order to deprotonate the carboxylic acid group at the C-terminus of the short peptides. Upon cooling the solutions to room temperature, clear or opaque hydrogels with pH ranging from 9-12 were observed for N-acetyl 1, N-benzoyl 2, N-naphthoyl 3, fluoro-substituted 5 and methyl-substituted 6, as shown in Fig. S1. The other compounds (4, 7-11) remained as clear solutions even after 48 hours. In an attempt to induce gelation in peptides which failed to form hydrogels using the previous method, glucono-δ-lactone (GdL) was added to basic solutions of 4 and 7-11. Compared to a mineral acid, GdL promotes the formation of a homogeneous hydrogel due to its slow hydrolysis and fast dissolution rate in water 39,40. Addition of 3 equivalents of GdL, to protonate the C-terminus of the peptide, resulted in the formation of translucent hydrogels for BA 8-10 (Table 1, Fig. S2). However, using this method, a white precipitate was observed for the N-naphthalene sulfonyl 4, ortho 7, and C3-symmetric 11. In addition, the effect of salt, which can induce gelation in basic peptide solutions due to high ionic strength 41, was also investigated for peptides 1-11. Addition of NaCl or CaCl2 to high-pH solutions of compounds 1-11 resulted in precipitation of these short peptides, presumably due to a salting-out process. In a final attempt to induce gelation in peptides 4, 7 and 11, a solvent switch method using dimethyl sulfoxide (DMSO), methanol, or ethanol as co-solvent was employed [42][43][44][45][46][47].
The majority of the anthranilamide-based short peptides formed a turbid solution which then clarified to various degrees, over the course of seconds to minutes, and subsequently formed clear or opaque hydrogels (Table S1, Fig. S2). However, gels were observed only for the first 5 minutes for N-naphthalene sulfonyl 4 and BA 7 (ortho-), as water and a white precipitate were observed to evolve over time when using DMSO:water at 1% w/v (Fig. S4). Peptide 11 formed a precipitate under all solvent switch conditions tested. Given that these hydrogels were designed with antibacterial applications in mind, and due to the potential toxicity concerns arising from the use of organic co-solvents, further characterisation of peptides 4, 7 and 11 was not carried out. The critical gel concentration (CGC) represents the minimum amount of the anthranilamide-based short peptide required to form a hydrogel. The CGC of the anthranilamide-based short peptides was qualitatively assessed by varying the short peptide concentrations and conducting vial inversion tests [48][49][50]. Hydrogels composed of anthranilamide-based short peptides exhibit relatively low CGCs, ranging from 0.07-0.30% (w/v), with gelation achieved through combinations of pH and temperature switch (Table 1). Here, gelation time represents the time required for these short peptides, at 1% (w/v), to form self-supporting hydrogels as assessed through the vial inversion test. Characterisation of anthranilamide-based hydrogels. It has been reported that interaction between aromatic units in short peptide hydrogelators plays a prominent role in self-assembly 51,52. Therefore, the π-π stacking interaction between aromatic groups of the anthranilamide-based short peptides was investigated using 1H NMR, UV-Vis, and circular dichroism (CD) spectroscopy. 1H NMR and UV-Vis. To investigate the role of the π-π stacking interactions, concentration-dependent 1H NMR studies were performed on N-acetyl 1, N-benzoyl 2, and N-naphthoyl 3, bearing different aromatic caps. Upon increasing the concentration of these hydrogelators, notable upfield chemical shifts (Δδ = 0.1 ppm) were observed for the aryl protons of the aromatic cap, which indicated their involvement in the π-π stacking that initiates the self-assembly process (Figs. 3a and S5) 53,54. The peak broadening (Fig. 3a, green) indicated that the self-assembly of short peptides 1-3 started to occur at very low concentrations. However, complete transformation from the solution phase into the gel phase was observed at their CGC, as indicated by the disappearance of the 1H NMR features (Figs. 3a and S5, red) 55. To support the 1H NMR analysis, concentration-dependent UV-Vis was carried out. The UV-Vis absorption of N-benzoyl 2 showed a bathochromic shift (from 203 nm to 217 nm) and enhancement of a shoulder peak ranging from 240 nm to 350 nm as the concentration was increased from 0.003 mg mL−1 to 0.050 mg mL−1 (Fig. 3b). Similarly, N-acetyl 1 (bearing a less aromatic cap) and N-naphthoyl 3 (bearing a more aromatic cap) also exhibited bathochromic shifts from 202 nm to 213 nm and enhancement of shoulder peaks ranging from 240-350 nm (Fig. S6). These results further support the observation that aromatic groups promote the self-assembly of anthranilamide-based short peptides to form well defined nanofibres leading to hydrogel formation.
Circular dichroism (CD) spectroscopy and ATR-FTIR. Peptides often exhibit conformational motifs such as α-helix, β-sheet, or disordered coils, which can be determined using CD and FTIR spectroscopy. Initially, the far-UV region (240-190 nm), where the main absorbing group is the peptide bond, was investigated using a CD spectrophotometer. The negative band at ~195 nm and relatively low ellipticity above 210 nm observed for N-acetyl 1 indicated formation of a random coil secondary structure 56. On the other hand, the N-benzoyl 2 and N-naphthoyl 3 possessed β-sheet secondary structures, as indicated by the presence of positive maxima at around 190-200 nm (π → π*) and negative minima around 230 nm (n → π*) (Fig. 4a) 57. The presence of two positive maxima, at 197 nm and 220 nm, for 2 and 3 suggested formation of dipeptide nanotubes which are rich in β-sheet secondary structure, facilitated through π-π stacking interactions of the aromatic capping group 58,59. Anthranilamide-based short peptides 5 and 6, bearing fluoro and methyl as substituents, exhibit similar CD patterns to those of N-benzoyl 2 (Fig. 4b), which suggests that introduction of a substituent at the 5-position of the anthranilamide-based short peptides does not affect their secondary structure (Fig. 4c,d). β-sheet secondary structure was also observed in dilute solutions of both BA 8 (meta-) and 9 (para-). On the other hand, BA 10 (linked via an oxalyl linker) showed a characteristic disordered coil, indicated by the appearance of a strong negative band below 200 nm and a weak positive band at ~218 nm 60. In addition, CD spectroscopy was also used as a tool to investigate gradual thermal denaturation of the anthranilamide-based short peptides. Although the CD signal of N-benzoyl 2 (as a model compound) gradually decreased, the overall β-sheet feature was preserved upon increasing the temperature from 25 °C to 60 °C. This result indicates that there are no significant conformational changes and demonstrates the thermal stability of N-benzoyl 2 under physiological conditions (Fig. 5), which is important to note for future antimicrobial and cytotoxicity studies. FTIR was used to further confirm the formation of either β-sheet structure or disordered coil in the anthranilamide-based hydrogels 1-10 (Figs. S7-14). The amide I region of D2O gels made from N-benzoyl 2, N-naphthoyl 3, and BA 8-9 exhibited peaks that correspond to β-sheet structure (1625 cm−1 to 1640 cm−1) 57. Meanwhile, hydrogels made from N-acetyl 1 and BA 10 showed peaks at 1647 cm−1 and 1640 cm−1, respectively, which support the disordered coil structure observed by CD spectroscopy 61,62. In addition, peaks corresponding to their respective secondary structures were also observed in xerogels (air-dried H2O gels) of these anthranilamide-based short peptides 1-10 (Figs. S7b-S14b). This suggested that the secondary structures of these hydrogels were retained regardless of whether the peptide was in a lyophilized or hydrated environment 63. Mechanical properties. The mechanical properties of hydrogels prepared from the anthranilamide-based short peptides were investigated using a rheometer. All hydrogels, except those composed of 6, displayed frequency-independent behavior during frequency sweep tests (FST) (Fig. S15). Hydrogels composed of 6 appeared to undergo irreversible deformation at frequencies >2 Hz, indicating that the gel network is metastable 58,59.
The stiffness of a hydrogel can be approximated by its Gʹ value, where a higher value corresponds to a stiffer hydrogel 60. It can be seen that changing the capping group from N-acetyl to N-benzoyl or N-naphthoyl significantly increases the stiffness of the resulting hydrogel from 3.4 kPa to 16.9 and 5.7 kPa, respectively (Table 2). This result might be ascribed to enhanced aromatic or hydrophobic interactions due to the N-benzoyl 2 and N-naphthoyl 3 capping groups. Hydrogel 5 (with a fluoro substituent) also showed characteristics of a stable hydrogel with similar stiffness (11.4 kPa) to N-benzoyl 2, indicating that the installation of an electron-withdrawing fluoro group at this position does not affect the overall mechanical properties of the hydrogel. In contrast, hydrogel 6, bearing an electron-donating methyl substituent, showed a notable decrease in strength, with Gʹ = 120 Pa (Table 2). Hydrogel 6 showed frequency-dependent behaviour at frequencies above 2 Hz, where hydrogel rupture ultimately occurred. At frequencies below 1 Hz, however, frequency-independent behaviour was observed. For the bola amphiphile-type peptides, diverse mechanical properties were observed for hydrogels 8-10 depending on the spacer, owing to their different self-assembled structures. BA 8, which is linked at the meta-position, gives a moderately stiff hydrogel with Gʹ = 1.6 kPa. Interestingly, the structural isomer (para- 9) yields a stiffer hydrogel (Gʹ = 12.9 kPa), possibly due to a more linear packing motif which could stabilize the multilayer nanostructure. In addition, BA connected through an oxalyl spacer 10 also showed characteristics of a stable hydrogel, with Gʹ = 10.3 kPa. The strain sweep test (SST) was conducted to determine the linear viscoelastic region (LVER) of a hydrogel. A larger LVER suggests that the hydrogel is more resistant to an applied oscillatory strain, such as that which can be applied by cells 64. Hydrogels composed of the N-acetyl capped peptide 1 show deformation upon the application of a relatively small strain (0.2 ± 0.01%), indicating their unstable nature, potentially due to a lack of aromatic/hydrophobic interactions 65. Aromatic-aromatic interactions are known to form more stable supramolecular hydrogels 66. In agreement, the N-benzoyl 2 and N-naphthoyl 3 were more amenable to applied strains, with LVERs up to 5.6 ± 0.03% and 1.9 ± 0.02%, respectively (Fig. S16). The presence of fluoro, as an electron-withdrawing substituent, in hydrogel 5 increased the LVER from 5.6% to 8.5%. In contrast, hydrogel 6, bearing a methyl substituent, exhibited a significantly shorter LVER (0.6%). This result might be accounted for by the electronic contributions from the substituent on the anthranilamide core, which affected the overall mechanical properties of the resulting hydrogels. Compared to BA 8 (meta-), the isomeric structure BA 9 (para-) exhibited a shorter LVER, presumably due to its more linear packing, which reduces the flexibility of the resulting hydrogels. In addition, BA 10 (oxalyl-) displayed a significantly shorter LVER, potentially due to the less aromatic and shorter spacer compared to BA 8 and 9. Network morphology of the self-assembled gels. To gain some insight into the morphology of the hydrogels, xerogels of 1-3, 5-6, and 8-10 were imaged using atomic force microscopy (AFM). In general, these hydrogels possessed fibre-like structures with different diameters, as shown in Fig. 6.
The N-acetyl-bearing hydrogel 1 consisted of two fibre populations, small fibres with a diameter of 60 ± 13 nm and larger fibres with a diameter of 180 ± 20 nm (Fig. 6a). Straight fibres with no junction zones were observed, which might explain the brittle characteristics of these hydrogels as measured by rheology. It has been reported that less aromatic hydrogelators tend to form thicker fibres than more aromatic hydrogelators 17,18. Consistently, N-benzoyl 2 and N-naphthoyl 3 (bearing more aromatic groups) exhibit fibres with smaller diameters of 18 ± 5 nm and 12 ± 5 nm, respectively. Introducing substituents, such as fluoro or methyl, did not significantly change the overall morphology of the fibres (Fig. 6d-e). The fluoro-substituted hydrogel 5 possessed fibres with a similar diameter (16 ± 0.1 nm) to those of hydrogel 2. Meanwhile, hydrogel 6 (having a methyl substituent) showed a notable decrease in diameter (9 ± 0.1 nm). This is somewhat surprising, given the significant differences observed in the mechanical properties (storage modulus and LVER) for hydrogels of 5 and 6. In addition to exhibiting distinct mechanical properties, the isomeric peptides BA 8 (meta-) and BA 9 (para-) also showed divergent fibre morphology (Fig. 6f-g). Hydrogel 8 (meta-) showed formation of small straight fibres (15 ± 2 nm), while the hydrogel made from para- 9 displayed twisted fibres with a diameter of 112 ± 10 nm. The formation of thicker fibres, along with the extensive bundling observed for BA 9 (para-), clarifies its much higher Gʹ value compared to BA 8 (meta-) shown by rheology 67. In addition, BA 10 (linked via an oxalyl spacer) showed formation of slightly curved fibres with a diameter of 29 ± 7 nm. The antibacterial activity of hydrogels made from N-acetyl 1, N-benzoyl 2 and N-naphthoyl 3 was assessed in order to evaluate the effect of increased hydrophobicity on antibacterial activity. Further, the stiffest (BA 9) and softest (methyl 6) of the remaining gelators were tested to discern the relationship of hydrogel stiffness to antimicrobial activity. Staphylococcus aureus, the most common causative organism of skin and soft tissue infections, was chosen as the bacterial strain to assess the hydrogels' antibacterial activity 68. These hydrogel surfaces were challenged with a bacterial inoculum of 3 × 10⁴ cfu mL⁻¹ and viability was measured after 18 hours of incubation. Interestingly, N-benzoyl 2, N-naphthoyl 3, and BA 9 (para-) exhibited significant antibacterial activity, with bacterial reductions of 4.4 log10, 9.0 log10, and 1.9 log10, respectively (Fig. 7a). In contrast, the less stiff hydrogels (N-acetyl 1 and methyl 6) did not show significant bacterial reduction. This result was anticipated, as rheological properties (in particular a higher Gʹ value) have been reported to give rise to the antibacterial function of a hydrogel by providing mechanical support to individual fibres and fibrous networks 69. Furthermore, the observed antibacterial activity was not due to the presence of NaOH, as the concentration of NaOH in all of the hydrogels tested was found to be below its MIC against S. aureus (7.5 mg/mL) 70. This result is consistent with previous studies, where short peptide-based hydrogels without cationic charge were shown to exhibit antibacterial activity against Gram-positive and Gram-negative bacteria 71-74. Cytotoxicity of hydrogels 2, 3, and 9. In order to successfully treat infection, antimicrobial compounds need to exhibit high toxicity towards bacteria but low toxicity towards mammalian cells.
Although the diphenylalanine moiety is known to be non-toxic, the cytotoxicity of short peptide-based hydrogelators is often defined by their capping group 19,[75][76][77]. With this in mind, the cytotoxicity of the lead candidates from the antibacterial studies, N-benzoyl 2, N-naphthoyl 3, and BA 9 (para-), was examined against HEK293T cells as a robust mammalian cell model. As the antibacterial activity of 2, 3 and 9 was evaluated in the gel phase, the contact cytotoxicity of hydrogels at concentrations of 0.25, 0.5, and 1% (w/v) was examined. At these concentrations, which are far above their CGC, both 2 and 9 exhibited moderate cytotoxicity towards HEK cells (Fig. 7b), with no significant variation across concentrations. Unfortunately, the N-naphthoyl hydrogel 3 showed poor cell viability against HEK cells (Fig. 7b, green), potentially owing to its increased hydrophobicity, which has previously been shown to correlate with cytotoxicity for short peptide gelators 78. Conclusions Anthranilamide-based short peptides have been successfully incorporated as LMWG, bola amphiphiles (BA), and C3-symmetric molecules via ring-opening reactions of isatoic anhydride in solution phase with excellent yield. The short peptides reported herein formed hydrogels using combinations of pH switch and heat, or solvent switch, as triggers at relatively low concentrations (0.07-0.30%). The hydrogel properties (such as mechanical strength, secondary structure, and fibre morphology) can be modulated by varying hydrophobicity or introducing substituents on the capping group. Hydrogels made from N-benzoyl 2 and BA 9 showed the most favorable viscoelastic properties. In addition, these hydrogels exhibit antibacterial activity against S. aureus and moderate toxicity against HEK cells. Further modification of the hydrogel scaffold is required to improve the cytotoxicity of the hydrogels for biomedical applications, such as topical antibacterial gels. Materials and Methods Synthesis. All chemicals and solvents used were purchased from Chemimpex, Combi Blocks, or Sigma Aldrich and were used without any further purification. The anthranilamide short peptides were synthesised via ring-opening reactions of isatoic anhydride. The detailed synthetic procedures for hydrogelators 1-11, along with their characterization data (IR, 1H NMR, 13C NMR, and HRMS), are given in the Supplementary materials. Preparation of hydrogels. To a pre-weighed compound in a glass vial, 1 M NaOH (2-8 molar equiv.) was added, followed by addition of Milli-Q water. The suspension was then heated and vortexed vigorously for 15 minutes to completely dissolve the compound. In the case of N-acetyl 1, N-benzoyl 2, N-naphthoyl 3, fluoro 5 and ...
Figure 7. (a) Antibacterial activity against S. aureus 38 using the viable count method (n = 2, *p < 0.001, ns = no significant difference between hydrogels and negative control) of hydrogels made from N-acetyl 1, N-benzoyl 2, N-naphthoyl 3, methyl 6, and BA 9 (para-) at 1% w/v. (b) Cytotoxicity of hydrogels made from N-benzoyl 2, N-naphthoyl 3, and BA 9 (para-) against HEK293T cells.
CD spectroscopy. The anthranilamide-based hydrogels were prepared at 0.8% (w/v) and were diluted 10 times before being transferred into a 0.5 mm path length cuvette. Meanwhile, for the concentration dependence, the hydrogel made from N-benzoyl 2 at 0.8% (w/v) was diluted to make up concentrations ranging from 0.058 mM to 0.470 mM before being transferred into a 0.5 mm path length cuvette.
CD spectra were obtained using a ChirascanPlus CD spectrometer (Applied Photophysics, UK), scanning wavelengths of 180-500 nm with a bandwidth of 1 nm, 0.6 s per point, and a step of 1 nm 22. The outcomes of three experiments were then averaged and plotted as a single spectrum. The high tension (HT) value of each experiment was maintained below 600 mV (Fig. S17). Attenuated total reflectance Fourier-transform infrared spectroscopy (ATR-FTIR). D2O gels of anthranilamide-based short peptides 1-10 were pre-formed in a glass vial at 3% w/v. Heat was applied to these D2O gels to trigger the transformation to their solution phase. Subsequently, two drops of each D2O gel were placed on the ATR crystal and allowed to stand for 5 minutes. On the other hand, xerogels were formed in situ by applying nitrogen to two droplets of hydrogel made from anthranilamide-based short peptides 1-10 at 3% (w/v). The spectrum was recorded using a Spectrum 100 FTIR spectrometer (PerkinElmer, USA) fitted with a 1 mm diamond-ZnSe crystal, from 4000-650 cm−1 with 4 cm−1 resolution and 32 scans. Rheology measurements. The viscoelastic properties of hydrogels made from the anthranilamide-based short peptides were determined using an Anton Paar MCR 302 rheometer with a 25 mm stainless steel parallel plate configuration, as previously described 22,23. The pre-formed hydrogel was warmed using a heat gun, and the resulting solution (560 µL of 1% (w/v)) was transferred onto the rheometer plate. The other plate was lowered to its measuring position (1 mm gap), and the solution was allowed to stand for 2-24 h for the gel to completely form. To prevent solvent evaporation, a solvent trap using Milli-Q water and a Peltier temperature-controlled hood was employed. The frequency sweep test (FST) was performed at a constant strain of 0.1% over frequencies from 10 Hz to 0.01 Hz. Meanwhile, the strain sweep test (SST) was conducted at a constant frequency of 1 Hz over strains from 0.1% to 100%. In addition, a temperature sweep test was performed to determine Tg values, using a constant frequency of 1 Hz and a constant strain of 0.1%, with the temperature ramping from 25 °C to 90 °C. The rheology data presented are an average of three repeats. Atomic force microscopy. A drop of the pre-gel solution was cast onto a mica substrate and the droplet was gently spread using a glass slide. Pre-gel solutions of the anthranilamide-based short peptides were obtained by either heating the thermo-reversible hydrogels 1-3 and 5-6, or quickly transferring solutions of 8-10 after GdL addition but prior to gelation. The samples were left to dry overnight before being imaged. Imaging was undertaken on a Bruker Multimode 8 atomic force microscope in ScanAsyst Air (PeakForce Tapping) mode, which is based on tapping-mode AFM but whereby the imaging parameters are constantly optimized through the force curves that are collected, preventing damage to soft samples 22. Bruker ScanAsyst-Air probes with a spring constant of 0.4-0.8 N m−1 and a tip radius of 2 nm were used in this experiment. Antibacterial activity. The antibacterial activity of the anthranilamide-based hydrogels was assessed using a modification of a known method 79. Initially, a single colony of S. aureus 38 was grown overnight in Luria-Bertani (LB) broth medium (Sigma-Aldrich) at 37 °C. The resulting bacterial culture was centrifuged and harvested. The bacterial pellet was re-suspended in the same volume of LB, twice.
The optical density (OD) of the resulting culture was adjusted to 0.1 at 600 nm in LB (10^8 cfu mL−1) and the bacterial suspension was further adjusted to 3 × 10^4 cfu mL−1. The bacterial suspension (1 mL) was carefully cast on top of the pre-formed hydrogels (1% w/v; 1 mL) in glass vials. As a negative control, a bacterial suspension without a hydrogel was used in the experiment. These vials were incubated at 37 °C for 18 h. Serial dilutions were performed on 100 µL of the bacterial suspension (taken from on top of the hydrogels) using phosphate-buffered saline (PBS). 20 µL of each dilution was carefully transferred onto nutrient agar plates and incubated at 37 °C for another 18 h. The following day, bacterial growth inhibition was recorded using the viable count method (the back-calculation behind this count is sketched at the end of this section). This experiment was performed twice in triplicate. Multiple sample comparison was performed using one-way ANOVA at p < 0.05.

Cytotoxicity assays. Following a previously reported method 22, cytotoxicity measurements were performed using an Alamar Blue colorimetric assay on HEK293T cells. HEK293T cells were passaged using standard cell culture procedures. Cells were detached with trypsin and centrifuged (1000 rpm for 3 min). After the supernatant was removed, the cells were re-suspended in Dulbecco's Modified Eagle Medium (DMEM) at a concentration of 60,000 cells/mL. Cells were seeded onto the hydrogels at 6,000 cells/well. Hydrogels were prepared as described above and 100 µL was cast into the wells of a 96-well plate in triplicate. After incubating overnight, 100 µL of media was added to the set hydrogels and incubated overnight. The following day, excess media was aspirated and cells were seeded atop the hydrogels as above. After incubation for 24 hours, 10 µL of Alamar Blue was added to each well, followed by further incubation for another 4 hours. Wells containing cell-free hydrogels, no hydrogel substrate, and a negative control of 20% (v/v) DMSO were prepared as controls. A BioRad Benchmark plate reader was used to measure the absorbance at 570 nm and 596 nm. Each experiment was repeated at least three times.
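For readers unfamiliar with the viable count arithmetic used above, the following minimal sketch (in Python, purely for illustration) shows how CFU per mL is back-calculated from a single plate in the dilution series. The colony number and dilution used here are hypothetical; only the 20 µL plating volume matches the protocol described above.

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.02):
    """Viable count from one plate of a serial-dilution series:
    colonies observed, the dilution factor of that plate, and the
    plated volume (20 uL = 0.02 mL, as in the protocol above)."""
    return colonies * dilution_factor / plated_volume_ml

# Illustrative numbers only: 30 colonies on the 10^2 dilution plate.
print(cfu_per_ml(30, 1e2))  # -> 150000.0 CFU/mL
```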
Local Renyi entropic profiles of DNA sequences Background In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition therein explored was based on the Rényi entropy of probability density estimation (pdf) using the Parzen's window method and applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. Results The new methodology enables two results. On the one hand it shows that the entropic profiles are directly related with the statistical significance of motifs, allowing the study of under and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at . Conclusion The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures. Background Biological sequences are the ultimate support for the description of Biological Systems. In particular, key aspects of sequence analysis are known to play a role in integrated analysis of regulatory networks: for example in motif searching and inference. Over the last decades and more recently due to the development of a considerable number of whole genome sequencing projects, several efforts have been made to mathematically model DNA sequences. In particular from the statistical side, the use of Markov based models [1] has widespread and proven to be effective in tackling the problem of data mining of biological sequences, through variable length Markov chains [2,3], interpolated Markov models [4], fractal prediction machines [5] for symbolic time series based on Chaos Game Representations [6], to name just a few. Other algorithmic approaches based on the computational side have also proven to be useful [7]. All this effort allowed establishing important relations between the results obtained (computationally and statistically) with real biologically significant findings. From these models developed for DNA, it is now apparent that each genome has pervasive [8] motif and compositional characteristics in terms of the frequencies of its constitutive L-tuples or L-length motifs, which gave rise to the genomic signature concept [9]. This fact can be directly employed for horizontal transfer detection and characterization, coding vs. non-coding discrimination [8,10], study and compare DNA through the use of composition profiles [11] and spectra [12] and other applications partly reviewed in [13]. In this regard and more specifically, an important statistical problem in bioinformatics that emerged is the evaluation of the number of repetitions occurring in biological sequences. More generally, they can occur in distinct hierarchical levels, from single symbols [14] to genes. 
In fact, in a recent paper, the number of gene repetitions was shown to be a key aspect of gene expression and phenotype [15]. Apparently theses repetitions, not only at nucleotide level, might play a key role in genome organization and functionality of networks. The notions of repetitions, entropy and correlation in DNA are unquestionably connected [16][17][18] and references therein -the degree of predictability of a sequence, which is closely related with its internal repetition and compression, can be measured by its entropy. The major importance of this research has provided evidence that is already too vast to fully account for. In particular, the relation between motif over-or under-representation is usually related with their biological function. This creates the need for an efficient method to analyze, for different parameters sets, the degree or scale of each DNA region. In a recent report [19], the authors defined a new continuous measure of DNA entropy, based on non-parametric density estimation applied to Chaos Game Representation (CGR) and Universal Sequence Maps (USM) within the Rényi theory. The idea therein explored was that there is a close relationship between the statistics of the sequences, given by their constitutive motifs, and their entropy, measured under information theory methodologies. In that report the Rényi entropy was estimated in a global approach, and the measures obtained were compared with random sequences by Monte Carlo simulation. Although the main concepts were then introduced, that report was incomplete in the sense that just a global analysis was conducted. Specifically, no exploration of local patterns and fine tuned neighboring analysis was conducted, which is finally allowed by the present work, with the introduction of the concept of the Entropic Profile (EP). Entropic profiles were defined previously but in a different context and scope: they were estimated using the histograms of the L-mer or L-tuple frequencies in DNA [20]. In that report the authors could discriminate between random and natural DNA sequences using the Shannon entropies of the histograms obtained from the CGR for different resolutions or oligomer lengths. Although the same name was used, that previous endeavor focused on a global perspective of sequence entropy [19] whereas this report proposes and investigates a local entropy formulation instead. In fact, the results obtained by Oliver and colleagues are global features for each DNA sequence, different from the present proposal of local based information per position/symbol. Another type of sequence profile also explored was based on linguistic complexity [21] and low entropy DNA zones [22]. In the present report the definition of entropic profile arises from the direct estimation of a local density, derived from the Parzen's window method described before. In our last report this estimation allowed the calculation of a global entropy measure, according to the Rényi definition. This report describes the next logical step of exploring complementary methods to access local information as to identify the location and composition of the conserved sequence which existence might have been anticipated from the global measures of entropy. The rationale is to have a function that assesses, for each position in the sequence (illustrated here for DNA), the information content of L-tuple suffixes directly from the density kernel function estimate. 
Such a solution should enable the scale-independent extraction of motifs without the need to identify complex state automata for unit succession. In addition to our preceding report on Rényi entropy for global characterization of sequences, the study reported here also builds on the identification of a kernel function that produces a more accurate density estimation in CGR/ USM projections of symbolic sequences [23]. The more conventional use of symmetrical functions as we did with a Gaussian Parzen kernel produces a rough fit to the characteristically fractal nature of iterative map projections. That approximation did suffice for assessment of global entropy [19] but it is not refined enough for the intended density estimation resolved locally at the sequence unit level. Future applications of the methodologies here proposed might include motif inference and extraction, providing tools for the construction and inference of generalized sequence models for whole genomes. Results and Discussion This section presents some entropic profiles calculated for the DNA sequences described below. The relation between this values and former results is also investigated. Additionally, the influence of the parameters on the profiles is discussed. DNA sequence dataset description For sake of clarity this report uses the same dataset previously studied [19], thus allowing a comparison of results, in the continuity of the former proposal. In particular, the results for a subset of those sequences with known present motifs will be shown and extensively studied. In order to further test the estimation of the profiles to more challenging datasets, the analysis of whole genomes is also included. More specifically, the detection of Chi sequences in Escherichia coli and Haemophilus influenzae will be assessed. These genomes have been extensively analyzed after the completion of its DNA sequencing projects, thus constituting an excellent dataset to test new procedures. In particular, several important motifs have been studied elsewhere and can be compared directly with the proposed method. The following Table 1 recalls the DNA sequences examined. All the datasets and additional information are available in the webpage referred to above. Entropic profiles and parameters optimization The tests consisted on calculating the entropic profiles (EP) for different combination of parameters L and φ and check for particular features. The use of artificial DNA allows the accurate study of the impact of the parameters on the profiles obtained. The results can be directly obtained by using the deduced formulae of Equations 5 for L, φ (x i ) and their corresponding normalized values L, φ (x i ) (Eq.3), after specifying the parameters (see Methods and online software). The results presented in this section are focused on the analysis of specific positions, known to be important and/ or contain statistical significant motifs as suffixes. For example, Figure 1 represents the profiles obtained for the sequence m4 with the motif 'ATCG' implanted. This motif was implanted n = 20 times at equally spaced positions p = 50+i100, i = 1, ..., 20 (see details in [19]). By studying one of the positions where this suffix ends (as an illustrative example p = 353 was chosen), one immediately assesses for which combination of parameters L and φ the maximum values of the profiles is obtained. 
In this case this maximum is achieved with L = 4 and φ of approximately 1 (one might further search this parameter space continuously in order to optimize φ, but this is not pertinent in this explanatory step). As seen from Figure 1a) and 1b), there are parameter combinations for which that particular position/suffix is highlighted, with normalized density values well above the alternative choices. It was not by chance that the maximum was attained at L = 4, since this is precisely the length of the highly repeated suffix, so L max ≥ 4 was expected to be a local maximum of the EP. In the other panels of Figure 1 the entropic profile for the complete sequence is plotted, using the parameters previously optimized for the chosen position (p = 353). These plots allow an overview of the whole sequence using local information obtained for a specific putative important suffix and, in fact, using this combination of parameters one immediately recovers all the positions where the known motif appears, which are simply the peaks on the graph. Panel d) shows a detail of the EP (from position 300 to 400), clearly illustrating the position where the implanted motif "ATCG" ends, with a local density maximum around EP(353) = 3.9. The expected number of counts under a first-order Markov chain model would be 10.7 (p-value = 0.0027, z-score = 2.78). In Figure 1e) and 1f) the corresponding density estimations on the CGR map are also shown for two distinct parameter sets. Compared with the Gaussian function, this kernel is better adjusted to the CGR square-based geometry and presents a more clear-cut profile, as expected. The darker squares correspond precisely to the sub-quadrants of the implanted motif. The following figures present the same results obtained with the other datasets under study.

Table 1. The artificial sequences m3, m4 and m5 are obtained by generating random DNA (with symbol emission probabilities p A = p T = p C = p G = 0.25) and subsequently implanting the motifs described (respectively 'ATC', 'ATCG' and 'ATCGA') in specific positions. The sequence Es corresponds to the concatenation of real DNA from 20 promoter regions of Bacillus subtilis [45,46], with the known consensus structured motif TTGACA-(space)-TATAAT with at most one point mutation or substitution. The sequences Ec and Hi are the complete genomes of Escherichia coli and Haemophilus influenzae extracted from NCBI GenBank.

Figure 1. Entropic profile (EP) for sequence m4 (see Table 1). Several parameter combinations L and φ are presented, together with the corresponding EP values.

In Figure 2 the same pattern occurs, with a maximum EP(393) = 3.8 obtained for L = 3, again the implanted motif length. It should be mentioned that occasionally, for some positions where the motif "ATC" appears, the maximum occurs for a value L > 3. This can also happen and simply means that longer, non-implanted motifs appeared more often than would be expected by chance; in this case "ATC" is embedded in a longer significant motif, i.e. it is contained in a longer string with potential significance. Interestingly, when plotting all the EP values for the sequence using L = 3, one obtains additional, non-implanted motifs, which occurred just by chance - extra peaks with non-equal spacing in Figure 2c) and 2d). In fact, the probability of one specific motif of length 3 (under a null model of symbol equiprobability) is 4^-3, which implies, for a sequence of length 2000, that the expected number of counts is roughly equal to 31.

Figure 2. Entropic profile (EP) for sequence m3 (see Table 1). The same analysis was conducted as for m4; see the legend of Figure 1.
This simply means that the motif already existed in the random sequence m3 before the implantation took place. The detail graph - Fig. 2d) - shows precisely these "extra" appearances. If one uses a first-order Markov chain model as previously, the expected number of counts becomes 60.08 (p-value = 2.8E-10, z-score = 6.2). A similar interpretation can be made regarding sequence m5: the positions where the suffix "ATCGA" appears have maximal values for L = 5, although with high values in the range L = 4 to L = 7, which indicates nested significant motifs. The entropic profile for the complete 2000-base sequence shows the maxima of the equally spaced motif (see Fig. 3), where an extra peak is noticeable that corresponds to a previously existing motif ATCGA (ending at position 729). By spanning the parameter space (L, φ) it is possible to find the combination for which this suffix attains a high relative value. By using these optima in the EP one obtains a profile that immediately highlights the suffixes where the highly repeated motif appears. Some other maxima appear occasionally (results not shown), but were found to correspond to other interpretable extreme values. The expected number of counts for this motif is just 3.07, which, compared with the observed 21 occurrences, gives a p-value ≈ 0 (z-score = 10.02).

Figure 3. Entropic profile (EP) for sequence m5 (see Table 1). Position analysis for sequence m5, analogous to those conducted previously; see the legend of Figure 1.

Finally, Figure 4 shows part of the results for the real DNA sequence at the position corresponding to the end of the TATA box (motif = "TATAAT"). The graph for this position shows precisely that L = 6 is an interesting scale to search for. The EP, in contrast to the former ones, does not exhibit a clear trend. In fact, unlike the former sequences, which were artificially generated and presented non-degenerate, highly conserved motifs, the real DNA exhibits several point mutations that introduce some "noise" into the estimations. When plotting the complete profile for this sequence and observing one detail it is possible to recover the complete structured motif, known to bind specific transcription factor binding sites, with values EP(TATAAT) = 4.3 and EP(TTGACA) = 3.6. It should be stressed, however, that these results are biased towards the sequence itself: in this particular case, the concatenation of the promoter regions of B. subtilis provided a set with conserved motifs, at least to the point where they could be detected by density estimations. Of course, if non-conservation is allowed up to a higher level, the EP becomes noisier and eventually the signal will be lost, hampering the recovery of any significant motif if no pre-processing correction is performed. The analysis based on Markov chains gives for the TATAAT motif an expected number of counts of 1.60 (p-value ≈ 0, z-score = 10.38) and 0.94 for TTGACA (p-value ≈ 0, z-score = 9.54). The most common motif, with EP(AAAAAA) = 5.4, is highly periodic, which explains the peak, although under a Markov chain model it is expected to occur 11.67 times (p-value = 0.1245, z-score = 1.15). The last two datasets consist of the whole genomes of two Gammaproteobacteria: Escherichia coli K12 and Haemophilus influenzae Rd (see Table 1 for NCBI GenBank accession numbers). The regions where Chi sequences appear will be analyzed in both genomes. Chi (crossover hotspot instigator) sites are homologous recombination hotspot octamer sequences which modulate the exonuclease activity of RecBCD.
This enzyme is necessary for chromosomal dsDNA repair and the integration of exogenous dsDNA, which supports the idea that Chi sites have a biologically functional role [24]. Since Chi motifs are orientation-dependent and strand-specific, the sequence to be analyzed should first be processed to comply with this property. This means that one should extract the whole genome and use the DNA sequence from the origin of replication up to the terminus, plus the reverse complementary sequence, since chromosome replication in bacteria starts from one replication origin (oriC) and proceeds bi-directionally until the replication forks reach the termination site (terC). These pre-processed genomes conform to the 5'->3' direction of replication and were therefore used throughout the analysis. The oriC and terC positions (referred to the NCBI GenBank database) have been estimated based on experimental data and asymmetric properties [25] and are specified in Table 2. Chi sequences (see Table 2) are statistically over-represented in the genome of E. coli (5'-GCTGGTGG-3'), appearing more often than would be expected by chance, whereas in H. influenzae (where 5'-GNTGGTGG-3' and 5'-GSTGGAGG-3' show Chi activity) they are known to be less frequent and less conserved. This makes for two different datasets with distinct features that involve different degrees of difficulty in detecting these regions. The study of Chi sites has been the subject of many analyses and therefore constitutes an excellent test dataset to assess the strength of the entropic profile approach in detecting these motifs. In particular, several recent papers have assessed their statistical significance using Markov models [1], analyzing the 8-tuple frequency for the whole genome of E. coli [26] and also comparing Chi site conservation in both organisms [24]. The expected number of occurrences of an 8-tuple in E. coli and H. influenzae using a Markov model of order 0 (only nucleotide abundance is taken into account) is respectively 70.796 and 27.926. One immediately sees that in E. coli this motif is highly represented whereas in H. influenzae this fact is less evident. Interestingly, when analyzing whole genomes, several motifs appear with p-values near 0, i.e. they occur in exceptionally high numbers when considering a Markov chain model. This fact does not allow their accurate comparison and is a major drawback of using solely the p-values to assess the statistical significance and correctly compare and order the relative importance of these motifs. Therefore, as explained in the Methods, the normalized z-scores are also reported for clarity. For example, using a first-order Markov chain model, the expected number of counts for the Chi sequence in E. coli and H. influenzae is 85.06 and 12.34 respectively. Although this motif has a p-value ≈ 0 for both sequences, the corresponding z-scores of 73.37 and 12.43 respectively put it in different ranks among all motifs of the same length. When analyzing one (random) position where a Chi sequence ends in E. coli (in exactly the same way as the previous analysis), the corresponding profiles are obtained. USSs (uptake signal sequences) are involved in natural competence, which is a genetically controlled form of horizontal gene transfer in some bacterial species, related to their ability to take up DNA from the surrounding environment (reviewed in [28]). This process allows genetic exchange in bacteria, the only organisms known to actively take up DNA from the environment and recombine it into their own genomes [29].
The DNA uptake machinery on the cell surface preferentially binds and takes up fragments containing this specific short sequence. In particular, H. influenzae is able to take up double-stranded DNA of its own species and close relatives, facilitated through the recognition of USS, which are indeed over-represented in its genome. One interesting statistical aspect of the USS distribution, besides their extreme over-representation, is that these sequences appear equally partitioned between both strands and are remarkably and significantly evenly spaced around the chromosome [30]. They can be constituted by the 9 bp core referred to above, within a longer 29 bp consensus. The evolutionary origin and function of USS was recently addressed [31] by confronting two models, a preference-first hypothesis and a molecular drive hypothesis. Nevertheless, this issue remains controversial [32]. Through the analysis of the H. influenzae complete genome conducted above, one obtains peaks in the entropic profiles precisely at these ubiquitous motifs, which definitely obscures the retrieval of Chi sequences, whose number of occurrences is not at all comparable with the USS frequency. In fact, the profile obtained for the maximum values of (L, φ) shows that the Chi sequence (with G) attains a maximum entropic density value of 0.12, which is way below the detection level when compared with the values obtained for USS, which were EP(AGTGCGGT) = 9.78 and EP(AAGTGCGG) = 11.13. The number of occurrences of Chi motifs in the genomes shows that they are over-represented in E. coli (761 occurrences) but not in H. influenzae (a maximum of 77 occurrences). This phenomenon is well understood, and some authors name it "contamination" [1]: the highly over-represented motif contaminates the calculation for weakly expressed segments. The program R'MES [33] lists precisely the USS motifs and their variants showing this behavior. One idea to assess the statistical significance excluding this bias is to delete, from the original sequence, the regions/positions where this ubiquitous 9-tuple appears [1]. This is approximately comparable to performing exact Markov calculations and can therefore be used to further study the sequence. The values obtained for the transformed sequence were nevertheless very low, around EP = 0.16 (results not shown). After investigating what might be happening, it was found that other motifs emerged even when all USS were deleted from the genome. For example, the 8-tuple AAAATTTT (p-value → 1, z-score = -10.70) appears with high EP values, along with other motifs constituted by long successions of A's and T's. These long adenine-thymine tracts, previously detected in other organisms such as yeast [34,35], might have an important role due to their strong DNA bending properties [36]. Although the detection of Chi sites failed, other significant motifs could still be recovered.

Entropic profile (EP) for sequence Ec - complete genome of E. coli.

Figure 6. Entropic profile (EP) for sequence Hi - complete genome of H. influenzae. a) and b) Analysis of position 36532 (from the beginning of replication). c) and d) Detail of the EP for positions 36200 to 38200 and 36500 to 36600. The highest peaks in the EP correspond to uptake signal sequences (USS+) 5'-AAGTGCCGGT-3', its reverse complement (USS-) 5'-ACCGCACTT-3' and related motifs, such as AGTGCGGT and AAGTGCGG. The Chi sites are neither particularly well conserved nor overexpressed [24] and are therefore not easily detected with this approach.
This effort highlights an important possible procedure, to be explored further: one should rank the motifs hierarchically and remove the influence of the more ubiquitous motifs that heavily "pollute" the calculations, starting from the most exceptional. In fact, from the profile information we could further envisage an algorithm that automatically extracts putative motifs for each position. This is accomplished by searching the parameter space for the combination (L, φ) that maximizes the estimation at position i, and then using these parameters to retrieve the corresponding suffix (a minimal sketch of this procedure is given at the end of this section). Using this methodology one obtains precisely the implanted motifs of the previous datasets. As an example, the "TATA"-box referred to before is correctly inferred, as are the above-mentioned examples with the artificial sequences (Figure 7). It should be stressed, however, that this is not the most convenient procedure for motif inference problems, since several algorithms already exist that perform these searches very efficiently. Nevertheless, it is interesting to find that combinatorial and probabilistic methodologies are comparable, as the latter offer broader opportunities for theory development while still leading to advantageous numerical solutions. The observation that there is a close relationship between the over-representation detected by the majority of the algorithms and the proposed Entropic Profiles, with their density and statistical significance measures, suggests that they could provide a way of simultaneously finding and statistically classifying the motifs instead of pursuing the two goals separately. The analysis also showed that the statistical significance z-scores and p-values are unequivocally related with the entropic profiles, since most of the algorithms detected the same motifs. Over-represented motifs exhibit a very low p-value, very near zero, and high z-scores and EP values; common motifs, which appear a median and/or expected number of times, have high p-values and low z-scores, which indicates their non-exceptionality under the Markov chain model considered. These are the motifs that also attain low EP values. The full correspondence between both methods is still under study. By expressing the density estimation as a function of the suffix counts, it is also possible to search for under-represented segments, i.e., those whose density is below average. Although not explored in this work, minimum entropic profile values might also play a role in the detection of under-represented motifs. In fact, rare motifs/substrings are known to correspond to traits/regions with very specific functions in high-precision biological processes. The use of unique substrings, or UniMarkers, that appear only once in the genome recently allowed single nucleotide polymorphisms (SNPs) to be located [37,38]. These unique substrings were shown to be clustered close to genes [39]. All these positions can be detected as low-density areas in the CGR and consequently correspond to local minima in their Entropic Profiles. Another example related to low-density points concerns 6-tuple palindromes. These short sequences, which often correspond to restriction sites, are under-represented in E. coli and in the bacteriophage lambda [1,40], thus providing a self-protecting effect. More generally, this methodology can be used to find heterogeneous traits in the genome, related to both local under- and over-representation of motifs. This result can indicate the presence of foreign material, which can have significant applications in the detection of horizontal transfer [11].
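A minimal sketch of the extraction procedure just described is given below, in Python for illustration (the authors' own applications were developed in MATLAB m-code). The function name and the grid of candidate parameters are hypothetical; the profile calculation is passed in as a callable, for example the entropic-profile sketch given in the Methods section below.

```python
def extract_motif(seq, profile_fn, L_values, phi_values, position):
    """Grid-search (L, phi), keep the pair that maximises the normalised
    profile at `position`, and return the suffix of that length ending there.
    `profile_fn(seq, L, phi)` is any callable returning one value per position."""
    best_value, best_L, best_phi = float("-inf"), None, None
    for L in L_values:
        for phi in phi_values:
            value = profile_fn(seq, L, phi)[position]
            if value > best_value:
                best_value, best_L, best_phi = value, L, phi
    # The putative motif is the suffix of length L_max ending at `position`.
    return seq[position - best_L + 1:position + 1], best_L, best_phi
```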
Conclusion. In this report, Entropic Profiles (EP) were proposed as a novel local information entropy measure for DNA sequences. This function builds on previous work on continuous Rényi quadratic entropy, in which the Parzen window method was applied to the non-parametric density estimation of the Chaos Game Representation/Universal Sequence Maps (CGR/USM) of a sequence. Subsequently, the estimation was decisively refined to the accuracy that the determination of local entropy requires. This advance, reported elsewhere, introduced a two-parameter fractal-based kernel, instead of Gaussian functions, which is better suited to the geometry of the CGR domain. The Entropic Profiles proposed here assess per-point/per-symbol normalized deviations from a mean composition signature. The EP calculation was based on one density estimation value per position, thus depicting local sequence information related to the statistical significance of a motif, measured as its global over- or under-representation. Furthermore, it was shown that using this kernel the EP can be calculated independently from a particular representation. The local genome scale (or resolution) is defined by the combination of parameters for which a particular suffix emerges. Therefore, this scanning procedure identifies simultaneously the position and the scale at which the sequence composition is singular, by focusing and adjusting the best parameters locally and then looking back at the overall sequence. There is a strong biological rationale for such an approach, as the genome is organized to conserve motifs at different scales (lengths) and with varying stringency. The underlying hypothesis is that over- or under-represented motifs may be indicative of important biological functions. This conclusion was illustrated with the analysis of artificial DNA sequences, reference genomic datasets and also whole genomes of E. coli and H. influenzae, where known regulatory components and motifs were correctly recovered - both as regards the position and the scale (length) of the conserved segments. By spanning the parameter space of this new function it was possible to study the local scale at which a given suffix and position were implicit. This effort highlighted the interaction between several methodologies in this field. Specifically, it greatly simplifies the exploration of fundamental relationships between distinct sequence analysis approaches and concepts such as metrics on strings, information theory and entropy, iterated function systems and the statistical significance of DNA segments, providing a common ground in kernel-based learning theory.

Figure 7. Conserved motif detection and extraction. By searching the parameter space (L, φ) for a specific position i and finding the maximizing values, it is possible to extract the most significant suffix in the entropic profile context, illustrated here for the first four sequences. Each of the panels corresponds to a different sequence and position where the motif was correctly recovered just by using these maxima: a) m3, b) m4, c) m5 and d) Es (see also Table 1). The profiles for L max and φ max are also shown: apparently one can obtain a non-decreasing function of the positions, which means that previous suffixes are embedded in the implanted motifs.
The procedure proposed here is easily extendable to other kernel function classes, which might be more adequate to model specific traits or genomes. Future work includes the generalization to point mutations and also dealing with nested or embedded motifs. The proposed entropic profiles provide promising new tools for the study of biological sequences, allowing the quantification of repeatability and identifying key parameters for which relevant features arise.

Methods. This section recalls the background work that led to the new analysis described here and defines the main concepts proposed, namely: the CGR/USM representation of DNA sequences; the assessment of entropy in biological sequences and the definition of the local Entropic Profile (EP); and the use of specialized kernel density estimation functions and their conjugation with the EP method.

CGR/USM representation of DNA sequences and Parzen's method. The CGR/USM representation, introduced in [6] and generalized to higher-order alphabets in [41], allows the mapping of a discrete DNA sequence onto ℝ n. Formally, the CGR mapping x i ∈ ℝ 2 of an N-length DNA sequence S = s 1 s 2 ... s N, s i ∈ {A, C, G, T}, i = 1, ..., N, is given by the iterated midpoint rule x i = x i-1 + (1/2)(u s i - x i-1) (Eq. 1), where u s i is the vertex of the unit square assigned to symbol s i. The properties and generalizations of this method have already been studied and extensively applied as a consequence of the natural development of alignment-free techniques for sequence comparison [13,42]. As previously, the variables employed in this work are the USM coordinate sample points {x i}, i = 1, ..., N, that correspond to the symbols {s i}, i = 1, ..., N, in the original sequence. In particular, it was seen in the previous report that these points could be adequately used to estimate the Rényi entropy of the original sequence through Parzen's window density estimation method [43]. This is a non-parametric technique used to estimate a probability density function f from a sample. This method is one of the most widely used kernel-based methods and consists of the choice of a weighting function or kernel κ θ (x). The estimate f̂ θ (x) at a point x is a linear combination of the kernels centered on the observed sample points a i, i = 1, ..., N, defined for a specific window width τ as f̂ θ (x) = (1/N) Σ i κ θ (x - a i; τ) (Eq. 2). In that former report [19], Gaussian (normal) distribution functions were used in order to estimate the Rényi quadratic entropy of the CGR of a given DNA sequence. Due to important algebraic simplifications and properties of the Gaussian kernel, it was shown that this calculation reduces to a simple potential function of the CGR map.

Entropic profile definition. The former equations provide a natural method to extract local information from a DNA sequence. By calculating the values f̂ θ (x i) for each coordinate x i that represents the i-th symbol in the original sequence and a parameter set θ, it is possible to plot, for each position i = 1, ..., N, normalized values f̃ θ (x i) of the density function estimated previously, obtained as the number of standard deviations from the mean (taking into account all the sample points or symbols): f̃ θ (x i) = (f̂ θ (x i) - m θ)/s θ (Eq. 3), where m θ and s θ are the mean and standard deviation of the estimates over all positions. In fact, this corresponds to extracting the local density estimated for each coordinate that represents a symbol in the original sequence context. For example, if a particular motif appears more often than would be expected by chance, the density estimation for that particular position/coordinate will be higher than the average m θ. A minimal illustrative sketch of this construction is given below.
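To make the preceding construction concrete, a minimal Python sketch follows (the authors' applications were developed in MATLAB m-code; this re-implementation is purely illustrative). The corner-to-base assignment, the starting point, and the Gaussian window width are assumptions chosen for the example, not values taken from the paper. The returned profile is already expressed as the number of standard deviations from the mean, in the spirit of Eq. 3.

```python
import numpy as np

# Corner assignment for the CGR unit square; this base-to-corner mapping
# and the starting point are illustrative assumptions.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_coordinates(seq, x0=(0.5, 0.5)):
    """Map a DNA string onto the unit square: each point is the midpoint
    between the previous point and the corner of the current symbol."""
    pts = np.empty((len(seq), 2))
    x = np.array(x0, dtype=float)
    for i, s in enumerate(seq):
        x = x + 0.5 * (np.array(CORNERS[s]) - x)
        pts[i] = x
    return pts

def gaussian_parzen_profile(pts, tau=0.05):
    """Parzen-window density at every CGR point (Gaussian kernel of width
    tau), returned as the number of standard deviations from the mean."""
    diff = pts[:, None, :] - pts[None, :, :]            # pairwise differences
    d2 = np.sum(diff ** 2, axis=-1)
    dens = np.exp(-d2 / (2.0 * tau ** 2)).mean(axis=1) / (2.0 * np.pi * tau ** 2)
    return (dens - dens.mean()) / dens.std()

if __name__ == "__main__":
    seq = "ACGT" * 200 + "ATCG" * 20
    profile = gaussian_parzen_profile(cgr_coordinates(seq))
    print(int(np.argmax(profile)), round(float(profile.max()), 2))
```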
For each parameter set θ one can define the Entropic Profile as EP θ (i) ≡ f̃ θ (x i). Therefore, these values obtained with the kernel estimations are related to the statistical significance of the corresponding suffix present at that particular position, since they represent a density, which is strongly associated with the degree of repetition of a given suffix in the sequence. It is worth noting that the proposed entropic profiles are a descriptive measure of local DNA properties and that a full, extensive comparison with other methods that search for motifs and assign p-values to the results is out of the scope of this work. Future efforts will quantitatively compare these profiles with other models, e.g. Markov chain models, to confirm the quantitative correspondence between methods in the assessment of under- and over-representation of motifs.

Fractal kernel definition. The former approach used Gaussian distribution functions to model the generalized Markov models. One possible drawback of this methodology is related to the domain issue mentioned above, since the normal distribution function is defined on ℝ n whereas the CGR/USM domain is explicitly defined on unit hypercubes. This concern led to the development of another kernel [23] to be used in the CGR density estimation, which is recalled, reformulated and further discussed in this section. Intuitively, this function rounds the value of x j, respecting the borders of the regions that represent specific k-tuples, which are always given by multiples of 2^-k (see figure 1 in [19]). This might also be interpreted as the number of common digits in the binary representations of x j and x, up to the k-th decimal digit. This is more clearly deduced using a numeric representation in base 2. For the CGR mapping x ≡ (x (1), x (2)) ∈ ℝ 2, the 2D step function for a point x ∈ ℝ 2 is defined as the product of the corresponding one-dimensional indicator functions, i.e., the function is 1 when both coordinates x (1) and x (2) belong to the above-mentioned intervals and is zero otherwise. This is due to the indicator function property χ A ∩ B = χ A χ B. For the sake of clarity and notation simplification, in the following formulas all the variables x and x j are assumed to lie in the unit square. The kernel κ f (x) used in this work and extensively presented in [23] is based on a linear combination of block functions I k, using particular resolutions k and a parameter h that defines the height (or weight) of each block. Additionally, the normalization restriction of probability density functions constrains the admissible block heights. Defining φ as the (constant) ratio between two consecutive volumes A k and A k-1, k = 1, ..., L (in 2D), it is possible to express this restriction in terms of φ and, finally, to write the (normalized) kernel κ L, φ (x) with parameters L and φ, centered at x j. The underlying idea is to weight each step function by powers of 4 φ, which corresponds to a sort of generalized Markov model. An illustration of this kernel function (projected onto one-dimensional space) is given in Figure 8 for L = 2, which corresponds to three blocks I k (x), k = 0, 1, 2. Another important property of this function κ is its symmetry with respect to x i and x j: if x i belongs to the interval A k, this means that x i and x j have the same binary expansion up to the k-th digit, which obviously implies symmetry. This allows a straightforward generalization under kernel learning theory, in which specific transformations of the data with kernel functions induce dot products and norms in other function spaces [44].
In fact, this kernel is related to the Cantor distance on strings, which measures precisely the suffix similarity. Furthermore, it should be clear that this new fractal kernel is better adjusted to the CGR geometry: instead of Gaussian functions that span the whole ℝ n domain, the proposed κ (x) is defined on unit hypercubes, which is definitely more in agreement with these iterative maps.

Figure 8. Example of the proposed fractal kernel κ(x). Fractal kernel construction projected onto one dimension, for L = 2 and arbitrary φ.

Entropic profiles with fractal kernels. When using the above-defined fractal-based kernel, the expression for the estimation of the entropic profile is significantly simplified, thus allowing its optimal and straightforward calculation. In fact, for a particular coordinate, each density block is only different from zero if the points in that neighborhood are close, in the sub-quadrant sense. In other words, for one position, the only non-zero blocks of length k correspond to the nearest points, which are at a distance of less than 2^-k apart. Another important note is that this particular kernel, contrary to the Gaussian, which only has two parameters (mean and variance), depends on the point x j: in effect, the shape of the kernel varies according to the rounding procedure and the particular coordinate x j considered. These results show that the parameter φ weights different Markov chain models: φ = 0 means that zero-order, background (equal) frequencies are taken, whereas φ → ∞ corresponds to weighting higher L-tuples, ignoring the lower-order counts, which, in the limiting case, is equivalent to an L-order Markov chain. In effect, f̂ L, φ (x i) can be interpreted as a linear combination of suffix counts up to a given memory length, with increasing (φ > 1/4) or decreasing (φ < 1/4) weights; a minimal sketch of this suffix-count formulation is given at the end of this subsection. These results came about quite unexpectedly, since the kernel defined above was based on a different rationale. It turned out that both perspectives are equivalent in terms of the final formulation. Also noteworthy is the relation between this methodology and generalized Markov models and interpolated Markov chains (IMM). In fact, similar profiles were obtained recently [39] representing the shortest unique substrings in sequences. In the application section, when calculating the normalized values EP θ (i) ≡ f̃ θ (x i), one has to consider a burn-in period corresponding to the first symbols in the sequence. Since the estimation of the profile is biased, in the sense that only higher-order tuples are considered, it is necessary to exclude the first points f̂ (x i), i = 1, ..., b 0, given that no information is provided for higher suffixes up to that position. For that reason, this correction was taken into account when using the EP normalized values. This border effect is nevertheless negligible and can be ignored for longer sequences. The background just presented allows the representation of the entropic profiles EP L, φ as a function of both L and φ, and the search for key parameter combinations to unravel the scale upon which important features might arise in the original DNA sequence.
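The following Python sketch illustrates one plausible reading of this suffix-count formulation (the original implementation was in MATLAB m-code). The (4φ)^k weighting follows the interpretation given above, but the normalization constant and the treatment of the leading burn-in positions are assumptions made for illustration, not the paper's exact formulae.

```python
from collections import defaultdict
import numpy as np

def suffix_counts(seq, L):
    """counts[k][i] = number of positions in `seq` whose length-k suffix
    equals the length-k suffix ending at position i (k = 0 counts all)."""
    N = len(seq)
    counts = np.zeros((L + 1, N))
    counts[0, :] = N
    for k in range(1, L + 1):
        table = defaultdict(int)
        for i in range(k - 1, N):
            table[seq[i - k + 1:i + 1]] += 1
        for i in range(k - 1, N):
            counts[k, i] = table[seq[i - k + 1:i + 1]]
    return counts

def entropic_profile(seq, L, phi):
    """(4*phi)^k-weighted combination of suffix counts, standardised to
    deviations from the sequence-wide mean (cf. Eq. 3). Positions i < L
    lack full suffixes and correspond to the burn-in region noted above."""
    counts = suffix_counts(seq, L)
    weights = (4.0 * phi) ** np.arange(L + 1)
    raw = (weights[:, None] * counts).sum(axis=0) / (len(seq) * weights.sum())
    return (raw - raw.mean()) / raw.std()

if __name__ == "__main__":
    seq = "ACGT" * 100 + "TTAGGCATCG" * 20   # 'ATCG' repeated as a suffix
    ep = entropic_profile(seq, L=4, phi=1.0)
    print(int(np.argmax(ep)), round(float(ep.max()), 2))
```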
Markov Chain-based p-value calculation. In order to compare our method with previous efforts, we also report the p-values and respective statistical z-scores for the motifs analyzed. These values were calculated using first-order Markov chain transition probability tables estimated directly from the whole sequences. This estimation was based on the relative frequency of each oligonucleotide, using pseudo-counts to avoid zero transition probabilities when necessary. After this step, the probability of each motif can easily be computed, along with its expected number of occurrences in a specific sequence. The p-value of a motif m is therefore the probability, under the given model, of observing at least as many counts as the observed number, i.e., prob{N(m) ≥ N obs (m)}. The normal distribution was used as an approximation for the distribution of N(m), with expected values and variances as described in [1]. These variances took into account the overlap capability, or period, of each motif, as described in the same reference. Other approximations, such as using the Poisson distribution, give the same relative order for the motifs. The calculated p-values are reported for each motif referred to in the text. To complement the analysis, and since many of the motifs studied exhibit very low p-values, practically equal to zero, i.e. they are exceptionally frequent, the z-scores and their relative rank order were also reported. In this way a more accurate comparison can be performed. A minimal sketch of this calculation is given below.
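The sketch below, in Python for illustration, estimates a first-order Markov model from the sequence, computes the expected count and z-score of a motif, and converts the z-score to a one-sided normal p-value. Unlike the calculation described above, the variance here is a simple Poisson-style approximation that ignores the overlap (period) correction of [1]; the pseudo-count handling and function names are assumptions of this example.

```python
import math
from collections import Counter

BASES = "ACGT"

def markov1_params(seq, pseudo=1.0):
    """First-order transition probabilities and base frequencies estimated
    from the sequence itself, with pseudo-counts to avoid zero probabilities."""
    base = Counter(seq)
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    p0 = {b: (base[b] + pseudo) / (len(seq) + 4 * pseudo) for b in BASES}
    trans = {a + b: (pairs[a + b] + pseudo) /
             (sum(pairs[a + c] for c in BASES) + 4 * pseudo)
             for a in BASES for b in BASES}
    return p0, trans

def motif_zscore(seq, motif):
    """Expected count, observed count, z-score and one-sided p-value of a
    motif under the first-order model (Poisson-style variance, no overlap
    correction)."""
    p0, trans = markov1_params(seq)
    p = p0[motif[0]]
    for a, b in zip(motif, motif[1:]):
        p *= trans[a + b]
    n_sites = len(seq) - len(motif) + 1
    expected = n_sites * p
    observed = sum(1 for i in range(n_sites) if seq[i:i + len(motif)] == motif)
    z = (observed - expected) / math.sqrt(expected)
    p_value = 0.5 * math.erfc(z / math.sqrt(2.0))   # normal upper tail
    return expected, observed, z, p_value

if __name__ == "__main__":
    seq = "GCTGGTGG" * 5 + "ACGTTGCA" * 500
    exp, obs, z, p = motif_zscore(seq, "GCTGGTGG")
    print(round(exp, 2), obs, round(z, 2), p)
```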
Socio-economic constraints on camel production in Pakistan's extensive pastoral farming

The present research aims to evaluate the diverse husbandry practices, ethno-veterinary practices, socio-economic status and most pressing constraints of camel pastoralists inhabiting desert (Thal) areas of Pakistan, where they maintain herds of Marecha and Barela dromedaries in extensive production regimes. For this purpose, 200 pastoralists were selected at random to fill out an on-site questionnaire. According to the farmers' responses, it was perceived that their living status had improved in the last decades due to the progressive optimization of camel productivity and herdsmen responsiveness. In contrast, calf mortality rates, some traditional husbandry practices and the lack of market investment continued to be the major constraints affecting overall camel production. Ethno-veterinary medicines are widely applied as primary health care, thus influencing the general health, production potential and welfare of camels in the study region. Given this scenario, concerned stakeholders and authorized institutions must re-evaluate the urgent needs of indigenous communities and their education and husbandry skills, and promote economic and financial support in low-income remote areas. In turn, traditional communities will adapt to the changing socio-economic and cultural values with regard to camel husbandry and welfare. If current societal perceptions and demands within this livestock production industry, where camels are conceived as a sustainable food-security animal, are accomplished to the highest possible extent, the effectiveness of the camel value chain will increase and breeders' quality of life will be noticeably enhanced. However, this success could be multiplied if the government devises community education, veterinary cover, marketing facilities and interest-free small loans for pastoralists.

Introduction Livestock production is an integral structural element of the agricultural sector globally, guaranteeing a variety of goods and services, using different animal species and various sets of resources, in a broad array of agro-ecological and socio-economic circumstances (Thornton 2010). In Pakistan, the livestock sector contributes 11.7% of Gross Domestic Product and is a major source of government revenue and export earnings, sustaining the employment and income of the deprived rural community. It is the only food and cash security for underprivileged masses, extensively contributing draught power and manure for fodder and cash crop production (GOP 2019-20). The sale of livestock and their products often constitutes the only source of cash income in rural areas and hence the only way in which subsistence farmers can buy agricultural inputs like seeds, fertilizers and pesticides for cash crop production. Indeed, it represents the main income source for smallholder subsistence farming in some developing countries by providing year-round sustainable food and livelihood products (Faraz et al. 2019a). Besides that, at times of crop failure, this economic niche helps to offset the temporary loss of income and raises the socio-economic status of low-income rural communities (Faraz et al. 2018). Pakistan ranks eighth among the top ten camel-producing countries in the world, with around 1.1 million head (FAOSTAT 2019) and at least 20 different officially recognized camel breeds (Isani and Baloch 2000).
Camel is an important domestic animal well adapted to extremely harsh environments of the desert. Due to its multipurpose role, the camel is gaining importance, particularly as a milk-and meat-producing animal (Farah and Fisher 2004;Faraz et al. 2019b). Camel production systems in Pakistan are mainly based on sedentary regimes where dromedaries (the one-humped camels Camelus dromedarius) are maintained from birth to finishing. Camels are mainly raised on the rangelands having natural vegetation which provide habitat apt for camel herds. The range livestock production system is linked to the pastoral systems whose main product is milk and the main function of livestock is subsistence of the community. Management is characterized by the adaptation of the feed requirements of the animals to the environment through migration; land tenure is communal and transient. In contrast, transhumant and nomadic herds are progressively disappearing because of the advancement of agriculture and the advent of intensive livestock farming systems (Blench 2001;Kaurajo et al. 2020). More than 40% of Pakistan's camel population is in Balochistan, 30% in Sindh, 22% in Punjab and 7% in Khyber Pakhtunkhwa Province (ACO 2006). Numerous research studies (Jasra et al. 1999;Isani 2000, 2003;Khan et al. 2003;Pasha et al. 2013) have discussed and documented the production, management and socio-economic importance of camels in Pakistan. Pakistani camels are well-adapted to their native environment (desert and arid regions) and act as a multipurpose animal for the basic needs, satisfaction and survival of local livelihoods (Samara et al. 2012;Faraz et al. 2019c). Their physiologically unique characteristics allow them to produce even under harsh climatic and extreme environmental conditions, whereas the productive potential of other livestock species are adversely affected and their performance is relatively reduced (Faraz et al., 2013). Such peculiarities are especially exploited by nomadic pastoralists whose subsistence in arid and semi-arid areas of Pakistan is associated with the camel's productive potential (Iqbal et al. 2012). However, the updated technical skills for camel welfare and general health status are lacking for these local communities, so the camels' productive potentialities may be overlooked. The primary cause of failure in most cases by the government and communities has been lack of sufficient understanding of relationships between the biological, economic and social components of each pastoral and rangeland production system (Faraz et al. 2019c). Given the fact that current trends in camel-derived product consumption are expected to change in the present millennium (Khan 2012;Samara et al. 2012), it is imperative to illustrate husbandry practices and related constraints for camel extensive pastoralism in the country. Under this framework, since the role of camels in the economy of Pakistani marginal areas is still scarcely detailed (Faraz et al. 2020), the present study constitutes, to the knowledge of the authors, the first attempt to evaluate the socio-economic status of native pastoralists and extensive cost-effective camel farming in a Pakistan desert region as well as sketch a few recommendations. Study area The present research was carried out at Bhakkar district in the province of Punjab, Pakistan (31°33′ 39″ north latitude, 71°50′ 33″ east longitude). Most of the area lies in the plain of the Thal desert, and the climate ranges from arid to semi-arid subtropical conditions. 
The mean monthly highest temperature goes up to 45.6°C, while in winter, it varies from 5.5 to 1.3°C. The mean annual rainfall in the region ranges from 150 to 350 mm, increasing from south to north areas (Rahim et al. 2011). Quantitative sample A total of 200 camel pastoralists were selected using a purposive sampling technique. The variables registered were herd composition, mean age of animals, physiological status, milk yield, feeding regime, housing conditions, calves' birth weight, general management strategies, ethno-veterinary practices and different socio-economic conditions perceived by camel breeders as potential constraints affecting camel production. The field study was carried out in accordance with standard guidelines for ethno-veterinary investigation (Albuquerque et al. 2014), including ethnobiological and anthropological methods such as free listing, participant observation and interviews (Puri and Vogl 2004;Bernard and Gravlee 2014). The criteria described by the International Livestock Center for Africa (ILCA) was used to rank the major contributions of dromedary camels from herds involved in the study (ILCA 1990). Microsoft Excel was used for data compilation. Descriptive statistics (frequencies, percentages and average values) for the different variables registered were derived using SPSS software (SPSS 2008;Steel et al. 1997). Herd composition and productive parameters Herd size was quite variable in the pastoralist communities: most herdsmen (70%; 140) reared 2-3 adult animals and 2-3 calves while the remainder was in charge of 4-5 adult dromedaries and the same number of newborns (60% were females). Sex ratio within herds ranged between 1 and 2 males and 2 and 3 females. In terms of average useful life, she-camels are reared for at least 15 years of age whereas male camels are sold for some domestic needs and religious sacrificial purposes (Eid-ul-Adha) at a maximum age of 8 years. The markets are seasonally owned by district governments of a particular area. Most of the camels are slaughtered at religious festivals. However, one day of the week, camels are slaughtered at butcher's shops too. Market value for the camel milk and meat is rising, especially the milk sold in the peri-urban areas, while meat is sold on Fridays at butchers' shops. Various companies purchase milk from these herdsmen through middlemen to export powdered milk. Ordinarily, milk men purchase milk from remote areas of the desert and take it to nearby cities for sale. The number of females with progeny within each herd was about 80%, with most in the lactating stage of 6-18 months when carrying out the questionnaire. Seventy per cent of these fertile females had given birth on average four times. Calves' birth weight was found to be 36-50 and 33-39 kg in male and female calves of Marecha camels while 34-48 and 32-38 kg in male and female calves of Barela, respectively. The main income source for these indigenous communities is the sale of milk, and daily milk yield was found to be 4-8 kg in Marecha and 5-9 kg in Barela camels under extensive pastoral conditions (Table 1). Husbandry general practices Housing facilities and feeding regimes of dromedaries in the study area were explored in-depth. According to the respondents, about 35% of the camels were reared in completely open housing systems, while 65% were in semi-open facilities, with both types of housing properly cleaned and maintained by family members. 
The housing is an individual type, not communal; they are managed in an extensive system. Most of the camels are managed under the shady trees during the sunny days. The herders have made the semi-open sheds by using bamboos and sirki. The camels are mostly sent for grazing for 8-10 h daily and also fed gram and mung straw and household wastes (Table 1). All surveyed breeders confirmed that they provide water to these animals 2-3 times per day and give stomach powder and/or salts for proper functioning of their digestive system, apart from the grazing plant species that camels have access to during the day. The herders have their land segregated between the groups. The camels graze on the land of herders and in desert areas (jungle grazing as well), consuming crop residues, and browsing trees and shrubs of the desert. The Government has also allocated grazing land of the Forest department. People use ground water through drilled and water pumps; also, in some areas of the desert, the toba (stored rain water) system is available to be used for animals and for the community too. At the breeding season (November to March), cameleers mostly used village bulls for matings. Pastoralists allow males to mate 2-3 times and give extra flushing allowance to bulls in the rutting season. Poll gland secretions and Dulla protrusion were observed in bulls during the breeding season for a proper assessment and optimization of the reproductive status and performance of the animals. For newborn care, it was found that in about 20% of herds in the present study, calves had access to colostrum immediately after birth, however in the remaining herds the pastoralists waited until the placenta had been shed before allowing calves to suck the colostrum. In 90% of calves, restricted suckling was practiced, as they were allowed to suck two teats, mainly from the right side of she-camels, with the left teats used to obtain milk for domestic selfconsumption or to sell. Weaning age was found to be 12-16 months in most of the calves (about 70%). Deworming was only performed by 25% of herders, and calf mortality rates were about 20% (Table 1). Ethno-veterinary practices According to the general opinion and judgement of most herdsmen, their living status had improved in the last decade. They perceived an improvement in camel production rates, management practices, onset of organized farming and value chain effectiveness. Ethno-veterinary practices are still used by some herders for the treatment of complex diseases affecting their animals and are having wide economic impacts. Herders explained the complexity of such diseases in terms of the duration, the intricacy in treatment, morbidity and mortality rate and production losses. This last item not only involved the poor quality of derived products but also enhanced feeding and labour expenses until the animal is completely recovered. According to their experience, the most common diseases and health risks within extensive pastoralism farming in the study area were trypanosomiasis, sarcoptic mange, contagious skin necrosis/lymph node swelling (jhooling), camel pox and snake bite. A common disease trypanosomiasis (surra) badly affects the productive and reproductive life of camels, having symptoms like anorexia, fever, pale eyes, rough appearance and progressive emaciation. The disease is economically important as it diversely affects animal health and productivity. 
Pastoralists believe that flies are the major agents spreading the disease, a perception consistent with Jaji et al. (2017), who reported similar findings in their study of herd growth parameters and constraints of camel rearing in northeastern Nigeria. While the Pakistani pastoralists try to control the flies, their basic treatment strategy is to neutralize the 'blood poison' with bitter-tasting plants and to rouse the animal from its lethargy. The second most important disease is mange; progressive weakness predisposes the animal to it. Mange is also economically important because it impairs fertility; it is contagious and reduces draught ability, resulting in poor growth. Pastoralists believe it is spread by rats and distinguish two types, white and black. White mange is mild and confined to a limited area: the animal rubs its body against hard objects and the skin becomes thick and bald with whitish scabs. Black mange affects most of the body and causes baldness, with the skin turning reddish-black and muddy. Animals become emaciated as cracks appear on the body and blood oozes out. The cracks usually localize on the neck, where the bleeding attracts flies that cause infection and make the animals restless. Treatment consists of washing and rubbing the skin with sand, then washing and cleaning with laundry soap until the affected skin is red and clean. Trichlorfon powder mixed with used engine oil, taramira oil or chopra (phenyl oil + turpentine oil + Neguvon powder) is then applied to the skin. Contagious skin necrosis (jhooling) is another disease, mostly affecting young camels. Pastoralists believe the disease is beneficial for future health because the purulent fluid drains unidentified disease factors. Pustules form on the camel's body and resolve once the pus is discharged. Soft areas of the body such as the neck, shoulders and thighs are the main sites of attack. Lymph node swelling, fever, anorexia, emaciation and constipation are the major signs of the disease. Treatment includes hot compresses to promote the growth and maturation of the nodules, together with fly repellents and supportive therapy. Snake bite is another common problem in the desert area, and affected animals usually die; people use bitter plants to counteract the effects of the venom. The indigenous veterinary knowledge of the herders is largely based on the hot-and-cold philosophy of food. Soups made of various meats, eggs, cereals, pulses and chillies constitute 'hot' food, nutrients believed to keep the body active and energetic. Among other ethno-veterinary practices, pastoralists mostly apply cold compresses to treat fever and give castor oil orally to control indigestion. For impaction and enteritis they use butter with ispaghol, taramira oil, lassi and desi ghee, and they prepare various formulations of stomach powder from herbal and commercial ('English') ingredients. For mastitis control they mostly use chillies and pepper, while for camel pox they rely on hot food and hot bread to treat the nodules in the mouth; different hot soups are also used to aid recovery.
Socio-economic status
The livelihood of the majority of the pastoralists depended on grazing livestock on range vegetation.
According to respondents, in District Bhakkar (the largest desert area of Punjab), 48% of the population was illiterate while the majority of the literate persons had only primary education. Forty-two per cent of pastoralists were land owners while the rest were landless or land tenants. Most of the houses were made of mud plastering while the others were made of bricks. The area owned by the pastoralists varies from 1.95 to 4.63 ha per family. On average, 70% of family members of all the pastoralists were involved in open grazing as their major occupation. Livestock herd size varied between 8 and 121 animals. The majority of the pastoralists preferred to rear goats and sheep due to early maturity of these animals. Among camel breeds, Marecha is the most favorite and beautiful raised in that area, as an aesthetic preference for dancing and riding purpose. In contrast, the Barela is very famous for its milking potential. The sources of feeding for livestock during emergencies were wheat straw, gram straw and mung straw, concentrate mixture and cotton seed cake. Thirty per cent of all categories used veterinary facilities while 70% could not, due to more distance from the veterinary hospitals. Major sources of grazing for livestock were crop harvested areas while other minor sources were natural vegetation of road-side village wastes and along canals. Main constraints affecting camel production The second largest desert of Pakistan is the Thal desert, which is rich in indigenous livestock resources and located in District Bhakkar of Punjab province. The herders mainly raise camel, sheep and goats there. The use of these animals for meat and dairy purposes is still limited, due to many cultural and socio-economic factors. The major issues observed regarding intensifying the camel husbandry practices in the study area are discussed here. Camel husbandry has a strong attachment for the herders in the area, and it is interwoven with their socio-economic system and dryland farming. While camel products are a novelty and have yet to achieve preference over cow or buffalo milk and meat, there is a lack of information and guidelines regarding value addition of camel milk and meat products, while attractive market and value chain services regarding camel products are not available. People still consider the use of camel milk as taboo and have not developed a taste for it yet. They usually sell the milk by mixing with cow or buffalo milk. No doubt in urban areas the people are getting aware of the therapeutic worth of camel milk and meat and setting a trend regarding its consumption. The extension services should be provided to guide the pastoralists about the significance of camel products so they better can exploit the hidden gold of their camels. People in Pakistan raise camels mostly for riding, dancing and draught purposes, so the utility of their meat and dairy products as well as wool is minimal. Due to the lack of information on nutritional requirements, guidelines on formulation of camel feed ration and nutritional standards for growth, production and reproduction are immediately desired for improved husbandry and enhanced profit. Lack of advice regarding commercializing the camel husbandry and nutritional profile for rearing camels as meat and dairy animals has not yet been standardized. According to 70% of the respondents, the major issue in camel production is calf mortality, because they are born in harsh and hostile climatic conditions. 
The calf growing season is mainly May and June-which is the period of forage scarcity, so the cow camel cannot meet her own feed requirements. The feeding allowance for lactation is too small to achieve a better growth rate for calves in that season. In addition to this, poor extension and advisory services for farmer education, empowerment and entrepreneurship is a major hurdle faced by cameleers, which also has to be taken into account. Persistence with traditional husbandry practices, the lack of gender training and the main reliance on ethnoveterinary practices are also issues on the list. As the major livestock chores are met by the females, so there should be gender training in the area to educate the females equally with males to strengthen the camel husbandry practices. Local and mobile veterinary dispensaries should be established to treat the camels in remote areas so that the reliance on ethnoveterinary practices could be minimized. Productive parameters Observed milk production yields in the current study have supported the findings previously reported by Hussien (1989), Gedlu (1996), Kebebew and Baars (1998) and Tezera (1998) who found milk production values ranging from 4.5 to 7.5 l/day in Eastern African camels, in contrast to the findings of Zeleke and Bekele (2001) in Ethiopian camels (1.5-3.1 l/day). Similarly, Khan and Iqbal (2001) reviewed the production of various breeds of Pakistani camels in different production systems and reported range of 3.5-20 kg of milk per day. Recently, Raziq et al. (2010a) evaluated the milk production potential of Kohi dromedaries selected from pastoral herds in northeastern Balochistan and reported an average daily milk yield of 10.2 ± 0.4 kg/day ranging from 6.1 to 11.7 kg. The dromedary camel is a milkproducing animal, and its potential as a commercial dairy animal was evaluated in this study. The highest milk yield 3168 kg was demonstrated in the 5th parity (13.5 years), followed by 3051 kg in the 3rd parity (8.8 years) and 3010 kg in the 4th parity (11.5 years). The lowest milk yield was 1566 kg produced in the 1st parity (4.5 years). In the same context, Faraz and co-workers investigated milk production in Marecha she-camels under extensive conditions (Faraz et al. 2020) and Barela she-camels in traditional systems within the Thal desert (Faraz et al. 2018). Parity and age of the camels significantly affected the milk yield in all the studies, and vast potential exists as regards to milk production that needs to be explored through extensive genetic studies and intense selection on the basis of breeding values. During the current course of the study, it was observed that birth weight of camels significantly affected their productive potential. Birth weight data of dromedary calves from the database of one of the world's largest dairy herds Dubai, UAE, was evaluated by Bene et al. (2020). Based on the results of this study, they concluded that the birth weight of dromedary calves was more influenced by the dam's intrauterine rearing capacity and by the environment, management and feeding of the pregnant female camels than the hereditary growth potential. Considerable differences were found among male dromedaries in their breeding values for the birth weight trait. The birth weight of dromedary calves was the subject of interest in various former investigations reviewed by Tibary and Anouassi (1997). More recently, Bissa et al. (2000) also summarized the available literature but focused primarily on Indian camel breeds. 
According to their results, the birth weight of dromedary calves of the Bikaneri breed was 26-51 kg, while the average birth weights of male and female calves were 38.2 kg and 37.2 kg, respectively (Bhargava et al. 1965). Similarly, Wilson (1978) and Bissa et al. (2000) reported an average birth weight of 35-39 kg for dromedary calves, with variation attributable to genetic and environmental factors. In contrast, Ouda (1995) found the influence of sex on dromedary birth weight to be minimal, and in other studies no differences in body weight between sexes were observed up to 2 years of age (Ouda 1995) or up to 4 years of age (Simpkin 1985); this variability was not clearly pronounced in our study sample.
Socio-economic relevance
Despite being overlooked for centuries as a multipurpose animal, camels have fortunately gained recognition for their productive potential in the last decade (Faraz et al. 2019d). As a consequence, the majority of the herdsmen interviewed stated that their living status had improved, as it was connected with camel rearing and production. Most herdsmen keep she-camels, while male camels remain few in number. Camels are generally sold to middlemen (beoparies/traders), with the price depending on market demand and the general health status of the animal; however, herdsmen also take their animals to nearby livestock markets, where they can obtain higher prices. The main income sources of the cameleers were the sale of milk, meat and animals and, to a certain extent, draught power or crop cultivation. Newborn males were sent to slaughterhouses at an early age, except those selected as future breeding stock for the herd. In other cases, some male newborns are castrated, allowed to grow to 3-6 years of age and then sold for slaughter at the religious festival of Eid-ul-Adha. Notwithstanding, we encountered many camels with low milk production rates as a direct consequence of erroneous selection practices over decades, which have increased the pool of undesirable genes. However, even poor milkers, producing up to 5-6 l of milk per day, can still sustain whole families owing to the 'filling effect' (Faye and Esenov 2005). Elsewhere, it is well documented that camel husbandry makes a significant contribution to the national economy of Sudan (Zubeir et al. 2006). Finally, when uncertain, erratic rainfall causes crop failures, the effect on the economy of small, resource-poor farmers is drastic: the socio-economic and environmental conditions of the area do not allow these people to rely on crop production as their sole source of income. The herders therefore keep camels and other livestock species as security against crop failure and as a means of supplementing and saving income. Despite their small numbers compared with other animals, camels provide an important source of subsistence and income for the desert people of Pakistan. The camel's socio-economic value is widely recognized in these marginalized societies; although mechanization is eroding its role, the camel has remained an integral part of the nomadic ecosystem of the country (Faraz and Waheed 2017).
Ethno-veterinary practices
The herders in the study area had immense experience and deep indigenous knowledge of the prevalent diseases of camels and their treatment.
They were well acquainted with the symptoms and clinical signs of these pathologies and could properly identify which morbid process a camel was suffering from, or might suffer from in the future. Because people live deep in the desert, far from towns, cities and veterinary services, they have developed their own ways of treating the various diseases of camels, as has been reported for other places (Volpato et al. 2015). The herdsmen use various ethno-veterinary practices to deal with disease, many of which closely resemble those reported from other camel habitats around the world (Raziq et al. 2010b). Supportive treatments that promote healthy condition and keep the animal fit for normal performance are common in many societies (Grade et al. 2009). The healers regard ethno-veterinary practices as reliable, harmless, cheap, painless, readily available and easily applicable (Mertenat et al. 2020). However, ethno-veterinary medicine has its own strengths and weaknesses: not all ethno-veterinary practices provide ideal and effective remedies for animal health problems, any more than allopathic veterinary medicine does (Mathias and McCorkle 1989; Abbas et al. 2002; Lin et al. 2003).
Production constraints
Calf mortality is a major problem that slows herd growth in camel production systems, and it is mainly due to poor management and infectious diseases (Farah 2004). The underlying reason is the lack of veterinary care, with pastoralists relying mostly on ethno-veterinary practices and traditional treatment methods (Chafe et al. 2008), whereas access to veterinary services has been shown to reduce camel calf mortality considerably (Simpkin 1985). Coding the literature data, the major constraints on camel production are found to be education, water supply and veterinary services (Abdalatif et al. 2011), together with reliance on ethno-veterinary treatments (Jaji et al. 2017).
Conclusions and recommendations
The notion of the camel as the 'ship of the desert' and 'beast of burden' has given way to that of a food provider. The camel is thus a very useful desert animal whose productive potential could be harnessed more fully. Dromedaries are hardy, relatively resistant to many diseases and able to thrive on limited resources better than other domestic livestock species. In Pakistan's desert areas and arid and semi-arid rangelands they are used as an important food animal. The extensive camel husbandry system is still largely based on traditional practices and ethno-veterinary treatments and faces numerous production constraints that could be overcome by incorporating modern practices. Based on the results obtained, we conclude that there is an urgent need for extensive educational and training programmes and projects for pastoralists, with the intent of improving their management practices and refining their traditional knowledge. Ethno-veterinary practices should be preserved as indigenous knowledge, while the government should provide health cover and mobile veterinary dispensaries/clinics in desert areas. In addition, village cooperative societies incorporating local members should be developed, and regular social events should be organized through these societies to hear and discuss the pastoralists' concerns. To open further value-chain opportunities, herders should be provided with regular markets offering adequate facilities.
Artificial re-seeding of grasses, trees, herbs and shrubs at the proper time (the rainy season), along with rotational grazing, could be a value-adding initiative. Interest-free small loan facilities should be devised through the Agriculture Development Bank, Pakistan, on the recommendation of the cooperative societies, to facilitate organized camel farming.
Abbreviations
FAO: Food and Agriculture Organization; GOP: Government of Pakistan; ACO: Agricultural Census Organization; KPK: Khyber Pakhtunkhwa; ILCA: International Livestock Center for Africa; BCR: Benefit-cost ratio
Acknowledgements
The authors are grateful for the precious insights offered to strengthen the write-up of the current manuscript, although any errors are our own and should not tarnish the reputations of these esteemed persons. The authors would like to thank all respondents who took part in the survey and also acknowledge the technical support and assistance provided by the staff in charge of camels at the Camel Breeding and Research Station 'Rakh Mahni'.
Statement of animal rights
The procedures followed comply with the ethical standards of the Animal Welfare Committee of the University of Agriculture, Faisalabad.
Authors' contributions
All authors contributed equally to fulfilling the research objectives and presenting the data. Asim Faraz designed the protocols and conducted the field research, Muhammad Younas supervised the field work, Muhammad Shahid Nabeel gave technical and practical support during the field research, Abdul Waheed was in charge of data analysis, Nasir Ali Tauqir helped in writing the article and Carlos Iglesias Pastrana was responsible for the thorough proofreading of the paper before submission. The authors read and approved the final manuscript.
Funding
No funding source available.
Availability of data and materials
All data generated or analysed during this study are included in this published article.
Consent for publication
Open access.
FOREST MANAGEMENT SILVICULTURE
Ratios between aboveground net primary production, litterfall and carbon stocks in Scots pine stands (Russia)
Background: Forests are the main terrestrial regulators of greenhouse gas concentrations, yet estimates of carbon fluxes in them carry large uncertainties, so deriving predictors for their assessment is an urgent task. The aim was to assess carbon stocks in biomass and to characterize the intensity of aboveground net production and the amount of litterfall in Scots pine forests of different types in the north-east of Europe. We estimated the biomass and aboveground net primary production (ANPP) of the stands using sample trees. To evaluate the biomass and primary production of the ground-cover vegetation, we cut off all aboveground organs on areas of 625 cm² and removed the first-year parts. Litterfall was collected over 3-6 years using litter traps.
Results: Most of the carbon in the biomass of the pine forests is concentrated in the trees (C_stand), with stem wood playing the dominant role. In boggy forests, however, ground vegetation makes a substantial contribution to carbon stocks, in both absolute and relative terms. We estimated the carbon fluxes in ANPP and stand litterfall; needles contributed strongly to both flows. The ratio between ANPP and C_stand varied from 0.018 to 0.056, whereas the ratio between litterfall and C_stand ranged from 0.008 to 0.024.
Conclusion: Biomass, ANPP and litterfall depended on forest type. The ratios obtained between them can be used to assess carbon fluxes over large regions from remotely collected data on forest biomass.
INTRODUCTION
The ongoing climate change associated with rising greenhouse gas concentrations in the atmosphere has led to the adoption of a number of intergovernmental climate agreements, the most recent of which is the Paris Agreement under the United Nations Framework Convention on Climate Change. Upon ratification, the Russian Federation would commit to reducing greenhouse gas emissions by 2030 to 70-75% of the 1990 level, primarily by taking into account the absorptive capacity of its forests. The forest ecosystems of the Russian Federation, which comprise about 20% of the world's forest area, play a tremendous role in the carbon cycle of the biosphere (FAO, 2010). Studies of the forest carbon cycle in this territory, and of the impact of climate change upon it, therefore remain highly relevant (Schapoff et al., 2016). At present, estimates of carbon stocks in the biomass of Russian forests agree fairly closely, whereas carbon storage in soils, its temporal dynamics and the associated fluxes carry high uncertainties, mainly owing to a scarcity of experimental data (Schepaschenko et al., 2013; Thurner et al., 2014). Primary production is a key component of the forest carbon cycle (Pan et al., 2011). For large regions of the boreal zone, this parameter has been estimated from forest inventory data or by a 'semi-empirical' method for assessing NPP that combines the Richards-Chapman growth function with yield tables (Shvidenko et al., 2007). As a predictor of the net primary production of large forest areas, one can use data on stand biomass or growing stock, employing conversion factors between them (Keeling and Phillips, 2007). The process opposite to net primary production is litterfall.
According to Ortiz et al. (2013), the difficulty of forecasting soil carbon dynamics stems from uncertainties in predicting the mass of litterfall, which is the connecting link in the carbon cycle between biomass and soil (Smith et al., 2015). The amount of litterfall reaching the soil surface has been forecast from data on needle biomass (Ťupek et al., 2015) and needle net primary production (Lv et al., 2013; Park, 2015), radial growth of stem wood (Lehtonen et al., 2004) and weather conditions (Portillo-Estrada et al., 2013; Bhatti and Jassal, 2014), for which close correlations have been found. In addition, there are two indirect ways of assessing this flux: from the carbon lost through soil respiration and from the carbon tied up in net primary production (Matthews, 1997). In our opinion, using soil respiration data to calculate litterfall involves substantial uncertainties, because there is no unambiguous estimate of the share of autotrophic respiration in the total CO2 flux from the soil surface (Goncharova et al., 2019). Along the same line, Neumann et al. (2018) concluded that biomass data can give more reliable estimates of litterfall inflow than climatic parameters. An analysis of the literature shows that information on the relationships of net primary production and litterfall with stand biomass in boreal forests is clearly insufficient. We believe that such ratios can be used to derive conversion coefficients, by analogy with those used to assess forest biomass, which would allow carbon fluxes through net primary production and litterfall in forest ecosystems to be calculated from biomass data and would thus help to reduce uncertainties in carbon flux assessments for the boreal zone. Our study therefore has the following objectives: 1. to assess the carbon stocks in Scots pine forest biomass, the aboveground net primary production (ANPP) and the intensity of litterfall inflow in Scots pine forests of the Komi Republic (Russia); 2. to calculate the ratios of ANPP and litterfall to the carbon stocks in the aboveground organs of the stands.
Study area
The experiments were performed in the Komi Republic (a region of Russia) in the north-east of the East European Plain (Fig. 1). The study objects were located at the Chernam (N 62°01′ E 50°28′) and Lyal (N 62°17′ E 50°40′) forest ecological stations of the Institute of Biology of the Komi Scientific Centre of the Ural Branch of the Russian Academy of Sciences and in the buffer zone of the Pechora-Ilych Nature Reserve (N 62°49′ E 56°52′). The climate of the territory is temperate continental. Mean annual air temperature varies from +0.3 to +0.5°C (Chernam and Lyal stations); closer to the Ural Mountains it decreases to -0.8°C, while annual precipitation increases from 620 to 675 mm, most of which falls during the warm period of the year. The mean duration of the growing season is 141 days, and stable snow cover lasts 165-175 days.
Data collection
The work was carried out on permanent rectangular sample plots established in Scots pine forests of different growing conditions and ages. Plot size varied from 0.12 to 0.20 ha. Within each plot, the diameter at breast height of all living trees thicker than 6 cm and their total heights were measured. A brief description of the stands is given in Tab. 1.
The growing stock was calculated from regional tables on the basis of tree diameter and height. Forest types were assigned according to soil moisture classes: stands on over-wetted soil (Sphagnosa type), stands on soils with sufficient moisture (Myrtillus type) and stands on dry soils (Lichen type). The biological productivity of the stands was assessed using 5-16 sample trees selected at each site (Usoltsev, 2007; Repola, 2009). The trees had to be healthy, without visible defoliation, changes of the main growth axis, stem curvature, signs of decay, frost cracks or cavities. Sample trees were taken after the end of the active growth period and before leaf fall, usually in mid-August. Their selection was based on the diameter at breast height (DBH) distribution of all trees in the stand: one or two sample trees corresponded to the mean diameter of the stand, one was close to the largest-diameter trees, one to the smallest, and the remaining trees were taken at random from the range between the largest and smallest DBH. The aboveground part of each sample tree was partitioned into the following fractions: stem, branches, and twigs with needles. For this purpose the tree was cut into 2-m sections from the root collar and weighed. After the fresh mass was determined, samples were taken from each section: sample discs were cut from the stem to determine the wood/bark ratio, and twigs with needles were collected to determine the twig/needle ratio and the age distribution of the needles. All samples were also used to determine the water content of the biomass fractions. The roots were excavated from the soil and weighed, and samples for water content determination were collected. Further processing was carried out in the laboratory; samples were packed in plastic bags to preserve their water content. To determine the biomass of the ground-cover vegetation, all aboveground organs were cut off on 40-50 plots of 0.25 m × 0.25 m located 4-5 m apart. Plant samples for biomass analysis were collected in mid-August; sampling in late summer was aimed at capturing the maximum biomass accumulation of all species during the growing season (Woziwoda et al., 2014). Underground organs were collected with a corer of 98 cm² in 40-50 cores to a depth of 20 cm. To determine the production of shrubs in the Myrtillus and Lichen type pine forests, we randomly cut 10-50 shrub samples on PL1 and PM2, the number of samples depending on the frequency of occurrence of each species. The inflow of tree litterfall to the soil surface was assessed using 15-20 litter traps of 0.25 m² installed about 5 m apart at each site for 3 or more years (Portillo-Estrada et al., 2013). Litterfall was collected twice a year: after snow melt in mid-May and after leaf fall in mid-October. The observation periods differed between stands and are given in Tab. 4.
Cameral processing of data
In the laboratory, the stem samples were separated into bark and wood, and the twig-and-needle samples were divided into components with determination of the age of each fraction; their fresh mass was then weighed. The ground-vegetation samples were sorted into individual plant species. First-year shoots were cut from the sample shrubs to evaluate their increment. The litterfall samples were separated into the following fractions: needles, leaves, branches, bark and cones.
Litterfall fragments that were too small to be identified were placed in a separate fraction, plant remains (Portillo-Estrada et al., 2013), which consisted mainly of bud scales and small fragments of needles or bark. After cameral processing, all samples were dried to constant weight at 105°C and weighed; the water content of each fraction of the sample trees was then determined.
Data analysis
The root mass of trees in the Lichen and Sphagnosa type Scots pine forests was calculated from the ratio of aboveground to total tree mass (Mokany et al., 2006). Needle growth was assessed as the mean annual value from the contribution of the needle mass of the last four years to the total needle mass. The current growth of stem wood was determined as the mean over the past 5 years from stem discs cut at the root collar, at heights of 1 and 1.3 m, and then every 2 m up to the top. For this purpose, radial growth of the stem wood was analysed along four radii using the LINTAB tree-ring measurement station with TM 5 (RINNTECH®, Germany) and the TSAP-Win™ software for tree-ring measurement and analysis (RINNTECH®, Germany). From these data, the growth progression of each tree in height and diameter was reconstructed. Bark increment was taken to be equal to bark litterfall. Branch growth was determined from a middle branch selected from the middle of the crown of each tree, as the sum of the mean growth of this first-order branch and of all its second-order branches:
I = MbrI/A + Σ(MbrII/A), [1]
where I is the growth of the middle branch (mass units), MbrI is the mass of the first-order branch (mass units), MbrII is the mass of a second-order branch (mass units) and A is the age of the branch (years). Based on the sample-tree data, we derived equations relating the mass and increment of the separate fractions to diameter at breast height, of the form M = a×D^b, where M is the dry mass of the biomass or increment component (kg), D is DBH (cm), and a and b are constants (Tab. 2). The type of equation was chosen from analysis of the approximating curve.
Tab. 2 Equations (y = a×D^b) for the dependence of the biomass (upper line) and net primary production (lower line) of Pinus sylvestris tree fractions on diameter at breast height D, kg (p < 0.05).
The carbon content of biomass and litterfall was taken as 50% of the mass of each component (Payne et al., 2019). Aboveground net primary production was calculated as the sum of the increments of all aboveground tree organs (needles/leaves, stem wood, stem bark, branches) and of the ground-cover vegetation (shrubs, mosses, grasses and lichens). The ratios between aboveground net production or litterfall inflow and the carbon stocks in the stand biomass were calculated as
R = A / C, [2]
where R is the ratio of ANPP or litterfall to C_stand, A is ANPP or litterfall, and C is the carbon stock in the stand biomass. Basic descriptive statistics (mean, maximum and minimum) were used to describe the experimental data. The standard error of the estimate (SEE), which represents the mean distance of the observed values from the regression line, was calculated for the biomass and increment equations. Normality of distribution was tested with the Shapiro-Wilk test and homogeneity of variances with the Bartlett test. For normally distributed data, one-way ANOVA was applied to test differences between sites, with the Tukey HSD test for multiple pairwise comparisons of groups. For non-normally distributed data, the Kruskal-Wallis one-way analysis of variance was used to test the significance of differences between sites, followed by the Dunn post-hoc test. Statistical analyses were performed at the 95% significance level using Microsoft Excel 2010 and the R statistical programming environment, version 3.5.0 (R Core Team, 2018).
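To illustrate the two calculation steps described above, the sketch below fits the allometric form M = a×D^b by ordinary least squares on log-transformed data and then applies equation [2]. This is a minimal illustration only: the authors used Microsoft Excel and R for their processing, and every number in the sketch is an invented placeholder rather than a value from the study.

```python
import numpy as np

# Hypothetical sample-tree data: DBH (cm) and dry mass of one fraction (kg).
# These numbers are invented placeholders, not values from the study.
dbh = np.array([8.2, 11.5, 14.0, 17.3, 21.1, 24.6, 28.0])
mass = np.array([9.1, 22.4, 38.0, 66.5, 112.0, 168.0, 239.0])

# Fit M = a * D**b via linear regression on log-log axes: ln(M) = ln(a) + b*ln(D)
b, ln_a = np.polyfit(np.log(dbh), np.log(mass), 1)
a = np.exp(ln_a)
print(f"M = {a:.3f} * D^{b:.3f}")

# Standard error of the estimate (SEE) around the regression line (log scale).
resid = np.log(mass) - (ln_a + b * np.log(dbh))
see = np.sqrt(np.sum(resid**2) / (len(mass) - 2))
print(f"SEE (log scale): {see:.3f}")

# Equation [2]: R = A / C for one hypothetical stand.
anpp = 2.1        # Mg C ha-1 yr-1, placeholder
litterfall = 1.0  # Mg C ha-1 yr-1, placeholder
c_stand = 60.0    # Mg C ha-1, placeholder
print(f"ANPP/C_stand = {anpp / c_stand:.3f}, "
      f"Litterfall/C_stand = {litterfall / c_stand:.3f}")
```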
Carbon stocks in the biomass of pine forests
The results in Fig. 2 show a wide variation in the carbon stocks of the biomass (C_biomass) of the Scots pine forests. Living trees are the main pool of C_biomass, accounting for 89-99% of it. The main fraction, comprising more than half (53-62%) of C_biomass, is stem wood, whereas needles/leaves account for 3-5%, branches for 3-8%, stem bark for 4-8% and roots for 18-22%. According to the Kruskal-Wallis test, there were no significant differences between the sites in the proportions of needles and branches in C_biomass (p = 0.264 and p = 0.181, respectively), whereas the shares of stem wood, stem bark, roots and ground vegetation depended on the site (p < 0.05).
Aboveground net carbon production
The growth of aboveground organ biomass (ANPP) in the Scots pine forests varied from 1.29 to 2.91 Mg C ha⁻¹ per year (Fig. 3). The Kruskal-Wallis test showed that total ANPP differed between the studied sites (p = 0.000). The carbon tied up in ANPP comprised 2-6% of its stock in the aboveground organs. ANPP accumulation was most intense in the middle-aged PM1 and least intense in the young PS1. The contributions of the individual biomass fractions to ANPP varied widely and differed between the stands (p = 0.000): the share of needles ranged from 21 to 32%, stem wood from 26 to 53%, stem bark from 2 to 9%, branches from 10 to 27% and ground vegetation from 7 to 29%.
Fig. 3 Aboveground net primary production of the pine stands. The values within the bars show the share of each component; the values above the bars are total aboveground net primary production (mean ± standard error). ANPP values in columns sharing the same letter are not significantly different from each other.
More than half of the tree litterfall was formed by pine needles, except in pine forests PM2 and PS2. In general, the share of photosynthetic organs, represented by pine and spruce needles and birch and aspen leaves, varied from 60 to 72%, and their mass depended on forest type (ANOVA: F = 17.4; p = 0.000). The Tukey HSD test showed no differences in this mass between the stands of the Sphagnosa type (p > 0.05), or between them and PM1 (p > 0.05); there were also no disparities in the pairs PM2-PL1, PS3-PL2 and PS1-PM2 (nine pairs out of 21 in total). The contribution of bark to litterfall in the pine forests varied within 4-13% (here and below, the results of Kruskal-Wallis tests of between-site differences in fraction mass are given in brackets: p = 0.002), that of branches within 6-21% (p = 0.005) and that of cones within 5-10% (p = 0.502). We observed no significant differences in either branch or bark litterfall between the stand pairs PM2-PL1, PS1-PL2, PS1-PM1, PS3-PL2 and PS3-PS1; in addition, there were no disparities between PS2-PL1, PS2-PM2, PL1-PL2 and PS3-PM1 in branch litterfall, or between PM2-PM1, PM2-PL1 and PS1-PM2 in bark litterfall. In the studied pine forests, the mass of tree litterfall was 1.1-2.3 times smaller than ANPP and comprised 1.0-2.3% of the carbon stocks in the aboveground organs of the stands.
Tab. 4 Litterfall production in Scots pine forests, kg C/ha per year. ¹ The observation period is given in brackets; ² inter-annual mean ± standard error; ³ n.o. – not observed; ⁴ differences between sites in total aboveground litterfall and in the bark, branch and cone fractions were tested with the Kruskal-Wallis test (KWT) because the data were not normally distributed, whereas differences in the litterfall of photosynthetic organs (pine and spruce needles, birch leaves) were tested with ANOVA; ⁵ testing was not performed, or was performed only for the stands in which the fraction was detected.
Ratios between carbon stocks of the biomass, ANPP and litterfall
The data in Tab. 5 show that the ANPP/C_stand ratio varied from 0.018 to 0.056, with a relatively high value in PM1 and a low one in PL2. The smallest variation of this ratio between the studied stands was observed in the pine forests of the Sphagnosa type and the largest in those of the Myrtillus type. The Tukey HSD test detected no differences in this ratio between the Scots pine stands within the Lichen (p = 0.998) and Sphagnosa (p > 0.05) types, although according to ANOVA the studied pine forests as a whole contrasted with each other (p = 0.000). The ratio between litterfall and C_stand varied widely, from 0.008 to 0.024, with comparatively high values in the Scots pine forests of the Sphagnosa type (p = 0.000); the lowest values were found in the mature and overmature PS3 and PL2. In general, the carbon tied up in ANPP was 1.12-3.65 times greater than its inflow with litterfall; the highest values were noted in PM1, with its more intensive ANPP, and in PL2 and PS3, where the litterfall fluxes were low (p = 0.000).
Tab. 5 Ratio between ANPP, litterfall production and aboveground stand biomass in Scots pine forests.
Carbon stocks and ANPP
Comparison of our data with the literature shows that our results on carbon storage are close to those for an 85-year-old Myrtillus type Scots pine stand in Karelia (north-west Russia), where the biomass reaches 72.1 Mg C ha⁻¹ and the variation is 72.1-159.6 Mg C ha⁻¹ (Sin'kevich et al., 2009), with ground vegetation contributing 1.8-6.2%. The biomass of a 50-year-old Scots pine forest in south-western Sweden is 86 Mg C ha⁻¹ (Hanson et al., 2013), 1.4 times higher than in the studied stand PM1, which is similar to it in type and age. The carbon mass in a 75-year-old Vaccinium type pine forest in southern Finland is 78 Mg ha⁻¹ (Kolari et al., 2004), close to that of the PL1 stand of similar age, which is characterized by fairly high density; those authors report that the ground-cover vegetation accounts for 1.5-4% of the total biomass. The carbon in the biomass of Jack pine forests in Canada varies within 29-59 Mg ha⁻¹ (Bhatti and Jassal, 2014), 1.5 times lower than in our studied pine stands of similar age and forest type. As noted by A. Park (2015), carbon stocks in Jack pine stands ranged from 38.8 to 59.6 Mg ha⁻¹ at an age of 47-48 years and from 58.5 to 63.7 Mg ha⁻¹ at 58-62 years, which is comparable with our data for stands of similar age. Growing conditions have a direct effect on the mass of carbon in the ground vegetation. In the studied Sphagnosa type pine forests on waterlogged soils, its contribution to the total carbon stocks was higher in both relative (8-11%) and absolute (3.6-5.1 Mg ha⁻¹) terms than in the Lichen and Myrtillus type pine forests, where this contribution was 1.4-3.2% and 4.2-6.5%, respectively.
ANOVA showed that forest type (i.e., growing conditions) is a significant factor determining the input of ground vegetation to total biomass (p = 0.017). Similar data on the relatively high share of understorey in the total biomass of low-productive forests in Sweden have been reported elsewhere (Nilsson and Wardle, 2005). The ground vegetation proportions cited above for some Myrtillus and Lichen type pine stands are comparable with our data for the same forest types. Biomass is a direct result of primary production and site productivity (Keeling and Phillips, 2007). The accumulation of ANPP and its constituent components is influenced by a number of factors, including growing conditions (forest type, soil properties), the climate of the territory and stand characteristics (species composition, age) (Song et al., 2018). Relatively high rates of ANPP were observed in the more productive Myrtillus type Scots pine forests, while relatively low rates were found in those of the Sphagnosa type, especially in the tree layer. In the Lichen and Myrtillus type pine forests we noticed a trend of decreasing net production with increasing stand age, whereas in the mature PS3 net production is more intense than in the middle-aged PS2; this is probably due to the slower development of low-productive communities on waterlogged soils (Chen et al., 2002). As noted previously (Vanninen and Mäkelä, 2000), the distribution of increment among organs is determined by the priorities and needs of woody plants in carrying out their vital processes at different stages of development. For instance, the high proportion of needles/leaves in all the Scots pine forests is associated with the short life span and large litterfall of these organs, which require regular renewal, while the growth of branches allows the inter-crown space to be occupied under competition between trees for sunlight. In the Sphagnosa type stands, which have a smaller basal area, we observed relatively high inputs of branch increment, except in PS2. The significant contribution of stem wood in the old PL2 is explained by the fact that the trees of this stand differ in age, varying from 56 to 370 years, and the wood is accumulated mainly by the younger trees. A previous study (Nilsson and Wardle, 2005) showed that in low-productive communities the contribution of ground vegetation to total ANPP is higher than in more productive ones, and similar results were observed in the studied low-productive pine forests of the Sphagnosa type. We attribute this to the smaller total basal area (by 1.1-2.1 times) and tree crown biomass (by 1.2-2.9 times) of these forests, which provide more favourable light conditions for the plants of the lower layer and lead to relatively high biomass and ANPP values. Similar conclusions about the role of light transmission were reached by Ares et al. (2010), who studied the influence of thinning on understorey diversity in coniferous forests of Oregon, and by Gonzales et al. (2013), who investigated the contribution of ground vegetation to total ecosystem biomass in pine forests of France.
Carbon inflow from plant litterfall
Our data show that the carbon inflow from tree litterfall is slightly lower than, or comparable to, that of pine forests in Finland (Ukonmaanaho et al., 2008; Portillo-Estrada et al., 2013) and approximately equal to that of Jack pine forests in Canada (Bhatti and Jassal, 2014).
The authors of the articles cited above reported that, owing to their short lifespan, needles and leaves play the dominant role in the total aboveground litterfall of boreal forests, and we observed a similar tendency in the studied Scots pine stands in the north-east of the European Plain. For these fractions we found low to medium inter-annual variation: 5-23% for pine needles and 8-25% for birch leaves. The high variation for spruce needles is explained by the minor share of spruce trees in the pine forests. As noted by Bhatti and Jassal (2014), most published work on the structure of tree litterfall reports only leaf litterfall, while the role of the other organs (branches, cones, bark, etc.), which make a significant contribution, is poorly understood. In the studied Scots pine forests this part varied from 28 to 40%, with a substantial input from branches. Pine bark had the lowest inter-annual variation among these fractions (10-31%), whereas the litterfall of pine cones had the highest variation (15-105%), explained by irregular cone production. Weather conditions (strong wind, snow load) influence branch litterfall in boreal forests (Portillo-Estrada et al., 2013; Bhatti and Jassal, 2014), which explains the intensive branch litterfall we observed in winter. We suggest that, in addition to weather conditions, the contribution of branches to litterfall may depend on stand density: in windy conditions the branches of the same tree and of neighbouring trees come into contact more often, which can increase the number of broken branches. For instance, in the relatively dense stands PL1 (2,533 trees ha⁻¹), PM1 (1,730 trees ha⁻¹) and PS2 (2,040 trees ha⁻¹), the proportion of branches in tree litterfall was 25, 15 and 17%, respectively. This was especially evident in winter, when the crowns carry an additional load of snow. In the less dense stands PS3 (1,210 trees ha⁻¹) and PL2 (908 trees ha⁻¹), branches accounted for 8 and 7% of the total litterfall mass, respectively. Similar findings were reported by Lehtonen et al. (2004), according to whom potential branch litterfall was higher in relatively dense, young stands with small stem diameters. Besides weather conditions, stand age may be an important factor (Liu et al., 2019), determining the proportion of branches as well as the total mass. In the studied Scots pine forests we observed both increases and decreases in litterfall mass with stand age, but no increase in the proportion of branches.
Ratios between carbon stocks of the biomass, ANPP and litterfall
At present, the methods for calculating carbon stocks in the tree layer of forests (C_stand), which is also the long-term carbon pool, are the better developed ones; it is therefore sensible to base the calculations on these data. As noted above, stand age and the soil conditions that determine forest type are the leading factors influencing carbon fluxes in forest ecosystems. Relatively similar ANPP/C_stand ratios, practically unchanged with age, were observed for the Scots pine forests of the Sphagnosa type. A comparatively small ANPP/C_stand ratio was calculated for the Lichen type, 1.9 times lower than in the Sphagnosa type and 1.6-2.5 times lower than in the Myrtillus type. The highest ANPP/C_stand ratio was observed for the middle-aged PM1, which is characterized by intensive net primary production at this stage of development.
The rate of ANPP decreased by a factor of 1.6 as the stand matured, which lowered the ANPP/C_stand ratio in PM2 by a factor of 2.0. We observed similar trends in ANPP and in the ANPP/C_stand ratio in the Scots pine stands of the Lichen type, although less pronounced. This tendency is explained by the lower biomass and high rate of primary production in the early stages of pine stand development. As reported by Kolari et al. (2004) for southern Finland, the carbon stock in the biomass of a 75-year-old Scots pine stand is 20.9 and 1.3 times higher than in 12-year-old and 40-year-old stands, respectively, yet its intensity of CO2 uptake is comparable to that of the 12-year-old stand and 1.1 times lower than that of the 40-year-old stand. As noted by Bhatti and Jassal (2014) for the Thompson site (Canada), the biomass of a 98-year-old Jack pine stand is 10% higher than that of a 76-year-old stand, but the NPP rate of the younger forest is slightly higher. The Litterfall/C_stand ratio varied widely, the minimum value being three times smaller than the maximum. Relatively high Litterfall/C_stand ratios characterize the pine forests of the Sphagnosa type, which is explained by their small stand biomass combined with a litterfall flux comparable to that of the other studied stands. By contrast, for the Lichen type we observed comparatively low Litterfall/C_stand values, owing to the large carbon stocks in the trees and the smaller litterfall. In the Sphagnosa and Lichen type pine forests this ratio decreased with increasing stand age by factors of 1.7 and 2.4, respectively, whereas in the Myrtillus type the opposite occurred. Thus, the increase of carbon stocks in tree biomass with stand development negatively affected both the ANPP/C_stand and the Litterfall/C_stand ratios. These data fit the general laws of stand formation in the boreal zone, whereby the intensity of ANPP and litterfall declines with increasing age (Kazimirov et al., 1977; Chen et al., 2002; Zha et al., 2013; Payne et al., 2019). Site conditions also have a positive influence on the investigated ratios. The lowest mean ANPP/C_stand and Litterfall/C_stand ratios were observed for the Lichen type, which develops on the poorest sandy soils with a soil moisture deficit, whereas the maximum mean ANPP/C_stand and Litterfall/C_stand ratios were detected for the Myrtillus and Sphagnosa types, respectively. Below we compare the investigated ratios with those of boreal pine forests in several other regions, calculated from literature sources that report both the carbon stocks in biomass and the carbon fluxes in ANPP and litterfall for the self-same sites. Our data are slightly higher than the ANPP/C_stand ratio calculated from the data of Bhatti and Jassal (2014) for Canadian Jack pine forests, which varies from 0.013 to 0.022, but are comparable in terms of Litterfall/C_stand (variation from 0.010 to 0.018). In the Myrtillus and Vaccinium type Scots pine forests of Karelia, the ANPP/C_biomass ratio varies from 0.018 to 0.062 and the Litterfall/C_biomass ratio from 0.018 to 0.036 (Sin'kevich et al., 2009). For Scots pine forests in southern Sweden the Litterfall/C_stand ratio was 0.023 (Hanson et al., 2013), and in Finland it was 0.020 (Ukonmaanaho et al., 2008). These literature data show that the analysed ratios vary within a fairly narrow range and can therefore be used for assessing carbon fluxes over large regions using remotely collected data on forest biomass.
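As a purely illustrative example of such an application (with hypothetical numbers, not taken from this study or from the cited sources), a remotely sensed estimate of stand carbon of C_stand = 50 Mg C ha⁻¹ for a Myrtillus type pine forest, combined with type-specific ratios of R_ANPP ≈ 0.03 and R_Litterfall ≈ 0.015, would give ANPP ≈ 0.03 × 50 = 1.5 Mg C ha⁻¹ yr⁻¹ and a litterfall inflow of ≈ 0.015 × 50 = 0.75 Mg C ha⁻¹ yr⁻¹, following equation [2] rearranged as A = R × C.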
CONCLUSION
Our study provides data on the carbon stocks in biomass and on the fluxes through ANPP and litterfall in Scots pine forests of different types in the Komi Republic (north-east of the European part of Russia). Total carbon stocks varied from 38.5 to 75.1 Mg ha⁻¹, with relatively low storage in the pine forests of the Sphagnosa type. We found that the trees concentrate most of the biomass carbon and that the main carbon pool is stem wood, whose share reaches 57-62%. Ground vegetation had high absolute (3.6-5.1 Mg ha⁻¹) and relative (8-11%) values in the Scots pine stands on over-wetted soils compared with the forests of the Myrtillus and Lichen types. ANPP accounted for 2-5% of the carbon stock in the biomass. Despite their small biomass, needles and branches had a large share in ANPP, which is required for the renewal of organs with a short life span and for occupying the surrounding space. The role of ground vegetation in aboveground net production was significant, amounting to 7-29%, with higher values in the stands on waterlogged soils. The binding of carbon through aboveground net production exceeded the losses with litterfall by 1.12-3.65 times. The highest rate of litterfall was observed in the maturing pine forests of the Myrtillus and Lichen types. Needles and leaves formed 62-72% of the annual litterfall. Using the data obtained, we calculated the ratio between ANPP and C_stand, which varied from 0.018 to 0.056; the ratio between litterfall and C_stand was lower and ranged from 0.008 to 0.024. All the studied components of the carbon cycle depend on forest type.
ACKNOWLEDGEMENT
The authors would like to thank Michail Kuznetsov and Svetlana Naimyshina from the Institute of Biology of the Komi Scientific Centre of the Ural Branch RAS for their help in collecting and processing the experimental data. The article was prepared within the research project "Spatial and temporal dynamics of structure and productivity of forest and mire ecosystems at the Northeast European Russia", registration number AAAA-A17-117122090014-8, and was partly supported by the Complex Programme of the Ural Branch RAS (project 18-4-4-29).
Radiocarbon dating of two old African baobabs from India
The article presents the radiocarbon investigation of the baobab of Jhunsi, Allahabad, and the Parijaat tree at Kintoor, two old African baobabs from northern India. Several wood samples extracted from these baobabs were analysed by AMS radiocarbon dating. The radiocarbon dates of the oldest samples were 779 ± 41 BP for the baobab of Jhunsi and 793 ± 37 BP for the baobab of Kintoor. The corresponding calibrated ages are 770 ± 25 and 775 ± 25 calendar years. These values indicate that both trees are around 800 years old, making them the oldest dated African baobabs outside Africa.
Introduction
The African baobab (Adansonia digitata L.) belongs to the Bombacoideae subfamily of Malvaceae and is the best known of the eight or nine species of the genus Adansonia [1-4]. The African baobab is endemic to the tropical arid savanna of the African continent between the latitudes 16˚N and 26˚S. Its current distribution throughout the tropics covers several African islands and different areas outside mainland Africa, where it has been deliberately introduced [1,2,5,6]. An extensive research project was started in 2005 by Patrut et al. in order to clarify several poorly understood aspects of the morphology, development and age of the African baobab. This research relies on a novel methodology which is not restricted to demised individuals but also allows the investigation and dating of live standing specimens. The original approach described by Patrut et al. is based on AMS (accelerator mass spectrometry) radiocarbon dating of tiny segments extracted from wood samples collected from inner cavities, deep incisions/entrances in the trunk, fractured stems and the outer part/exterior of large baobabs [7-14]. Baobabs usually start growing as single-stemmed individuals and develop into multi-stemmed trees over time, owing to their ability to generate new stems periodically. Because of this special characteristic, old baobabs typically exhibit very complex and unexpected architectures [10,13]. Our research has therefore focused mainly on the study of so-called superlative individuals, i.e., very big and/or old baobabs. Traditional dendrochronological methods cannot be applied to determine accurately the architectures and ages of such superlative trees [10]. The oldest dated African baobabs were found to be over 2000 yr old [10-14]; by these values, the African baobab is the longest-living angiosperm.
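The conventional radiocarbon ages reported above (for example 779 ± 41 BP) are converted into calibrated calendar ages by comparison with a radiocarbon calibration curve. The sketch below illustrates the principle only; it is not the calibration software used by the authors, the calibration "curve" in it is an invented toy table, and a real analysis would use a published curve such as IntCal supplied as a (cal BP, ¹⁴C age, 1σ) table.

```python
import numpy as np

def calibrate(c14_age, c14_err, curve):
    """Very simplified calibration of a conventional 14C age (BP).

    `curve` is an (N, 3) array of [cal BP, 14C age BP, 1-sigma] rows,
    e.g. read from an IntCal-style table (assumed input, not bundled here).
    Returns calendar ages (cal BP) and their normalised probabilities.
    """
    cal_bp, curve_age, curve_err = curve[:, 0], curve[:, 1], curve[:, 2]
    sigma2 = c14_err**2 + curve_err**2
    # Gaussian likelihood of the measurement at each point of the curve.
    prob = np.exp(-(c14_age - curve_age) ** 2 / (2 * sigma2)) / np.sqrt(sigma2)
    prob /= prob.sum()
    return cal_bp, prob

# Toy, locally linear "curve" for illustration only (real curves are wiggly).
cal = np.arange(600, 901)                        # cal BP grid
toy_curve = np.column_stack([cal, cal - 20.0, np.full(cal.size, 10.0)])

cal_bp, p = calibrate(779, 41, toy_curve)
mean_cal = np.sum(cal_bp * p)
print(f"weighted mean calendar age ~ {mean_cal:.0f} cal BP")
```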
Several studies have also speculated that baobab pods may have floated across from Africa on sea currents, reaching the Indian shores [15][16][17][18][19]. Recent genetic research indicates multiple introduction episodes from various areas of Africa to India, supposedly dating back to prehistoric times [20]. The genetic analysis offered novel and very interesting results. First, it showed that the Indian baobabs are the same species as the African species Adansonia digitata. Second, there is less genetic diversity in the Indian baobab populations as compared to the African populations. This confirms the supposition that the baobabs have not been in India long enough for the populations to diversify; thus, their dispersal by ocean currents is less likely than their introduction by humans. The latter discovery indicates multiple introductions of baobabs to India and demonstrates that they have several biogeographical origins in Africa. Many Indian baobabs have a phylogenetic relationship with baobab populations from coastal Kenya and Tanzania, while some of them show a closer relationship with baobabs from coastal and inland Mozambique and also from West Africa [20,21]. By combining the genetic findings with archaeological and historical accounts of the Indian Ocean trade, Bell and Rangan identified four major periods during which Africans from different regions would have travelled to India. The earliest interactions and transportation of the African baobab to India go back more than 4,000 years, when cereals and legumes arrived in India from Sudan, Ethiopia and the Horn of Africa. The second interaction between Africa and India occurred from the 10th to the 17th centuries via the Swahili-Arab trade networks. The third major interaction occurred in the 17th and 19th centuries owing to the Portuguese, who established their colonial bases in Mozambique and western and southern India. Finally, the fourth interaction, associated with the last introduction of the African baobab to India, occurred over the 18th and 19th centuries, when English and Dutch colonial authorities recruited soldiers from West Africa for regiments in southern India. Thus, one can state that the geographical distribution of baobabs in India is associated with the long history of the African diaspora across the Indian Ocean [20,21]. Today India hosts several large and relatively old baobabs. Recently, we investigated and dated by radiocarbon the biggest baobab outside Africa, which is located at Golconda Fort, Hyderabad, India. We found that the oldest part of this very large tree, with a girth of 25.48 m, is around 475 yr old [22]. Here we present the radiocarbon investigation results of two sacred African baobabs from India, namely the baobab of Jhunsi, Allahabad and the Parijaat tree at Kintoor. Ethical statement The on-site investigation and sampling of the baobabs was authorised by the National Biodiversity Authority of India (NBA/9/2269/18/18-19). The Botanical Survey of India, Central Regional Centre of Allahabad, Uttar Pradesh, provided scientific assistance in this investigation. After each coring, the increment borer was cleaned and disinfected with methyl alcohol. The small coring holes were sealed with Steriseal (Efekto), a special polymer sealing product, to prevent any possible infection of the trees. The co-author performing the sampling in Fig 6 has given written informed consent (as outlined in the PLOS consent form) to publish the photograph. 
The two baobabs and their areas The two trees are located in northern India, in the state of Uttar Pradesh. Both have an unusual shape and structure, which is not typical of the African baobab. The baobab of Jhunsi, Allahabad. The historic tree is situated in the village of Jhunsi (or Jhusi), a suburb of the city of Allahabad (officially known as Prayagraj), in the Allahabad district. Radiocarbon dating relates Jhunsi to an ancient archaeological site on the left bank of the Ganges at Prayag, known under the names Andhernagri and Pratishthanpur. The history of the Jhunsi area begins from 7,100 BCE (i.e., before common era or BC). Its past is documented by archaeological evidence corresponding to five cultural phases, ranging from the Chalcolithic to the Early Medieval Period. Epigraphical remains discovered here from the Maurya, Sunga, Kushana, Gupta and Medieval periods testify that Jhunsi reached an advanced degree of civilization at an early date. Pratishthan (Jhunsi) is also described in the Kurma Purana and Matsya Purana as the Prayagamandala [23,24]. The baobab grows on a huge mound of mud on the left bank of the holy river Ganges (Ganga), very close to Triveni Sangam, i.e., the confluence of three sacred rivers: the Ganges, the Yamuna (Jamuna) and the mythical Saraswati. The sacred tree, which belongs to the Muslim community, is positioned only 5 m from the "maazar" (tomb) of the Sufi Saint Saiyid Shah Sadrul Haq Taqiuddin (1320-1384), popularly called Baba Shaik Taqi [23,24]. It is said that Shaik Taqi had placed his "datun" (a twig used as a toothbrush) upside down on the ground, which later developed into this tree. The tree was called "vilaiti imli" and its species was not identified until 1978, when it was identified as an African baobab by Varmah and Vaid. They also measured its girth at between 18 and 20 m and mentioned that the tree is said to be 500 years old [23]. Recently, the massive baobab was investigated again by Singh and Garg, who suggested that its age could be between 750 and 1350 years [24]. The GPS coordinates are 25˚25.431' N, 081˚53.981' E and the altitude is 99 m. The average annual rainfall in the area is 981 mm (Allahabad station) and the mean temperature reaches 25.7˚C, with 5-10 frosty days per year (Fig 1). The massive baobab grows on a slope, which tends to widen due to continued erosion by floods of the Ganges and by water flushing down from adjacent higher areas. The tree was severely damaged during the Kumbh Mela event in 2013, when its eastern side was accidentally set on fire by pilgrims who camped close to the baobab (Fig 2). The maximum height of the tree is h = 14.0 m. The trunk, with the shape of a haystack, forks at heights of 5.5-7.1 m into 6 primary branches, out of which 3 develop horizontally; at least 4 primary branches are missing. The primary branches have diameters of 0.6-1.2 m. The current circumference of the trunk (at 1.30 m above mean ground level) is cbh = 18.27 m, with two severely damaged and broken stems. We estimate the restored girth, before the fire of 2013, at cbh = 21.20 m. According to our visual inspection, the baobab exhibits a cluster structure and consists of 7 fused stems, out of which 6 are old. The horizontal dimensions of the large hemispherical canopy are 35.9 (NS) x 24.1 (WE) m. The total wood volume of the tree is V = 130 m³. The baobab still produces dozens of pods. The baobab has many hollow parts in the stems and branches. Two relatively large irregular cavities can be observed inside the two damaged stems. 
It also has 3 big roots, partially exposed due to erosion, which spread along the ground up to 30 m. Some of the bark has been extracted, stripped, sliced and peeled off by locals for medicinal use, especially for treating malaria, colds, flu and diarrhea. These actions exposed the soft tissues to fungal and bacterial attacks. At the beginning of 2019, the area around the sacred baobab was filled with mud and sand and a concrete boundary wall was erected around the tree to prevent further erosion. The concrete wall has many openings to drain off excess water (Fig 3). The Parijaat tree at Kintoor. The Parijaat tree at Kintoor, which is worshipped by the Hindu community, has an ancient background. There are many legends around this tree, which might be the most sacred tree of India. The Parijaat (or Parijat) tree is considered to be a "Kalpavriksha" or wish-bearing tree, which is only found in heaven. It is claimed that this tree, located near a temple established by Kunti, grows from Kunti's ashes. Kunti was the mother of the three Pandava brothers, including Arjuna. Arjuna is the main hero of the Mahabharata and plays a central role in the Bhagavad Gita. According to another saying, Arjuna brought this tree from heaven and his mother Kunti used to crown Lord Shiva with its flowers. The most widespread legend in the area states that Lord Krishna himself brought this tree from heaven, more than 5,000 years ago, for his beloved queen Rukmini. For a long time, this tree was considered to be the only one of its kind. However, in 1971, Maheshwari identified the Parijaat tree as an African baobab. He measured its girth at 13 m and estimated its age to be anything from 600 to 5000 years [15]. The Parijaat tree is located 8 km south of the village of Kintoor (or Kintur) and 38 km north-east of the city of Barabanki, in the Barabanki district. The GPS coordinates are 27˚00.206' N, 081˚28.922' E and the altitude is 106 m. The average annual rainfall in the area is 941 mm (Barabanki station) and the mean temperature reaches 25.6˚C, with around 5 frosty days per year. The tree is situated in a closed complex, which is a place of much religious significance for the locals. The baobab is surrounded by two metal fences. There is a small temple dedicated to Krishna at its base. By order of the district magistrate, the temple was fenced off and is not accessible any more (Fig 4). The maximum height of this tree is h = 13.7 m. The short trunk is of conical shape. It forks at heights of 4.0-5.0 m into 7 very large primary branches, some of them broken, with diameters of 1.2-2.0 m. At least 3 primary branches are missing. The branch sizes are disproportionately large in comparison with the relatively modest trunk size (Fig 5). The circumference of the trunk is cbh = 13.09 m, with a partially broken stem. We estimate the restored girth, before this stem split, at cbh = 14.10 m. The Parijaat tree also exhibits a cluster structure and consists of 4 or 5 perfectly fused stems. The horizontal dimensions of the very impressive canopy are 33.4 (NS) x 31.6 (WE) m. The overall wood volume is 95 m³, of which 50 m³ belongs to the trunk and 45 m³ to the canopy. According to locals, the tree has not produced pods for at least the past century. Sample collection As agreed with the local religious leaders, one sample was collected with the increment borer (Haglöf; 0.80 m long, 0.010 m inner diameter) from one presumptively old stem of each baobab. These two samples were labelled JA-1 and PK-1 (Fig 6). 
Several tiny segments, each 10⁻³ m (1 mm) long (named a, b, c), were extracted from predetermined positions/distances along every sample. Additionally, we collected with a sharp instrument three tiny samples from one severely damaged stem of each baobab. These additional samples were labelled JA-2, JA-3, JA-4, PK-2, PK-3 and PK-4. Sample preparation The α-cellulose pretreatment method was used for removing soluble and mobile organic components [25]. The resulting samples were combusted to CO₂, which was then reduced to graphite on an iron catalyst [26,27]. The resulting graphite samples were analysed by AMS. AMS measurements The AMS radiocarbon measurements were done at the AMS Facility of the iThemba LABS, Johannesburg, Gauteng, South Africa, using the 6 MV Tandem AMS system [28]. Calibration Radiocarbon dates were calibrated and converted into calendar ages with OxCal v4.3 for Windows [29], by using the IntCal13 atmospheric data set [30]. Water content The water content of one stem of each baobab was determined by dehydration of wood segments (depth 0.20-0.30 m) for 72 h, at 120˚C, under ambient atmosphere. Radiocarbon dates and calibrated ages Radiocarbon dates, expressed in radiocarbon years BP (before present, i.e., before the reference year 1950), and calibrated ages (expressed in calendar years CE, i.e., common era) of 11 sample segments are listed in Table 1. The 1σ probability distribution (68.2%) was typically selected to derive calibrated age ranges. For three sample segments (JA-4, PK-1b, PK-2), the 1σ distribution is consistent with one range of calendar years. For two segments (PK-1a, PK-4), the 1σ distribution corresponds to several ranges. In these cases, the confidence interval of one range is considerably greater than that of the others; therefore, it was selected as the cal CE range of the sample for the purpose of this discussion. For three sample segments with lower positive radiocarbon dates (JA-1a, JA-1b, JA-1c), the 1σ distribution corresponds to several ranges which all have very low probabilities. In these cases, we used for calibration the higher 2σ probability distribution (95.4%), which corresponds to two or three age ranges. We used the same approach for selecting the cal CE range of each sample segment, with one exception. This exception is the segment JA-1b, for which we selected the range with the second highest probability, which agrees better with the age sequence along the sample JA-1. In all cases, the selected age range is marked in bold in Table 1. For three sample segments (JA-2, JA-3, PK-3), ages fall after 1950 CE (0 BP), namely the ¹⁴C activity, expressed by the ¹⁴C/¹²C ratio, shows higher values than the standard activity registered in the reference year 1950. These results correspond to negative radiocarbon dates and are termed greater than Modern (>Modern). Such cases indicate a very young age of the dated wood, which was formed after 1950 CE. Sample ages and errors Sample ages represent the difference between the year 2019 CE and the mean value of the selected age range (marked in bold). The sample ages and the corresponding errors, which are expressed in calendar years, were rounded to the nearest 5 years. We used this approach for selecting calibrated age ranges and single values for sample ages in all our previous articles on AMS radiocarbon dating of large and old angiosperm trees, especially of baobabs [7-11,13,14,22]. 
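The age and error arithmetic described here (and the water-content determination above) is simple; a minimal Python sketch is given below. The reference year 2019 and the 5-year rounding follow the text, while the treatment of the error as half the width of the selected range, the wet-basis definition of water content and all numeric inputs are illustrative assumptions rather than values from Table 1.

```python
def round_to_5(x):
    """Round to the nearest multiple of 5 years."""
    return int(5 * round(x / 5.0))

def sample_age(cal_start_ce, cal_end_ce, reference_year=2019):
    """Sample age and error from the selected calibrated range (calendar years CE).
    Age = reference year minus the midpoint of the range; the error is taken as half
    the range width (an assumption about how the +/- values are obtained)."""
    midpoint = (cal_start_ce + cal_end_ce) / 2.0
    return round_to_5(reference_year - midpoint), round_to_5((cal_end_ce - cal_start_ce) / 2.0)

def water_content_percent(wet_mass_g, dry_mass_g):
    """Water content after oven drying, expressed as a percentage of the wet mass
    (wet-basis definition, assumed here)."""
    return 100.0 * (wet_mass_g - dry_mass_g) / wet_mass_g

# Illustrative values only, not taken from Table 1:
print(sample_age(1225, 1275))                         # -> (770, 25)
print(round(water_content_percent(10.0, 5.48), 1))    # -> 45.2
```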
Dating results of samples The two samples collected with the increment borer were relatively short, i.e., 0.31 m for JA-1 and 0.40 m for PK-1, even though the penetration of the borer was almost complete. This reveals that both trees have large hollow parts in the corresponding stems. The radiocarbon date of the deepest segment was 188 ± 30 BP for JA-1c and 507 ± 29 BP for PK-1b. These values correspond to calibrated ages of 250 ± 45 and 595 ± 10 yr. For the baobab of Jhunsi, Allahabad, the oldest dated sample JA-4 was collected close to the pith of a stem severely damaged by the fire of 2013. The radiocarbon date of 779 ± 41 BP corresponds to a calibrated age of 770 ± 25 yr. In the case of the Parijaat tree at Kintoor, the oldest dated samples PK-2 and PK-4 were also collected close to the pith of a stem, which broke off several decades ago. Their radiocarbon dates of 793 ± 37 BP and 693 ± 36 BP correspond to calibrated ages of 775 ± 25 and 735 ± 15 cal yr. The very young ages of three samples (JA-2, JA-3, PK-3), with negative radiocarbon dates, collected from areas close to those of the oldest samples, can be explained by the presence of young regrowth wood produced for healing just after the major damage suffered by the respective stems [8]. Trees/stems ages The radiocarbon dates and calibrated ages of the oldest dated samples (JA-4, PK-2, PK-4) suggest ages close to 800 yr for the stems from which they originate. Because both baobabs have a cluster structure and each consists of several fused stems with close ages, we consider that the age of the baobab of Jhunsi, Allahabad, as well as the age of the Parijaat tree at Kintoor, is between 750 and 850 yr, namely 800 ± 50 yr, which is considerably older than we expected. One can state that both baobabs started growing around the year 1200 CE. By these values, both trees become not only the oldest African baobabs from India, but also the oldest baobabs outside Africa with accurate dating results. The former record holder was the Big Baobab of Mannar Town (Sri Lanka), which is around 750 yr old [31]. Water content of stems Baobab wood usually has a high water content, up to 79%. Big baobabs stay upright mainly due to the weight of their stems. When this weight drops to a critical level due to water loss, the stability of the baobab is affected [14,32]. We measured the water content of the stems which were sampled with the increment borer. We found extremely low values, more precisely 45.2% for the baobab of Jhunsi, Allahabad and only 39.7% for the Parijaat tree at Kintoor. These low values indicate that both baobabs are close to the end of their life cycle and may topple in the near future. Conclusion The research reports the results of the AMS radiocarbon investigation of two sacred African baobabs from the state of Uttar Pradesh, in northern India. The two investigated trees are the baobab of Jhunsi, Allahabad and the Parijaat tree at Kintoor. Both baobabs have a cluster structure and consist of several fused stems. Several wood samples were collected from the exterior of stems, as well as from deep areas of severely damaged stems. The oldest samples have radiocarbon dates of 779 ± 41 BP for the baobab of Jhunsi and 793 ± 37 BP for that of Kintoor. These dates correspond to calibrated ages of 770 ± 25 and 775 ± 25 yr. According to these values, the age of both baobabs is close to 800 yr. Thus, the baobab of Jhunsi, Allahabad and the Parijaat tree at Kintoor are the oldest dated African baobabs outside Africa. 
The general state of deterioration and the low water content of stems indicate that the two sacred baobabs are in decline, close to the end of their life cycle.
An oxide coating impedance measurement during micro-arc oxidation An impedance instrument converter for oxide layers formed by micro-arc oxidation (MAO) has been developed. It allows continuous non-destructive testing of the electric parameters of the synthesized coatings (resistance and capacitance) and of the electrolyte conductivity to assess the degree of degradation. The instrument converter consists of a generator, a measuring circuit and a repeater, and has a digital output. The modified ammeter-voltmeter method is used as the impedance measurement technique. High accuracy of the resistance, capacitance and conductivity measurements (the total relative error is no more than 0.5%) is achieved through the functional and metrological analysis performed on the converter measurement channels, as well as through metrological tests. Introduction Currently, impedance spectroscopy is widely used in industry and science. Because of its high information content and the possibility of non-destructive measurements, this method finds many applications. It is used in medicine for human body composition assessment and the diagnosis of various diseases [1,2]; for food quality control [3]; in forestry to analyze wood structure and improve the efficiency of logging [4]; for the study of conducting liquids [5] and thermoelectric materials [6]; for monitoring and adjusting the operation of DC microgrids [7], etc. A promising application of impedance spectroscopy is the automation of the technological processes of depositing protective coatings onto the surfaces of machine parts and devices, in particular micro-arc oxidation (MAO). This process requires continuous non-destructive testing of the properties of the synthesized coatings, which cannot be achieved with traditional measurement techniques. However, the application of impedance spectroscopy makes it possible to bypass this limitation. The impedance instrument converter developed in [8] allows impedance measurements to be made during MAO processing using a pulse test signal with a frequency sweep. The advantage of this approach is the high speed of measurements, but it has low accuracy. To eliminate this disadvantage, an impedance instrument converter integrated into the intelligent automated research technological MAO installation [9] was developed. This article is devoted to the structure and metrological support of this instrument converter. Structure of the MAO coating impedance instrument converter The impedance instrument converter (figure 1) is designed to measure the impedance of a galvanic cell over a frequency range in order to determine the electrical characteristics of the oxide layers (resistance and capacitance) and the electrolyte conductivity, to monitor the degree of its depletion. The instrument converter implements a modified ammeter-voltmeter method. A sinusoidal test signal Uin of controlled frequency from the generator is fed to the measuring circuit MC. MC is a capacitive voltage divider, the upper arm of which is the sample under investigation (the galvanic cell, or two additional electrodes for measuring the electrolyte conductivity). The lower arm of the divider is the exemplary measure (an RC circuit with switchable component values). The input voltage Uin and output voltage Uout of the measuring circuit are fed to the ADC, and the impedance is then calculated in software. The impedance of the galvanic cell has active and capacitive components [10]. For this reason, it can be thought of as a parallel RC circuit that contains a resistor Rx and a capacitor Cx. 
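As a rough illustration of the measurement principle described above, the sketch below models the galvanic cell as a parallel Rx-Cx circuit and computes the divider output seen by the ADC, assuming the sample forms the upper arm and the exemplary RC measure the lower arm. All component values, the frequency and the 1 V test amplitude are invented for the example and are not the authors' circuit parameters.

```python
import numpy as np

def parallel_rc_admittance(r_ohm, c_farad, f_hz):
    """Complex admittance Y = 1/R + j*omega*C of a parallel RC circuit."""
    omega = 2.0 * np.pi * f_hz
    return 1.0 / r_ohm + 1j * omega * c_farad

def divider_output(u_in, y_sample, y_ref):
    """Output voltage across the reference (lower) arm of a two-element divider
    whose upper arm is the sample: Uout = Uin * Zref / (Zsample + Zref)."""
    z_sample = 1.0 / y_sample
    z_ref = 1.0 / y_ref
    return u_in * z_ref / (z_sample + z_ref)

# Illustrative values: a 10 kOhm / 100 nF coating above a 1 kOhm / 1 uF reference, at 1 kHz.
f = 1e3
y_x = parallel_rc_admittance(10e3, 100e-9, f)   # sample (coating) admittance
y_0 = parallel_rc_admittance(1e3, 1e-6, f)      # exemplary measure admittance
u_in = 1.0 + 0j                                  # 1 V test signal, zero phase
u_out = divider_output(u_in, y_x, y_0)
print(abs(u_out), np.angle(u_out))               # amplitude and phase seen by the ADC
```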
They describe the resistance and capacitance of the coating, respectively. It is convenient to analyze such an RC circuit through its complex conductivity Yx. The exemplary measure, in the simplest case, is also an RC circuit, formed by elements R0 and C0, and has a complex conductivity Y0. Using Ohm's law and the voltage divider formula, we obtain an analytical functional model of the measured value, where Uin and Uout are the input and output voltages of the instrument converter, respectively, and K is a complex coefficient (sensitivity) with real part Re(K) and imaginary part Im(K), calculated from Uin, φ1, Uout, φ2, the modules and arguments (amplitudes and phases) of the instrument converter input and output voltages (i is the imaginary unit). The complex admittance of the sample under investigation is then expressed through G0 and B0, the conductance and susceptance of the exemplary measure. Functional and metrological analysis of the impedance instrument converter The structural metrological model of the impedance instrument converter is shown in figure 2. Since, according to equation (1), the indirect measurement of the complex admittance requires knowledge of the input and output voltages of the measuring circuit, the instrument converter includes two instrumentation channels for these quantities. Based on the metrological model, the conversion functions of the input and output voltage, reduced to the ADC input, are obtained and determined by equations (8) and (9), respectively, where Uin is the input voltage; Uout1 and Uout2 are the output voltages of the measurement channels; Nin and Nout are the ADC codes corresponding to the input and output voltage; qADC is the nominal ADC quantization step; and SMC, Sr, SM are the sensitivities of the measuring circuit, voltage repeater and multiplexer, respectively. Consider the errors of the measurement channel of the output voltage. The sum of the multiplicative errors reduced to the input is expressed through A3-A5, weighting coefficients; δSMC, δSr, δSM, the sensitivity errors of the measuring circuit, voltage repeater and multiplexer, respectively; δm1, δm2, the errors in matching the measuring circuit with the voltage repeater and the voltage repeater with the multiplexer, respectively; and δADC, the relative error of the nominal quantization step of the ADC. The sum of the additive errors relative to the input is expressed through ΔaMC, Δar, the additive errors of the measuring circuit and voltage repeater, and ΔaADC, ΔqADC, the ADC additive and quantization errors. The linearity error relative to the input is expressed through ΔnlADC, the ADC linearity error. Similarly, we find the errors of the input voltage measurement channel: the sum of the multiplicative errors relative to the input, the sum of the additive errors relative to the input, and the input linearity error. From these, the main static instrumental errors relative to the output voltage, ΔimpUout, and relative to the input voltage, ΔimpUin, are obtained. The metrological tests performed have shown that the full relative error of the MAO coating impedance instrument converter is less than 0.5%. 
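Because the paper's equations (1), (8) and (9) are not reproduced here, the following sketch shows one way to invert the same divider relation and recover Rx and Cx from the two measured phasors. It is a rearrangement of my own under the arrangement assumed in the previous sketch (sample in the upper arm, reference RC in the lower arm), not necessarily the exact functional model used by the authors.

```python
import numpy as np

def sample_admittance(u_in, u_out, y_ref):
    """Invert Uout = Uin * Yx / (Yx + Y0) (divider with the sample in the upper arm):
    with K = Uin/Uout it follows that Yx = Y0 / (K - 1)."""
    k = u_in / u_out
    return y_ref / (k - 1.0)

def rx_cx_from_admittance(y_x, f_hz):
    """Parallel-equivalent resistance and capacitance from the complex admittance."""
    omega = 2.0 * np.pi * f_hz
    return 1.0 / y_x.real, y_x.imag / omega

# Round trip with the illustrative values from the previous sketch:
f = 1e3
y_0 = 1.0 / 1e3 + 1j * 2 * np.pi * f * 1e-6          # reference: 1 kOhm / 1 uF
y_x_true = 1.0 / 10e3 + 1j * 2 * np.pi * f * 100e-9   # coating: 10 kOhm / 100 nF
u_in = 1.0 + 0j
u_out = u_in * (1.0 / y_0) / (1.0 / y_x_true + 1.0 / y_0)
y_x = sample_admittance(u_in, u_out, y_0)
print(rx_cx_from_admittance(y_x, f))                   # ~ (10000.0, 1e-07)
```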
Conclusion Thus, the developed MAO coating impedance instrument converter provides high-precision measurements of the electrical characteristics of the formed oxide layers and of the electrolyte throughout the MAO process, which makes it possible to study their dependence on various processing parameters. The developed instrument converter is planned to be used in the future to establish a correlation between the electrical and structural parameters of the coating (thickness and porosity), which will allow the use of impedance spectroscopy for effective control of the MAO process.
Measuring Effectiveness of Metamorphic Relations for Image Processing Using Mutation Testing Testing an intricate plexus of advanced software system architecture is quite challenging due to the absence of test oracle. Metamorphic testing is a popular technique to alleviate the test oracle problem. The effectiveness of metamorphic testing is dependent on metamorphic relations (MRs). MRs represent the essential properties of the system under test and are evaluated by their fault detection rates. The existing techniques for the evaluation of MRs are not comprehensive, as very few mutation operators are used to generate very few mutants. In this research, we have proposed six new MRs for dilation and erosion operations. The fault detection rate of six newly proposed MRs is determined using mutation testing. We have used eight applicable mutation operators and determined their effectiveness. By using these applicable operators, we have ensured that all the possible numbers of mutants are generated, which shows that all the faults in the system under test are fully identified. Results of the evaluation of four MRs for edge detection show an improvement in all the respective MRs, especially in MR1 and MR4, with a fault detection rate of 76.54% and 69.13%, respectively, which is 32% and 24% higher than the existing technique. The fault detection rate of MR2 and MR3 is also improved by 1%. Similarly, results of dilation and erosion show that out of 8 MRs, the fault detection rates of four MRs are higher than the existing technique. In the proposed technique, MR1 is improved by 39%, MR4 is improved by 0.5%, MR6 is improved by 17%, and MR8 is improved by 29%. We have also compared the results of our proposed MRs with the existing MRs of dilation and erosion operations. Results show that the proposed MRs complement the existing MRs effectively as the new MRs can find those faults that are not identified by the existing MRs. Introduction In the domain of computer graphics, the importance of Image Processing Applications (IPAs) is growing fast in our daily lives [1].IPAs utilize algorithms to analyze the characteristics of an image using various methods and techniques.Digital images can be rotated, scaled, translated, and sheared by using geometric transformation.Also, in binary and grayscale images, different morphological operations such as erosion, dilation, skeletonization, and opening and closing operations are used for filtering, thinning, and pruning of the images [2]. Nowadays, IPAs are widely used in safety and mission-critical systems such as medical radiology, biometric systems, surveillance systems, etc. 
[1].In medical radiology, machine learning and deep learning approaches are frequently used for automated diagnostics for patients using medical images such as MRI, CT Scan, ultrasound, etc.This diagnostic process involves some pre-processing steps, such as edge detection, and post-processing steps, such as dilation and erosion operations.Any defects in these operations will materially affect the diagnostics results.Testing of the software used in these critical systems is vital to ascertain the credibility of the results produced by these systems.Software testing is a common method to test and verify the quality of IPA software [3].In software testing, an oracle is a mechanism that ascertains whether the software has been successfully executed for a test case or not.The Software is run for a specific test case, and the result (actual output) is compared with the anticipated result (expected output).If the output differs from what was anticipated, the program is said to be faulty [4].Testing of IPAs is especially challenging due to the test oracle problem.For example, in image processing, edge detection is an operation that is used to compute the edges of the image.If we want to check whether the edges computed by the edge detection operator are correct or not, then we do not have the reference image (expected output) for comparison.This is the well-known oracle problem where the expected results are not obvious. Among many solutions to test the oracle problem, metamorphic testing (MT) is the most popular technique that tackles the oracle problem in software testing of IPAs [5].MT was first proposed by Chen et al. in 1998 [6].In MT, we need source test cases that manifest the unexpected behavior in the system under test (SUT) [7].The source test cases are generated through traditional test case generation techniques such as random test case generation, coverage criterion, etc. From these source test cases, a set of new test cases known as follow-up test cases are constructed using metamorphic relations (MR) [8].MT defines some MRs, which consist of an input relation and an output relation.If the output results of the source and follow-up test cases obtained from SUT satisfy the output relation, then the program is highly reliable.Otherwise, the program will have logical errors [9].The steps involved in MT are shown in Figure 1.The reliability of our test results is a function of the efficacy of MT, which is dependent on the effectiveness of MRs.One of the important metrics used to evaluate MRs is the fault detection rate of that particular MR.The fault detection rate shows that either the selected test cases are able to detect faults or not (can we find violations of MRs for the corresponding test cases?)[10].The fault detection rate is measured as the number of faults detected by the selected source test cases divided by the number of faults detected by the total number of test cases [11]. In our proposed framework, we have studied the fault detection capabilities of MRs.For the evaluation of MRs, we have initially selected four existing MRs of edge detection operation proposed by Sim et al. [12].We have proposed six MRs for dilation and erosion operationsWe have also ascertained the fault detection capabilities of our proposed MRs (four general and two specific) for dilation and erosion operations. 
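The MT workflow just outlined (source test case, MR-based transformation into a follow-up test case, execution of both on the SUT, check of the output relation) can be captured in a few lines. The sketch below is a generic harness; the sine relation sin(x) = sin(pi − x) is used only as a familiar stand-in MR and is not one of the MRs evaluated in this paper.

```python
import math
from typing import Any, Callable

def metamorphic_check(sut: Callable[[Any], Any],
                      source_input: Any,
                      input_transform: Callable[[Any], Any],
                      output_relation: Callable[[Any, Any], bool]) -> bool:
    """Run the source and follow-up test cases through the SUT and return True
    if the metamorphic relation holds (no violation detected)."""
    follow_up_input = input_transform(source_input)   # follow-up test case from the MR
    source_output = sut(source_input)
    follow_up_output = sut(follow_up_input)
    return output_relation(source_output, follow_up_output)

# Illustration: sin(x) == sin(pi - x), checked on a few source test cases.
sut = math.sin
transform = lambda x: math.pi - x
relation = lambda y1, y2: math.isclose(y1, y2, abs_tol=1e-12)
for x in (0.1, 0.7, 1.3):
    assert metamorphic_check(sut, x, transform, relation)
print("no MR violation found")
```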
The existing literature shows that Mayer and Guderlei [13] first proposed four general MRs for Euclidean distance transform.These four MRs (rotation at 90 degrees, transposition, reflection at ordinate, and reflection at abscissa) are generally applicable to all image processing operations.Furthermore, Jameel et al. [14] furthered the research by using two of these four to ascertain the fault detection rate of dilation and erosion MRs.In total, these authors have presented eight MRs (two general and six specific) for dilation and erosion operation.Jameel et al. [14] used only two general MRs, i.e., reflection at ordinate and reflection at abscissa.However, the fault detection rate of the remaining two MRs (rotation and transposition) is not determined.Therefore, we have proposed rotation and transposition MRs for dilation and erosion operations to ascertain the fault detection rate of these two MRs.The associative property is specific to the dilation operation.We have changed the order of associative property to check whether the new arrangement satisfies the dilation operation or not.This result leads us to present a new MR for dilation operation.Image translation is an operation of image processing.We have checked this operation on both dilation and erosion and come to know that it only satisfies the erosion operation.In this way, we have proposed a new MR for erosion operation. After the selection and identification of MRs, we generated the source test cases through a criterion proposed by Jafari et al. [15].In the paper, we (the authors) have discussed in detail how source test cases are generated using the black box testing technique (equivalence class testing) and the white box testing technique (coverage criterion).We have used 95 test cases of MRI brain images for our experiments taken from www.kaggle.com.Later, follow-up test cases are generated using source test cases and MRs.Both the source and follow-up test cases are given to the SUT.In this paper, we have the following three SUTs, i.e., edge detection, dilation, and erosion.The relation between the outputs of both the source and follow-up test cases is checked.If the MR holds between the outputs of two test cases, then the SUT has no faults; otherwise, the SUT is faulty. Afterward, mutation testing is performed to evaluate MRs.Mutation operators always play an important role in generating the mutants.In existing literature [12,14,16,17], only a few mutation operators are used that have generated a very small number of mutants.The authors did not discuss the effectiveness of mutation operators or which operator is effective enough to generate and kill a maximum number of mutants.We have used nine mutation operators and evaluated which operator is most effective in generating and killing a maximum number of mutants. In the mutation process, we ran the original program on source test cases and then ran the original program on the follow-up test cases.The outputs of both the test cases are recorded for comparison.In the second phase of testing, we ran the same two test cases on the mutated program.The outputs of these test cases are also recorded for comparison.If outputs of both original and mutated test cases satisfy their related MR, then it shows that the mutant is not killed; otherwise, the mutant is killed.Afterward, the mutation score is calculated to check the fault detection rate of each MR.If the mutation score is near 1, then it shows that the MR is strong, or else the MR is weak enough to find the violation. 
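The kill decision described above reduces to two boolean checks; a minimal sketch, assuming the output relation is available as a boolean predicate, is shown below.

```python
from typing import Any, Callable

def mutant_killed(original: Callable[[Any], Any], mutant: Callable[[Any], Any],
                  source_input: Any, follow_up_input: Any,
                  relation: Callable[[Any, Any], bool]) -> bool:
    """Kill decision as described in the text: if the MR is satisfied by both the
    original and the mutated program on the (source, follow-up) pair, the mutant is
    not killed; otherwise it is killed. In practice the MR is expected to hold for
    the original program."""
    holds_on_original = relation(original(source_input), original(follow_up_input))
    holds_on_mutant = relation(mutant(source_input), mutant(follow_up_input))
    return not (holds_on_original and holds_on_mutant)
```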
This paper makes the following contributions: • We have proposed six new MRs for dilation and erosion operations and ascertain the effectiveness of these MRs while also assessing improvements in them using mutation testing.• We have compared our six proposed MRs with the eight existing MRs for dilation and erosion operations. • In existing literature, only two mutation operators are used for the evaluation of edge detection and morphological image operations.We have used nine mutation operators to improve the effectiveness of edge detection and morphological image operation (dilation and erosion) MRs.We have also compared the result of our proposed framework with the existing techniques.• We have also checked the effectiveness of mutation operators to determine which operator is more effective in generating and killing a maximum number of mutants. This paper is organized as follows; Section 2 discusses the related work.Section 3 describes the existing and newly proposed MRs.Section 4 discusses the proposed framework for the evaluation of MRs.In Section 5 experiment design is narrated.Section 6 discusses the results and discussion whereas Section 7 describes the conclusion. Related Work In literature review, we have covered those papers where MRs are evaluated to improve the effectiveness of MT.MT is a common technique to improve the test oracle problem where it is hard to assess the output correctly when an arbitrary input has been given to the SUT [18]. Many researchers have used different image processing operators, such as edge detection, image region growth, dilation and erosion, and used their properties as metamorphic relations.The effectiveness of these MRs is checked through mutation testing.Sim et al. [12] proposed a framework to determine the effectiveness of MT.To conduct the experiments, collections of images are needed for the generation of test inputs.Unlike model-generated images, camera-captured images (real images) from published image libraries are selected randomly.Mutation testing is used to evaluate the fault detection rate of MT.Single operator faults and stride implementation faults are seeded into the Sobel edge detection program.In single-operator faults, two types of operators are used: logical operator replacement (LOR) and relational operator replacement (ROR).Results show that MT is capable of detecting faulty edge detection programs up to 90%.Jameel et al. [14] discussed the oracle problem in IPAs and showed how SUT properties could be used as MR.The authors have studied some properties of morphological image operations.The effectiveness of MRs can be analyzed through mutation testing.In order to conduct the experiments, input images are selected randomly.Mutation testing is used to show the effectiveness of the above-mentioned MRs.Therefore, errors are deliberately added to the Mex C code.The mutation score tells the number of killed mutants.The mutant is said to be killed if an MR is able to detect the bug.It is concluded that for bug identification, specific images are needed instead of general input images such as Lena.Jiang et al. 
[19] applied MT to the image region growth program.Mutation testing is used to find the effectiveness of MRs.In this paper, MT is applied to test the aerospace image processing software.A segmental symbolic evaluation method is used to generate the input data.The original program implemented in C language is executed sequentially with three mutant programs.The program is said to be faulty if an MR violation can be seen after the validation of output relations. Many researchers have been fascinated by the use of MT techniques in machine learning algorithms as well.Jameel et al. [20] used support vector machine (SVM) to automate the interpretation of the output results of test oracle requirements.These authors have designed a comparative study to gauge the effectiveness of their proposed scheme against the latest MT oracle technique and the traditional statistical oracle method.Thirtyfive distinctive errors are introduced to the original program written in C language to create 35 unique resultant programs.For evaluation purposes, these authors have created the output images from these 35 versions of the image dilation program for pass or fail criteria.Half of the selected images are used to train SVM using various features (wavelet features, binary features, hough features, statistical features) of dilated images to analyze their effectiveness.The results confirmed that SVM was better in terms of the lowest classification error than the other two techniques.Chan et al. [21] integrated the pattern classification technique with MT.A trained classifier (C4.5) is employed for the test oracle by labeling pass/fail.The passed test outputs may also show false positive/negative failures, which are then processed for additional testing.This proposal has proven to be efficient and effective. MT techniques have also been used with structural testing.Ding et al. [17] used a discrete dipole approximation program (ADDA) implemented in FORTRAN and C++ to check the effectiveness of MT.In this paper, statement coverage is used to check the effectiveness of test cases, whereas mutation testing is used to check the effectiveness of MT.Due to the unknown test output relations, the MRs of this program are considered weak and inadequate.Ding and Hu [16] developed a method for the adequacy of MRs.Coverage criterion, mutation analysis, and mutation tests for testing MRs are critical factors in evaluating the adequacy of MRs.An image processing program that is used to reconstruct a 3D biological cell is used to explain the author's proposed theory.A case study is performed using a complex Monte Carlo program to gauge the effectiveness of this proposed framework.The results prove the utility of their proposed method for the testing of other scientific software as well.Table 1 shows the summary of related work.After studying the literature, some of the research gaps are identified and are given below: • In literature, test cases are selected and generated randomly.Random selection leads to an unfair distribution of parametric values, which ultimately affects the testing process. • In existing techniques, MR evaluation is conducted through mutation testing.This evaluation is not comprehensive, as only a few mutation operators are used to check the fault detection rate of MRs.The total number of mutants generated through these mutation operators is quite low which makes the testing weak. 
• In existing literature, no work has been conducted to check the effectiveness of mutation operators. It is not highlighted which operator is more valuable for generating and killing the maximum number of mutants. Metamorphic Relations In MT, the central element is the set of MRs, which are the necessary properties of the SUT or the algorithm [22]. An MR plays a significant role in MT as it validates the relations between the test outputs of a program having a test oracle problem. Generally, an MR is a property of a function f having inputs x1, x2, x3, ..., xn, where n is greater than 1, and corresponding outputs f(x1), f(x2), f(x3), ..., f(xn) [23]. The identification of MRs requires expert knowledge in the field of Image Processing (IP) as well as guidance provided by experience (Mayer et al. [13], Jameel et al. [14]). In this paper, we have worked on the MRs of edge detection and two morphological image operations, i.e., dilation and erosion. MRs for Edge Detection We have used the MRs of edge detection proposed by Sim et al. [12]. The complete details of these MRs are given in [12,15]. The MRs are shown in Table 2, where E is the Sobel edge detection and Im is the input image. MRs for Dilation and Erosion In this section, we describe the existing and proposed MRs for the dilation and erosion operations. Existing MRs for Dilation and Erosion The existing MRs proposed by Jameel et al. [14] for dilation and erosion operations are given in Table 3. The details of these MRs are given in [14]. Proposed MRs for Dilation and Erosion We have proposed six new MRs for dilation and erosion operations. Our proposal consists of four general and two specific MRs of dilation and erosion. Table 4 lists our proposed MRs with their mathematical properties. The details of these MRs are given below. Counter-clockwise rotation at 90 degrees: Im is the input image, C(.) is the counter-clockwise rotation at 90 degrees, and δs and εs denote the dilation and erosion operations. The image output of counter-clockwise rotation at 90 degrees followed by a morphological operation should be similar to the image output of the morphological operation followed by counter-clockwise rotation at 90 degrees. Transposition: T(.) is the transpose of an image. The image output of transposition followed by dilation or erosion should be similar to the image output of dilation or erosion followed by transposition. Enhanced associative property (MR5): A is the input image, and B and C are the structuring elements. An image dilated with structuring element B and then dilated with structuring element C should give the same result as first dilating the image with structuring element C and then dilating with structuring element B. This property is specific to dilation, as erosion does not fulfill it. Image translation: Trans(.) is the image translation. The output of image translation followed by erosion should be similar to the output of erosion followed by image translation. This property is not applicable to the dilation operation. Evaluation of Metamorphic Relations In this section, we discuss the historical evaluations of MRs from the existing literature. We also suggest a new method to evaluate MRs in our proposed framework. 
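The rotation and transposition relations above can be checked directly in code. The sketch below uses OpenCV's dilate and erode with a 3x3 square structuring element; the relations hold exactly only when the structuring element is itself invariant under the same transformation, which is an assumption of this illustration rather than a detail taken from Table 4, and the random binary image merely stands in for a real test image.

```python
import numpy as np
import cv2

def mr_rotation_holds(img: np.ndarray, op, kernel: np.ndarray) -> bool:
    """MR: op(C(Im)) == C(op(Im)), with C a counter-clockwise 90-degree rotation.
    np.rot90 rotates counter-clockwise by default; OpenCV needs contiguous input."""
    rotated = np.ascontiguousarray(np.rot90(img))
    return np.array_equal(op(rotated, kernel), np.rot90(op(img, kernel)))

def mr_transposition_holds(img: np.ndarray, op, kernel: np.ndarray) -> bool:
    """MR: op(T(Im)) == T(op(Im)), with T the image transpose."""
    transposed = np.ascontiguousarray(img.T)
    return np.array_equal(op(transposed, kernel), op(img, kernel).T)

kernel = np.ones((3, 3), np.uint8)                 # symmetric structuring element (assumption)
dilate = lambda im, k: cv2.dilate(im, k, iterations=1)
erode = lambda im, k: cv2.erode(im, k, iterations=1)

img = (np.random.rand(64, 64) > 0.5).astype(np.uint8) * 255   # stand-in for a binary test image
for name, op in (("dilation", dilate), ("erosion", erode)):
    print(name, mr_rotation_holds(img, op, kernel), mr_transposition_holds(img, op, kernel))
```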
The evaluation of MRs in MT involves assessing how well these relations can effectively guide the generation of additional test cases and help verify the correctness of a software system. The source and follow-up test cases are executed, and their outputs are verified against the relevant MRs; any violation implies that the software under test is faulty. MRs that are not satisfied easily are considered strong. The higher the number of test cases that satisfy the MR, the weaker the MR. Suppose we have a program P that computes the sin function, and two MRs for it. The MR that satisfies the relation on the maximum number of test cases is considered weak, and vice versa. For all positive values of x, MR1 is better than MR2, as MR2 satisfies the relation for all positive values, and thus MR2 will always be a weak MR. The evaluations of MRs based on a random selection of source test cases using existing techniques are not comprehensive with respect to the truly random generation of source test cases. The available sample population from which random source test cases are selected is missing many types of image characteristics, such as dimension, resolution, bit depth, type of image, etc. Sim et al. [12] and Jameel et al. [14] selected the images randomly from various published image libraries available online. Each library has a set of images with only one or two image characteristics. If we create a new library using all the images from these libraries, many image properties will still be missing. By selecting the test cases randomly from these libraries, it is probable that all the selected test cases may cover only one property of the image while ignoring the others. This will definitely affect the testing process because of the lack of diversity in test images. Furthermore, in existing literature, mutation testing is used for the evaluation of MRs. As discussed earlier, mutation operators play a dominant role in generating a significant number of mutants. In existing techniques, Sim et al. [12] used two mutation operators, i.e., ROR and LOR, whereas Jameel et al. [14] used only one mutation operator (ROR) for the generation of mutants. These two operators only produce a maximum of 33 mutants, which is not a significant number for the purpose of evaluation. So, more mutation operators are needed for the extensive evaluation of MRs. Figure 2 shows the evaluation process of our proposed framework. Source Test Case Generation The foremost step is the generation of source test cases (original test cases). In the literature, source test cases are generated either through traditional test case generation techniques, as discussed earlier, or through a tool such as EvoSuite (which generates source test cases automatically through a coverage criterion) [5]. Recently, some research has emerged in the direction of generating and selecting source test cases that are effective in fault detection [24]. As discussed in the literature, the general source test case generation criterion is to generate the test cases randomly. It is probable that randomly generated test cases may cover only one characteristic of the image while ignoring the other ones. Sim et al. 
Sim et al. [12] also suggested that considering the characteristics of the image can improve the mutation score of MRs. Keeping this in mind, we have proposed a source test case generation criterion in our previous paper. In this criterion, we have generated the source test cases through equivalence class testing and a coverage criterion. In equivalence class testing, we have considered the attributes of images and grouped them into five distinct classes: horizontal dimension, vertical dimension, resolution, bit depth, and image type. The details of the formation of source test cases are given in [15]. We have used the same 95 test cases for our experiments. Afterward, the adequacy of the source test cases is checked through program coverage (statement coverage and branch coverage). If the test cases (accumulatively) do not achieve 100% coverage, then more test cases are needed to achieve 100% branch coverage.
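As a sketch of how such an equivalence-class selection might look in code, the snippet below groups candidate images by the five attribute classes named above and keeps one representative per combination. The concrete class boundaries and the helper names are hypothetical and are not taken from [15].

```python
# Illustrative equivalence-class-based selection of source test cases.
# Class boundaries and names are hypothetical, not those used in [15].
from itertools import groupby

def attribute_key(meta):
    """Map an image's metadata to a tuple of equivalence classes."""
    return (
        "wide" if meta["width"] >= 512 else "narrow",     # horizontal dimension
        "tall" if meta["height"] >= 512 else "short",     # vertical dimension
        "high_res" if meta["dpi"] >= 96 else "low_res",   # resolution
        meta["bit_depth"],                                 # bit depth
        meta["image_type"],                                # e.g., grayscale or RGB
    )

def select_source_test_cases(image_metadata):
    """Keep one representative image per combination of equivalence classes."""
    ordered = sorted(image_metadata, key=attribute_key)
    return [next(iter(group)) for _, group in groupby(ordered, key=attribute_key)]
```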
Metamorphic Testing
In IP, it is hard to test a program without a test oracle, as the outcome is not predictable. In MT, we can detect whether a test case has passed or failed by generating new test cases for further evaluation. The steps of MT are discussed in Figure 1. In MT, an MR transforms the existing source test cases into new test cases known as follow-up test cases [25]. When the follow-up test cases are generated, both test cases are given to the SUT. By executing the SUT, the output data is generated for each test case [23]. Processing of the SUT is conducted by comparing the output relation of source and follow-up test cases. The satisfaction of the MR shows that the SUT is not faulty, whereas the dissatisfaction of the MR shows that the SUT is faulty.

MR Evaluation Using Mutation Testing
Evaluation of MRs is an indication of their fault detection capabilities. The greater the fault detection capability, the greater the ability to detect more faults in a program. We have checked the fault detection rate of these MRs through mutation testing. Mutation operators play an important role in generating mutants in mutation testing. Previously, very few mutation operators were used, which did not even highlight the effectiveness of these operators. We have used nine mutation operators to check which operator is most effective for generating and killing a significant number of mutants. Our work is relevant to [12,14], so we have compared the mutation operators used in our proposed framework with these two techniques. The mutation operators used in the existing techniques [12,14] and in the proposed framework are given in Table 5. If the output of source and follow-up test cases satisfies the relation, then mutation testing can be performed by seeding faults into the original program to check for MR violation. The process of checking the original and mutated programs is explained through an example. Suppose we have two test cases, t 1 (source test case) and t ′ 1 (follow-up test case), that have to be tested under the original program p. The outputs of tests t 1 and t ′ 1 can be recorded as r 1 and r ′ 1 . Afterward, the same test cases, t 1 and t ′ 1 , can be run on the mutated program p ′ with mutant m. Record the outputs as r 2 and r ′ 2 . If both (r 1 , r ′ 1 ) and (r 2 , r ′ 2 ) satisfy their related MR, then the mutant m is not killed [19]. Otherwise, the mutant is killed. After mutation testing, the mutation score is calculated. The mutation score indicates the fault detection rate of each MR. In the existing literature, i.e., [12,14,16,19], all the MRs are evaluated by seeding faults manually in the code or through a tool that calculates the mutation score automatically through the traditional mutation testing approach. We have seeded the faults manually and checked the relation manually on both programs (original and mutated) for the calculation of the mutation score.

The mutation score indicates the fault detection rate (FDR) of an MR. For the calculation of the mutation score, we have examined the mutants manually and have removed all the equivalent mutants. The formula for the mutation score is given below:

Mutation score = Number of mutants killed / (Total number of mutants − Number of equivalent mutants)  (1)

According to the formula, if the mutation score is 1, then the MR is strong (high fault detection rate), whereas a score of 0 shows the MR is weak (low fault detection rate). We can also say that if the mutation score is near 0, the MR is too weak to find the violations, while on the other hand, if the mutation score is near 1, the MR is strong.
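The kill decision and the mutation score can be illustrated with a toy example. The program under test (a sine routine), the MR (sin(x) = sin(π − x)), and the seeded mutant below are all hypothetical and are chosen only to make the comparison of (r1, r′1) against (r2, r′2) concrete; they are not the subject programs or faults used in this study.

```python
# Toy illustration of the kill decision and the mutation score (Equation (1)).
# The program, the MR, and the mutant are hypothetical examples.
import math

def original_program(x):
    return math.sin(x)

def mutated_program(x):
    return math.sin(0.9 * x)   # seeded fault, e.g., an arithmetic constant mutation

def mr_satisfied(program, x, tol=1e-9):
    """MR: sin(x) = sin(pi - x). Source test case x, follow-up test case pi - x."""
    r_source = program(x)              # r1 on the original, r2 on the mutant
    r_followup = program(math.pi - x)  # r'1 on the original, r'2 on the mutant
    return abs(r_source - r_followup) <= tol

def mutation_score(source_tests, mutants):
    """Killed mutants / (total mutants - equivalent mutants); here we assume
    equivalent mutants have already been removed."""
    killed = 0
    for mutant in mutants:
        # A mutant is not killed only if both the original and the mutated
        # program satisfy the MR on every test case; otherwise it is killed.
        if any(not (mr_satisfied(original_program, x) and mr_satisfied(mutant, x))
               for x in source_tests):
            killed += 1
    return killed / len(mutants)

print(mutation_score([0.1, 0.5, 1.0, 2.0], [mutated_program]))  # 1.0: the mutant is killed
```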
• In the proposed framework, we have used nine applicable mutation operators to ascertain the effectiveness of MRs. The higher the number of mutation operators used, the higher the number of faults detected. The use of these operators contributes to the improvement of software quality, the effectiveness of test suites, and improvement in test coverage.
• We have proposed new MRs in the field of IP. Overall, proposing new metamorphic relations in image processing contributes to the advancement of testing methodologies, algorithm validation techniques, and research practices, ultimately leading to more reliable, robust, and efficient image processing systems and applications.

Experiment Design
In this section, we have discussed the details of the SUT used for our experiment: source code, dataset, original test cases, coverage criterion, and mutation operators used.

Proposed Evaluation
The subject programs in this paper consist of the Sobel edge detection program and the dilation and erosion programs. The properties of edge detection and morphological operations are used as MRs. In IP, edge detection plays a vital role in identifying the immediate changes in grayscale images. Identifying the edges of images can be invaluable for different real-world applications [26]. Similarly, dilation and erosion are the main morphological operations that increase or decrease the region of the image according to the structuring element [2]. The inputs and outputs of the edge detection program and the dilation and erosion programs are given in Figure 3. Figure 3 shows the sample inputs of MRI brain images and their expected output images after performing edge detection, the dilation operation, and the erosion operation.
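For readers unfamiliar with the subject programs, the snippet below sketches a Sobel edge detector of the kind used as the first subject program, producing a gradient-magnitude image from a grayscale input. It relies on OpenCV's Sobel operator for brevity and is not the implementation listed in Table 6.

```python
# Minimal Sobel edge detection sketch (illustrative, not the studied implementation).
import cv2
import numpy as np

def sobel_edges(gray_image):
    """Return an 8-bit gradient-magnitude image for a grayscale input."""
    gx = cv2.Sobel(gray_image, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray_image, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient
    magnitude = np.hypot(gx, gy)                             # combine both gradients
    return np.uint8(255 * magnitude / (magnitude.max() + 1e-12))  # scale for display
```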
Source Code
We have used well-structured code for Sobel edge detection and for the dilation and erosion operations, written in Python version 3.8.3, for our implementation. The code for edge detection consists of 41 statements and 10 branches. Similarly, the code for dilation and erosion consists of 46 statements and 12 branches, respectively. The sources of the above codes are given in Table 6.

Dataset
For our study, a diversified collection of MRI brain images was taken from https://www.kaggle.com/datasets/abhranta/brain-tumor-detection-mri?resource=download (accessed on 17 February 2024). The dataset consists of 1500 images having brain tumors and 1500 images having no brain tumors. The three basic types of images used as test cases are shown in Figure 4.

Source Test Cases
We have selected 95 source test cases through a black-box testing technique (strong equivalence class testing) and coverage criteria (statement coverage and branch coverage), a criterion proposed in our previous paper [15]. The coverage of the source test cases is checked through statement coverage and branch coverage, respectively. The coverage detail is given in Table 7. As shown in Table 7, the 95 test cases (accumulatively) achieve 100% code coverage in all three programs, so we do not need more test cases for our test suite.
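As a hedged illustration of how statement and branch coverage of a Python subject program could be measured, the snippet below uses the coverage.py API. The driver module and function names are placeholders; this is not necessarily how coverage was collected in this study.

```python
# Illustrative coverage measurement with coverage.py (assumed installed).
# 'edge_detection' and 'run_all_source_test_cases' are placeholder names.
import coverage

cov = coverage.Coverage(branch=True)   # track branch coverage as well as statements
cov.start()

from edge_detection import run_all_source_test_cases  # hypothetical test driver
run_all_source_test_cases()            # execute the 95 source test cases

cov.stop()
cov.save()
cov.report(show_missing=True)          # per-file statement/branch coverage summary
```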
Mutation Operators
Mutation operators cover a wide range of potential faults or mutations that can occur in the code. They encompass various types of changes that may affect the behavior of the program. Each mutation operator targets a specific kind of fault. For example, some operators might mutate arithmetic operators, while others might mutate relational operators or logical operators. For our study, the mutation operators for the Python language are taken from the MutPy tool (GitHub: mutpy/mutpy), a mutation testing tool for Python 3.x source code. There are twenty traditional and seven experimental operators, including the following:
AOD - arithmetic operator deletion
AOR - arithmetic operator replacement
ASR - assignment operator replacement
BCR - break continue replacement
COD - conditional operator deletion
COI - conditional operator insertion
CRP - constant replacement
DDL - decorator deletion

The use of mutation operators is dependent on the source code of a program. We have used all the mutation operators that are applicable according to our source code. The mutation operators used in this research are given below:
AOD → Unary arithmetic operator deletion
AOR → Arithmetic operator replacement
LOR → Logical operator replacement
ROR → Relational operator replacement
OIL → One iteration loop
RIL → Reverse iteration loop
SIR → Slice index remove
SDL → Statement deletion
ZIL → Zero iteration loop
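To make the operators concrete, the snippet below shows what an AOR mutant and an ROR mutant could look like when seeded into small, hypothetical helper functions; the functions are illustrative only and are not taken from the subject programs.

```python
# Illustrative AOR and ROR mutants seeded into hypothetical helper functions.

def gradient_magnitude(gx, gy):
    return (gx ** 2 + gy ** 2) ** 0.5     # original statement

def gradient_magnitude_aor_mutant(gx, gy):
    return (gx ** 2 - gy ** 2) ** 0.5     # AOR: '+' replaced by '-'

def is_edge_pixel(magnitude, threshold):
    return magnitude > threshold          # original comparison

def is_edge_pixel_ror_mutant(magnitude, threshold):
    return magnitude >= threshold         # ROR: '>' replaced by '>='
```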
Results and Discussions
In this section, we discuss the MR evaluation results of our testing methodology in detail.

Effectiveness of Mutation Operators
In this section, we have assessed the effectiveness of the mutation operators by calculating the percentage of mutants generated and mutants killed. The mutation score of each operator shows the FDR of that mutation operator. The formula for the mutation score depicted in Equation (1) is used to calculate the fault detection rate of each mutation operator. The percentage of generated mutants is calculated by the formula given in Equation (2):

M_generated = (No. of M generated by each op × 100) / (Total no. of M generated by all the op)  (2)

In Equation (2), a mutant is denoted by "M" and a mutation operator is denoted by "op".

Effectiveness of Mutation Operators Used in Edge Detection
Table 8 shows the effectiveness of mutation operators in terms of mutants generated and mutants killed by each operator for edge detection. In the existing technique of Sim et al. [12], a total of 31 mutants have been generated by using only two mutation operators, i.e., ROR and LOR. In our proposed framework, we have employed nine mutation operators to generate a total of 162 mutants. It is observed that mutation operators such as AOD, COI, ZIL, and SIR have shown 100% mutation scores in all four MRs. But their percentage with respect to generated mutants is very low, i.e., 3.70%, 1.23%, 2.46%, and 1.23%, respectively. The effectiveness depends on two factors, i.e., the mutation score and the number of mutants generated. An operator that scores a high percentage on one of the two factors and very low on the other is not as effective as one having moderate scores on both factors. So, the AOR operator is the most effective operator because its lowest mutation score is 63% (MR 4) and its highest mutation score is 74% (MR 1), whereas its percentage of generated mutants is 60.49%. RIL is the least effective operator because its lowest mutation score is 0% (MR 4) and its highest mutation score is 100% (MR 2), whereas its percentage of generated mutants is only 1.23%. The SDL operator has achieved a good mutation score of 81.25%, followed by ROR with a mutation score of 73.33% against each MR, whereas their percentages of generated mutants are 9.87% and 18.51%, respectively. The effectiveness of SDL and ROR is almost similar because the mutation score of SDL is 12% higher than that of ROR, whereas the percentage of the ROR operator in terms of mutant generation is 10% higher than that of the SDL operator.

Effectiveness of Mutation Operators Used in Dilation and Erosion
Now, we will discuss the effectiveness of the mutation operators used in the dilation and erosion operations for the proposed MRs. Table 9 shows the FDR (mutation score) and percentage of mutants generated by each mutation operator used in the dilation and erosion operations. In the literature, Jameel et al. [14] used only one mutation operator for the generation of mutants and created only 33 mutants, whereas we used eight mutation operators and produced 130 mutants in total. It is observed from Table 9 that the AOR operator is the most effective operator because it has generated the maximum number of mutants, i.e., 95, and its fault detection rate is greater than 50% against each MR. While the FDR of the ZIL operator is 100%, it has generated only one mutant and has a percentage of 0.76. ROR has a better percentage of generated mutants, i.e., 15%, and a mutation score between 15% and 30%. It is concluded from this section that the AOR operator is the most effective operator in terms of mutants killed and mutants generated in both the subject programs of edge detection and dilation and erosion.

Effectiveness of Metamorphic Relations
The effectiveness of MRs is determined through mutation testing. The FDR depicted in Equation (1) shows the effectiveness of each MR. We have assessed the effectiveness of the existing MRs (edge detection and dilation and erosion) and the proposed MRs (dilation and erosion).
Effectiveness of Edge Detection MRs
The fault detection rate (mutation score) defines the strength of each MR. The FDR is calculated through mutation testing. The FDR of the edge detection MRs is given in Table 10. We have generated a total of 162 mutants for edge detection manually by introducing one fault at a time. It is observed from Table 10 that MR 2 has killed the maximum number of mutants, i.e., 126, followed by MR 1, which has killed 124 mutants. MR 3 and MR 4 have killed the same number of mutants, i.e., 112 mutants each. The last column in Table 10 shows the FDR (in percentage) of each MR.

Effectiveness of Dilation and Erosion MRs
The FDR of the existing MRs of dilation and erosion using our proposed framework is given in Table 11. We have generated a total of 130 mutants for the dilation and erosion operations. Table 11 shows that R 1 has killed the maximum number of mutants, i.e., 70, followed by R 2 and R 7, with 68 killed mutants each. R 6 has killed the least number of mutants, i.e., 45. After calculating the FDR, it is observed that R 1 has the highest FDR of 53.84%, thus making R 1 the most effective MR. R 2 and R 7 have the second-highest FDR of 52.30% each. R 6 has the lowest FDR of 34.61%, thus making this MR the least effective.

Effectiveness of Proposed MRs
As discussed earlier, for the dilation and erosion operations, we have suggested six MRs. Four of the six MRs are general and can be used for the majority of IP operations, while the remaining two MRs are particular to dilation and erosion operations. We have also assessed the effectiveness of our proposed MRs using our proposed framework. The FDR of the proposed MRs is given in Table 12. Table 12 shows that we have generated a total of 130 mutants. It is observed that MR 6 has the highest FDR, killing 68 mutants, followed by MR 4, which killed 65 mutants. MR 1 and MR 3 have killed 59 mutants each. The FDR of MR 2 is the lowest because it has killed 57 mutants. It is observed that the FDRs of our proposed MRs are neither too high nor too low but are significant enough to find the violations in all the respective MRs. However, the most effective MR among the proposed MRs is MR 6 (image translation), which has the highest FDR of 52.30%, followed by MR 4 (transposition in erosion operation) with an FDR of 50%. MR 2 (counter-clockwise rotation at 90 degrees in erosion operation) is the least effective, with an FDR of 43.84%.

Comparison of Proposed Framework with Existing Techniques
We have compared the results of our proposed framework with Sim et al. [12] and Jameel et al. [14]. Table 13 shows the statistics of the existing techniques and the proposed framework. According to the statistics given in Table 13, Sim et al. [12] selected 30 images as source test cases from different image libraries given in [12]. The images used by the authors are very limited and not diverse in nature. The Kodak site has 24 images, and the image compression site has only 15 images. All the images have the same bit depth of 24 and resolution of 96 dpi. All the images have only two dimensions, 768 by 512 or 512 by 768 (Kodak site), and a single dimension of 700 by 525. Jameel et al. [14] have not mentioned the source or the number of test cases selected for their experiments. We have used the dataset of MRI brain images taken from Kaggle.com. The dataset comprises 3000 images with diverse image properties. From these 3000 images, we have selected 95 images using equivalence class testing and code coverage. Sim et al. [12] used two mutation operators and generated only 31 mutants.
Jameel et al. [14] used only one operator and generated 33 mutants. We have used nine mutation operators in the edge detection program and eight operators in the dilation and erosion program to generate 162 and 130 mutants, respectively.

Comparison Results of Edge Detection
We have compared the results of our proposed framework with Sim et al. [12]. The comparison results are given in Table 14. Table 14 shows that in the existing technique of Sim et al. [12], MR 2 has the highest FDR followed by MR 3, whereas in the proposed framework, MR 2 has the highest FDR followed by MR 1. In the existing technique, the FDRs of MR 1 and MR 4 are the same, i.e., 45%, whereas in the proposed framework, the FDRs of MR 3 and MR 4 are the same, i.e., 69.13%. In the proposed framework, the FDRs of MR 1 and MR 4 are far better than in the existing technique, i.e., 76.54% and 69.13%, respectively. The FDRs of MR 2 and MR 3 are also improved by 1%.

Comparison Results of Dilation and Erosion
Jameel et al. [14] evaluated eight MRs of the dilation and erosion operations. We have also evaluated the same MRs using our proposed framework. The comparison results are given in Table 15. Table 15 shows that out of eight MRs, four MRs, i.e., R 1, R 4, R 6, and R 8, have improved FDR using our proposed framework. The FDRs of R 1 (53.84%), R 6 (34.61%), and R 8 (50%) are far better than those of the existing technique, which had FDRs of 15%, 18%, and 21%, respectively. In the existing technique, R 2, R 3, R 5, and R 7 have high FDRs, i.e., 58%, 97%, 50%, and 73%, because the authors used only one mutation operator and considered only one type of fault. In the proposed framework, the FDR of all the MRs is moderate, neither too high nor too low, thus making it effective to find the violations in all respective MRs. In the proposed framework, R 1 is improved by 39%, R 4 is improved by 0.5%, R 6 is improved by 17%, and R 8 is improved by 29%. The FDR of some of the MRs in the existing technique is high because the number of mutants generated is just 31.

Comparison Results of Proposed MRs with Existing MRs of Dilation and Erosion
In this section, we compare the results of our proposed MRs with those of the existing MRs. As described earlier, we have used eight mutation operators for the evaluation of the dilation and erosion MRs and have generated 130 mutants in total. The number of mutants against each mutation operator is depicted in Table 16. In the AOR operator, we have used six types of faults: addition (+), subtraction (−), multiplication (*), division (/), exponent/power (**), and floor division (//). It is observed that there are nine arithmetic faults (collectively) that are identified by the proposed MRs and are not identified by any of the existing MRs. Among these faults, eight faults are identified by MR 4, five faults are identified by MR 5, and nine faults are identified by MR 6. So, we can say that MR 4, MR 5, and MR 6 are more effective MRs than MR 1, MR 2, and MR 3 because they have identified additional faults not identified by any of the existing MRs. We have observed the presence of alive mutants in both the existing and proposed MRs. By combining both sets of MRs, the total number of alive mutants was reduced. Hence, it is concluded that the proposed MRs complement the existing MRs effectively.
Conclusions
Testing of Image Processing Applications (IPAs) is a challenging task because of the absence of a test oracle. Metamorphic testing is an efficient method to deal with applications that have a test oracle problem. Metamorphic relations play an important role in metamorphic testing. A metamorphic relation relates two or more inputs with their expected outputs after execution of the properties of the target program. Properties of different image processing operations can also be used as metamorphic relations.

In this research, we have proposed six new MRs for morphological image operations (dilation and erosion). The fault detection rate of the newly proposed MRs, along with the existing MRs, is determined through mutation testing. The effectiveness of the mutation operators is also determined, i.e., which operator is more effective in generating a maximum number of mutants and through which operator a maximum number of mutants are killed. AOR is considered the best operator in both subject programs as it generates and kills the maximum number of mutants. We have compared the results of our proposed approach with the existing techniques for edge detection and morphological image operations. Our results demonstrate that the mutation score of all the MRs of edge detection has improved, whereas the MRs of dilation and erosion have shown improvement in four MRs (out of 8). While comparing our proposed MRs with the existing MRs of the dilation and erosion operations, we have come to the conclusion that the proposed MRs complement the existing MRs effectively, as the proposed MRs are able to find faults that are not identified by the existing MRs.
Table 1. Summary of related work.
Table 2. MRs for edge detection.
Table 3. Existing MRs for dilation and erosion. Mathematical properties: Ref_ord(Output(I)) = Output(Ref_ord(I)); δ_s(I) = ℇ_s(I^c) and ℇ_s(I) = δ_s(I^c), where c is the complement of an image I; δ_s(ℇ_s(I)) ≠ I ≠ ℇ_s(δ_s(I)); Size_obj(δ_s(I)) ≥ Size_obj(I) and Pix_list(I) ⊂ Pix_list(δ_s(I)); Number_obj(δ_s(I)) ≤ Number_obj(I); δ_s(I) = I ⊕ S = S ⊕ I = δ_I(S), whereas ℇ_s(I) ≠ ℇ_I(S); δ_(s+x)(I) = δ_s(I) + x.
Table 4. Proposed MRs for dilation and erosion.
Table 5. Mutation operators used in existing techniques and in the proposed framework.
Table 6. Sources of source code.
Table 8. Effectiveness of mutation operators used in edge detection.
Table 9. Effectiveness of mutation operators used in dilation and erosion operations for proposed MRs.
Table 10. Fault detection rate of edge detection MRs.
Table 11. Fault detection rate of dilation and erosion MRs.
Table 12. Fault detection rate of proposed MRs.
Table 13. Statistics of existing technique and proposed framework.
Table 14. Comparison of existing technique and proposed framework.
Table 15. Comparison of existing technique and proposed framework.
Table 16. Mutants generated against each mutation operator.
2024-04-10T15:42:49.235Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "4a1baedae40af1a9dee0bf5275c7067b636cf13f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2313-433X/10/4/87/pdf?version=1712393170", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "efb2cee3266b88ecbf73ef9183266946c7abf4d2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
17361750
pes2o/s2orc
v3-fos-license
Does prenatal stress alter the developing connectome?
Human neurodevelopment requires the organization of neural elements into complex structural and functional networks called the connectome. Emerging data suggest that prenatal exposure to maternal stress plays a role in the wiring, or miswiring, of the developing connectome. Stress-related symptoms are common in women during pregnancy and are risk factors for neurobehavioral disorders ranging from autism spectrum disorder, attention deficit hyperactivity disorder, and addiction, to major depression and schizophrenia. This review focuses on structural and functional connectivity imaging to assess the impact of changes in women's stress-based physiology on the dynamic development of the human connectome in the fetal brain.

Recent reports suggest that prenatal stress exposure (PNSE) is a global public health problem (13,29,35,36). PNSE has been reported in 10-35% of children worldwide (37). Nearly 8-23% of infants in the United States, or almost 800,000 neonates/year, experience prenatal exposure to depression (38,39), and reports from developing countries support similar numbers (40)(41)(42). Likewise, 1 in 7 to 1 in 13 pregnant women in the United States affirm symptoms of anxiety, while 5.6-14.8% in developing countries suffer a similar diagnosis (40)(41)(42)(43)(44). Since a nationally representative study found that more than half of the pregnant women (65.9%) experiencing depression in the United States went undiagnosed (45), these data may underrepresent the problem. PNSE is believed to both activate the hypothalamic-pituitary-adrenal (HPA) axis and result in epigenetic changes in the developing brain. This review will focus on converging preclinical and clinical imaging data to assess the impact of these changes in women's stress-based physiology on the functional development of the human fetal brain. Prior to reviewing published data, we review common causes of PNSE and methods for measuring the structural and functional connectome. We also provide preliminary human data demonstrating increasing connectivity in limbic system structures across the third trimester of gestation.

STRESS MODELS IN CLINICAL AND TRANSLATIONAL STUDIES
While the relationship between maternal psychosocial stress and adverse pregnancy outcomes has been shown in many studies, it is important to define the nature of the stressor and the subject population (46). Stressors range from depression and anxiety to natural disasters, bereavement and steroid administration. As stressors vary, so may the outcomes. For a listing of outcomes, putative prenatal stressors and representative publications, please see Table 1.

Depression and Anxiety
Although some older studies relied on retrospective recall measures and few evaluated the effects of increasing duration and strength of psychosocial stressors, more recent investigations have employed depression and anxiety as markers of maternal stress. Estimates suggest that 8-23% of women have symptoms of depression during their pregnancy (47). Likewise, 7.7-14% report anxiety, and there are numerous reports of coexisting depression and anxiety in the same pregnant woman at any given time. While depression and anxiety are common proxies for stress, stress in pregnant women does not always coincide with elevated depression or anxiety scales. As such, cases of PNSE may be missed in such analyses.
Finally, depression and anxiety may have independent or additive effects with regard to PNSE, making it difficult to fully disentangle these effects with this model. For a more complete review of this topic, please see Suri et al. (48).

Natural Disasters
Another approach to test the hypothesis that PNSE results in neurobehavioral disorders is the use of natural disasters as "experiments of nature." Unlike depression and anxiety, natural disasters are independent of the subject's genetic background, personality or other confounding characteristics. Disasters strike in a random manner, similar to a randomized controlled experiment, and thus can provide data on prenatal stressors to which a given cohort of pregnant women were exposed (13). Using this strategy, the impact of disasters ranging from hurricanes to terrorist attacks on neurobehavioral outcomes of the offspring has been assessed (13,(49)(50)(51)(52).

Preconception Stress
In contrast, the influence of preconception adversity, and the impact of high cumulative stress on maternal perception of prenatal stress, on the developing connectome are just beginning to be explored (53)(54)(55). Consistent with preclinical studies showing effects of repeated stress on neural atrophy and neurobehavioral effects (56), human studies link altered structure and function of limbic, subcortical, and frontal regions to higher levels of cumulative stress (28,57,58). These data suggest that preconception adversity may shape perception and control of prenatal stress levels and should be considered in investigations of PNSE on neurobehavioral and MRI outcomes.

Prenatal Maternal Stress
PNSE has been widely associated with preterm birth, intrauterine growth restriction, and reduced fetal head growth (50,51,(59)(60)(61). In addition, several studies have reported that increased acute maternal stress is associated with changes in fetal heart rate, activity level, sleep patterns, and higher pulsatility indices in the middle cerebral artery (21,60). PNSE can also be directly measured using prospective data collections in samples of pregnant women with questionnaires, clinical interviews, and biological samples such as cortisol from maternal saliva, blood, or amniotic fluid.

METHODS TO ASSESS CONNECTIVITY USING MRI
Advances in neuroimaging provide important information about microstructural and functional connectivity (62), and offer opportunities to understand the impact of PNSE on the developing connectome (1-3). In the following section, we define measures commonly used in connectomics, with examples shown in Figure 1. Functional connectivity provides information about neural regions that are physiologically functionally coupled, independent of structural connectivity (63). Based on the blood oxygen level dependent signal and derived from time series observations, it assesses "temporal correlations between spatially remote neurophysiological events" (63). High correlation between time courses of two regions or voxels implies high functional connectivity. For the references included in this review, functional MRI (fMRI) data are largely collected in the resting state, or resting state-fMRI (rs-fMRI). Methods to assess rs-fMRI data (64,65) include seed-based connectivity, independent components analysis, and voxel-wise connectivity.
Seed-based connectivity is most frequently used in human studies and involves (i) selecting a predefined region of interest (ROI), (ii) extracting the average time course from this ROI, and (iii) correlating this average time course with the time courses of every other voxel in the gray matter. Independent components analysis is a mathematical modeling technique that parcellates the brain into independent spatial components or networks. These networks can be compared across subject groups or used for later analysis. Voxel-wise connectivity methods are generalizations of seed-based connectivity where many seed connectivity analyses are performed treating each voxel in the gray matter as a unique ROI. As these methods produce a large amount of data (approximately 20,000 seed-connectivity results), seed-connectivity results for each voxel are often summarized to a single number using network theory.

Table 1 (excerpt, outcomes and putative prenatal stressors): …: conjugal conflict (104), depression (105,106), maternal bereavement (55), natural disasters (13,52); Attention deficit hyperactivity disorder: anxiety (61,103,107,108), maternal bereavement (15,55); Bipolar affective disorder: stress (109); Cognition: anxiety (107,110), depression (48), natural disaster (111), psychosocial stress (60,112); Depression: depression (16,48,113), PTSD (113); Internalizing problems: depression (114).

Anatomical connections in the developing brain represent microstructural connectivity (66). Diffusion-weighted imaging (dMRI) assesses the diffusion of water along axons and permits visualization of axonal pathways. By modeling the directional diffusion of water as an ellipsoidal shape, or "tensor", at each voxel in the brain, dMRI permits assessment of white matter tracts. The first eigenvalue, λ1, describes diffusion along the direction of maximal diffusion, while the second and third (λ2, λ3) define diffusivity perpendicular to this principal axis. Radial diffusivity represents the average of λ2 and λ3 and is affected by changes in axon caliber and myelination. Fractional anisotropy (FA) measures the degree to which water diffuses in one direction (along the axon) by relating λ1 to λ2 and λ3, and is the most common measure used to assess axonal integrity. High values of FA suggest more highly organized, strongly myelinated tracts. The three main approaches to analyzing dMRI data include ROI quantification, tract-based analysis and tractography, and voxel-based morphometry (VBM). ROI quantification is frequently used in human studies investigating the impact of PNSE on the developing connectome. In this method, one or more ROIs are selected a priori and the average FA across all voxels in the ROI is calculated. Typically, ROIs are major white matter tracts. Tractography is a modeling technique used to identify these tracts. Once identified, they can be analyzed using graph theory or ROI analyses. In VBM analysis, FA data from all subjects are transformed into a common space and compared across each voxel of the white matter.

Finally, although not direct measures of the connectome, we include studies assessing brain morphometry, including cortical volumes and thickness. Morphological features of different brain regions are not independent of those of other areas, and the brain shows a high level of coordination between different structures (67). This coordination of morphological features is often referred to as anatomical covariance (67-69) and resembles functional and structural connectivity.
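To illustrate the measures defined above, the following is a minimal sketch of (i) a seed-based connectivity map computed from a 4-D fMRI array and a binary ROI mask, and (ii) fractional anisotropy computed from the three tensor eigenvalues using the standard definition. The array names and shapes are hypothetical, and the sketch is not a description of the pipelines used in the studies reviewed here.

```python
# Illustrative seed-based connectivity and FA computation on hypothetical arrays.
import numpy as np

def seed_connectivity(bold, roi_mask):
    """bold: 4-D array (x, y, z, time); roi_mask: 3-D boolean array.
    Returns a 3-D map of correlations between the ROI-average time course
    and every voxel's time course."""
    seed_ts = bold[roi_mask].mean(axis=0)                  # average time course in the ROI
    vox = bold.reshape(-1, bold.shape[-1])                 # voxels x time
    vox_z = (vox - vox.mean(1, keepdims=True)) / (vox.std(1, keepdims=True) + 1e-12)
    seed_z = (seed_ts - seed_ts.mean()) / (seed_ts.std() + 1e-12)
    corr = vox_z @ seed_z / bold.shape[-1]                 # Pearson correlation per voxel
    return corr.reshape(bold.shape[:3])

def fractional_anisotropy(l1, l2, l3):
    """FA from the three diffusion tensor eigenvalues (arrays or scalars)."""
    md = (l1 + l2 + l3) / 3.0                              # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(1.5 * num / (den + 1e-12))
```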
Figure 1. (a) Functional connectivity. Functional connectivity measures the synchrony or correlation of brain activity between two or more regions of the brain. Common methods include seed connectivity, independent components analysis, and voxel-wise connectivity. Seed-based connectivity measures functional connectivity between a predefined region of interest (ROI, or seed, shown in green) and the rest of the gray matter. Regions of positive or negative functional connectivity are shown as red and blue regions. Independent components analysis is a mathematical modeling technique that parcellates the brain into independent spatial components or networks. Example components shown are the motor network and the default mode network. Voxel-wise connectivity methods involve correlating the time course of every voxel in the gray matter with the time course of every other voxel in the gray matter. Connectivity for each voxel is often summarized to a single number using network theory to highlight so-called hub regions in the brain. (b) Structural connectivity. Structural connectivity measures anatomical white matter connections linking different cortical and subcortical regions. Common methods include ROI quantification, tract-based analysis and tractography, and voxel-based morphometry. For ROI quantification, average FA across all voxels in a priori ROIs (shown in white outline and overlay) is compared across study groups. Tractography is a modeling technique used to identify white matter tracts used in further analyses. In VBM analysis, FA data from all subjects are transformed into a common space and a comparison across each voxel of the white matter is performed.

PRECLINICAL DATA SUPPORT THE IMPACT OF PNSE ON DEVELOPING CONNECTOME
Across multiple species and numerous time points, converging data suggest that gestational stress influences brain development. Similar to human subjects, the offspring of numerous species exposed to PNSE demonstrate increases in anxiety and depression, impaired spatial memory and alterations in cognition (70,71). Systematic experimental investigations using standardized animal models and outcome measures (6,72,73) address not only the impact of PNSE on maternal endocrine functions and the "re-programming" of the fetal HPA axis (74)(75)(76)(77)(78)(79), but also suggest that changes in corticogenesis contribute to the long-lasting effects on brain and behavior (Table 2) (74,80,81).

MRI STUDIES OF PNSE AND THE DEVELOPING BRAIN
While the neural correlates of acute and cumulative postnatal stress in human subjects are active fields of study, MRI research investigating PNSE in human subjects is just starting to be explored. As described below and shown in Table 3, many investigators have interrogated the impact of PNSE on the limbic system and connected regions in the developing brain.

Studies During Infancy
Recent studies suggest a significant relationship between antenatal maternal depression and/or anxiety and structure and function in the developing brain. Rifkin-Graboi performed structural MRI and dMRI on 157 nonsedated 6-14-day-old newborns whose mothers participated in the GUSTO study (Growing Up in Singapore Towards Healthy Outcomes), a cohort of Asian women enrolled during the first trimester of pregnancy.
Socioeconomic status, prenatal exposures, pregnancy measures, and birth outcomes were recorded, and imaging data were analyzed only for those infants who met the following criteria: (i) gestational age (GA) ≥37 wk, (ii) birth weight (BW) >2,500 g, and (iii) Apgar 5 min > 7. The Edinburgh Postnatal Depression Scale (EPDS) and the State Trait Anxiety Inventory (STAI) were administered to all women at 26 wk of pregnancy. Adjusting for household income, maternal age and smoking exposure, postmenstrual age (PMA) at MRI, and BW, Rifkin-Graboi (24) found significantly lower FA but not volume in the right amygdala in infants of mothers with high EPDS scores. This suggests a significant relationship between PNSE and microstructure of the right amygdala, a region associated with stress reactivity and vulnerability for mood disorders. Similarly, Qiu interrogated the GUSTO cohort to examine the consequences of prenatal exposure to maternal anxiety on neonatal development of the hippocampus, a structure critical for stress regulation (27). Entry criteria for this analysis differed from those of Rifkin-Graboi's 2013 study, and included both term and late preterm infants who met the following criteria: (i) GA ≥ 35 wk; (ii) BW > 2,000 g; and (iii) Apgar 5 min > 9. There were 175 GUSTO infants available for this analysis; 42 underwent repeat scans at age 6 mo, and 35 (83%) had usable data. In Qiu's analysis, antenatal maternal anxiety did not influence bilateral hippocampal volume at birth, but children of women with increased anxiety during pregnancy showed slower growth of both the left and right hippocampus between birth and age 6 mo. Subsequently, evaluating 21 GUSTO infants with high PNSE (i.e., maternal STAI > 90) and 34 with low PNSE (i.e., maternal STAI < 70), Rifkin-Graboi showed that antenatal anxiety predicted decreases in FA of regions important for cognitive-emotional responses to stress (i.e., right insula and dorsolateral prefrontal cortices (PFC)), sensory processing (right middle occipital cortex), and socio-emotional function (i.e., right angular gyrus, uncinate fasciculus, posterior cingulate, and parahippocampus) at age 5-17 d (25). Of note, infants were eligible for this analysis if they met the following criteria: (i) GA ≥ 36 wk; (ii) BW > 2,000 g; and (iii) Apgar 5 min > 7. Finally, Scheinost (82) and Qiu (26) investigated prenatal depression/anxiety exposure and amygdala connectivity using rs-fMRI in preterm neonates at term equivalent age and infants at age 6 mo, respectively. These data showed that, in the neonatal period, the amygdala is functionally connected to subcortical and posterior cortical regions, and, by age 6 mo, is connected to widespread networks subserving emotional regulation, memory, and social cognition. In preterm neonates, Scheinost showed that PNSE reduces amygdalar-thalamic connectivity and is additive to the effects of preterm birth. Using 24 GUSTO infants, Qiu showed that infants born to mothers with higher prenatal depressive symptoms had greater rs-fMRI connectivity of the amygdala with the left temporal cortex, insula, anterior cingulate (ACC), medial orbitofrontal, and ventromedial PFC. These networks are reported in children and adults with depression, suggesting that rs-fMRI data may foreshadow future neuropsychiatric disease.

Studies During Childhood
Studies of older children also suggest that maternal anxiety is associated with specific changes in brain morphology.
Buss evaluated children ages 6-10 y whose mothers had been enrolled in a prospective study of pregnancy at the University of California, Irvine or Cedars Sinai Hospital in Los Angeles, CA, between 1998 and 2002 (83). Families were contacted again in 2007 and invited to participate in a follow-up study of their children to assess the influence of PNSE on brain development. At the time of this report, 35 mother-child dyads had both usable MRI data and complete maternal data. VBM on these children demonstrated that exposure to high maternal stress at 19 wk of gestation correlated with gray matter reductions in the PFC, premotor cortex, medial temporal lobe, lateral temporal cortex, post-central gyrus, and cerebellum extending to the middle occipital and fusiform gyri. Although the numbers are small and assessments of postnatal stress exposure were not included in the authors' analyses, high pregnancy stress at 25 and 31 wk of gestation was not associated with local reductions in gray matter volume, suggesting the importance of earlier exposure to gestational psychological stress. Similarly, Sarkar performed dMRI studies to assess both FA and perpendicular diffusivity (D perp) in 22 children ages 6-9 y whose mothers were retrospectively assessed for PNSE when the children were age 17 mo (84). For these children, PNSE was positively correlated with right uncinate FA and negatively with right uncinate D perp, while PNSE was not associated with control tract properties. In addition, since reduced cortical volume and thickness have both been associated with a history of depression in adult populations (85,86), Sandman measured cortical thickness in 81 school-aged children whose mothers had participated in the longitudinal study described above (83); all were prospectively evaluated for depression at 19, 25, and 31 wk of gestation (47). Prenatal maternal depression exposure was associated with thinning in the right frontal lobe, and the strongest association was with exposure at 25 wk gestation. Morphological changes were primarily found in the superior, medial orbital, and frontal pole regions of the right PFC, consistent with data in adults with depressive symptomatology (85,86). Further, the significant association between prenatal depression exposure and child externalizing behavior in this cohort of children was mediated by these changes.

Studies During Adulthood
Finally, although MR studies of young adults with early life stress exposures are just beginning to emerge (87-89), Favaro explored the relationship between PNSE, cortical volumes and rs-fMRI in a sample of 35 healthy women aged 14-40 y (90). The sample was composed of volunteers whose mothers were administered a semi-structured interview assessing stress-related events during pregnancy. Subject scores were assigned based on interview data and used for MRI analyses. For these women, greater PNSE was associated with decreased gray matter volume in the left medial temporal lobe and both amygdalae. Strength of PNSE was positively correlated with rs-fMRI connectivity between the left medial temporal lobe and pre-genual cortex, and connectivity between the left medial temporal lobe and left medial-orbitofrontal cortex partially explained variance in depressive symptoms in this cohort.

EMERGING FACTORS
As studies begin to investigate the impact of PNSE on the connectome, several factors from both preclinical and clinical data have emerged as key considerations for future studies.
These factors include defining the normal developmental trajectories of the fetal connectome, the type, timing and duration of PNSE, and the fetal sex. Other factors not addressed within this review include assessing the role of paternal preconception stress and identifying the molecular signatures of PNSE.

Fetal Networks
"Trajectory analysis is central to the assessment of the impact of PNSE on the developing brain" (2). Fetal rs-fMRI is an emerging technology obtaining information about neural network development in utero by directly measuring the fetal brain (91). These methods are needed to investigate the prenatal connectome as it develops and pinpoint how and when PNSE alters its development. Using cross-sectional functional connectivity data between 21-38 wk of gestation, fetuses show evidence of both long-range functional connectivity and the emergence of neural networks across the third trimester, mimicking those in older children and adults (92,93). However, both longitudinal and cross-sectional data are needed to more fully characterize the developmental trajectories of PNSE. During the 3rd trimester, left amygdala connectivity is first characterized by local circuitry, then begins to connect to ipsilateral regions in the frontal and temporal lobes, and finally develops connections to the contralateral amygdala (Figure 2). These important cross-hemispheric connections between the right and left amygdala develop during the end of the third trimester, likely increasing the vulnerability of this circuitry to PNSE (55).

Timing of Stress Exposure
There is increasing recognition that fetal stress exposure has a particularly pronounced impact during early periods of corticogenesis, commonly known as critical periods in the developing brain. Critical periods refer to epochs characterized by both increasing plasticity and greater vulnerability; thus, these are times when the developing brain may be most easily modified in either favorable or unfavorable directions. Critical periods are thought to be environmentally sensitive, and many authors believe they underlie the developmental origins of neurobehavioral disorders such as ASD. Typically developing fetuses with PNSE during the middle second and third trimesters of gestation are reported to be at the greatest risk for neurobehavioral disorders (13,52). Reviewing Swedish birth registries, Class examined associations between PNSE in 738,144 offspring born in 1992-2000 for childhood outcomes and 2,155,221 offspring born in 1973-1997 for adult outcomes. Although data for GA are not available, third trimester bereavement stress significantly increased risk of both ASD and attention deficit hyperactivity disorder (55). Similarly, children who had been exposed to tropical storms during gestation months 5-6 or 9-10 had 3.8 times greater risk of developing ASD than children who had been exposed to the same storms, in the same place, but during other months of gestation (52). Duration of maternal stress may also play a role. Analyzing data from 4,682 live births, Latendresse reported that children of mothers with the longest periods of prenatal depression exposure experienced more than seven times increased risk for pervasive developmental disorder when compared to children with no PNSE (53). In contrast, in the GUSTO study, mothers were assessed for gestational depression and/or anxiety at 26 wk, and MRI measures were correlated with these data (24)(25)(26)(27).
In addition, Sandman performed depression screening on 82 pregnant mothers at 19, 25, and 31 wk gestation and found that antenatal exposure to maternal depression at 25 wk gestation was significantly correlated with cortical thinning in 24% of the frontal lobes in the offspring (47). Finally, although cortisol levels are not available for subjects in the prior MRI studies, high levels of maternal cortisol at 15 wk (but not 19, 25, 31, or 37 wk) of gestation were associated with amygdala volumetric changes in girls but not in boys (33). Since high levels of cortisol are believed to reprogram the fetal HPA axis and maternal stress has been reported to downregulate 11β-hydroxysteroid dehydrogenase, the placental enzyme which metabolizes cortisol (75), future studies of maternal psychological stress during gestation should consider longitudinal assessments of maternal cortisol in tandem with fetal neuroimaging.

Sex Differences in Prenatal Stress Outcomes
The link between PNSE and outcomes may be moderated by fetal sex. The source of sex differences in early development is unclear but may include placental functioning, exposure to adrenal hormones and testosterone, and an assortment of epigenetic mechanisms (94)(95)(96)(97). Recently proposed fetal pathways also include sex-dependent responses of the transcriptome (6,98-100), naturally occurring sexually-dimorphic processes mediating neuron-glial interactions (101), and differential responses of target regions in the developing brain (102). Thus, while PNSE may have consequences for both males and females, the specificity of effects may differ. To the best of our knowledge, however, only a single study has reported sex differences in MRI outcome measures. These data suggest that higher cortisol levels at 15 wk of gestation were associated with larger right amygdala volumes and more affective problems in female but not male offspring (33).

MECHANISMS OF PRENATAL STRESS AND THE CONNECTOME
Taken together, published studies of PNSE suggest both proximate and long-lasting influence on the connectome. However, mechanisms of how PNSE alters the developing connectome must be explored. Mechanistic studies have focused on the HPA axis, candidate genes, and epigenetic pathways (see Table 4).

MOVING FORWARD: INVESTIGATION OF THE CLINICAL PROBLEM, CHANGES IN CARE
Converging data suggest that PNSE alters the developing connectome. As noted by Sporns, "The placement of brain connectivity as an intermediate phenotype between environmental exposures and behavior makes it an important target for studies that link networks across levels from behavior to molecules, neurons and emerging networks in the developing brain" (62). To better address the impact of PNSE on the connectome, longitudinal studies of maternal/fetal dyads with and without stress exposure are needed. Such investigations would benefit from repeated assessments of maternal stress in order to identify type, time of onset, and duration of PNSE and correlate these data with sequential imaging. In addition, preconceptional stress may influence offspring outcome, and pregnant women should be surveyed for cumulative stress at the time of study enrollment. Likewise, both genetic variants and epigenetic changes may contribute to outcome in the offspring, and consideration should be made to include these data in PNSE-offspring outcome analyses.
Finally, longitudinal fetal imaging will provide important information about target regions, and developmental trajectory analyses are well suited for interrogation of the developing connectome. These strategies can be used to detect developmental disturbances of the connectome that may underlie the emergence of neurobehavioral disorders.
scTyper: a comprehensive pipeline for the cell typing analysis of single-cell RNA-seq data Background Recent advances in single-cell RNA sequencing (scRNA-seq) technology have enabled the identification of individual cell types, such as epithelial cells, immune cells, and fibroblasts, in tissue samples containing complex cell populations. Cell typing is one of the key challenges in scRNA-seq data analysis that is usually achieved by estimating the expression of cell marker genes. However, there is no standard practice for cell typing, often resulting in variable and inaccurate outcomes. Results We have developed a comprehensive and user-friendly R-based scRNA-seq analysis and cell typing package, scTyper. scTyper also provides a database of cell type markers, scTyper.db, which contains 213 cell marker sets collected from literature. These marker sets include but are not limited to markers for malignant cells, cancer-associated fibroblasts, and tumor-infiltrating T cells. Additionally, scTyper provides three customized methods for estimating cell-type marker expression, including nearest template prediction (NTP), gene set enrichment analysis (GSEA), and average expression values. DNA copy number inference method (inferCNV) has been implemented with an improved modification that can be used for malignant cell typing. The package also supports the data preprocessing pipelines by Cell Ranger from 10X Genomics and the Seurat package. A summary reporting system is also implemented, which may facilitate users to perform reproducible analyses. Conclusions scTyper provides a comprehensive and user-friendly analysis pipeline for cell typing of scRNA-seq data with a curated cell marker database, scTyper.db. Background Single-cell RNA sequencing (scRNA-seq) technology has enabled researchers to profile transcriptomes at single-cell level [1,2]. However, there are a number of challenges in the analysis of scRNA-seq data and its outcomes; one of the key challenges is the identification of cell types from the transcriptome data. Currently, various cell typing methods have been introduced using different workflows and data types [2][3][4][5][6]. Cell typing by estimation of the expression level of cell marker genes is generally used by researchers for convenience. With time, enriched resources for cell type markers that have been generated from different sources, including single cell sequencing and experimental studies, are becoming available [7,8]. Thus, cell typing using these inconsistent markers has become more timeconsuming and an error-prone process. Thus far, there is no standard practice for cell typing and use of different cell markers and cell typing algorithms often results in inconsistent cell type assignment. To overcome this issue, a collection of versatile cell markers from previous studies is needed for cell typing. In fact, there is a comprehensive cell marker database, CellMarker, which provides manually curated cell markers and their information [9]. However, this database does not include the recent studies, especially on tumor tissues, even though many tumor-associated cells have been characterized recently [10][11][12]. In this study, we developed scTyper, an R package that provides a cell marker database, scTyper.db, as well as a flexible pipeline for cell typing analysis of scRNA-seq data with three different methods. Users can customize the cell typing pipeline and easily use the pre-collected cell marker databases. 
Implementation scTyper is an R package that can be executed by a single command. Experienced users can customize the pipeline stepwise by manipulating the parameters. Besides the cell typing process, scTyper also supports pipelines for quality control and sequence alignment, which are performed by FASTQC (https://www.bioinformatics.babraham.ac.uk/ projects/fastqc) and Cell Ranger [13], respectively. Data normalization, clustering, and visualization processes are also supported by the wrapper functions for 'Seurat' R package [14]. Results Overall workflow of scTyper scTyper provides an automated and customizable pipeline for the cell typing of scRNA-seq data ( Fig. 1). For user convenience, the package has been supported with raw data preprocessing pipelines by wrapper functions for FASTQC and Cell Ranger from 10X Genomics; the preprocessing includes quality control, sequence alignment and quantification of raw sequencing data. These processes can be executed by a single command. Data processing steps for log transformation, normalization, and clustering are performed by the wrapper functions for Seurat, generating a Seurat object that is used as an input file in the subsequent processes. After data processing, cell typing can be performed using the pre-pooled cell marker database, scTyper.db, and a previously reported cell marker database, Cell-Marker [9]. Users can choose the cell markers of interest from these databases and apply them to subsequent cell typing. The expression of the cell marker sets can be estimated by three different methods, nearest template prediction (NTP) [15], pre-ranked gene set enrichment analysis (GSEA) [16], and average gene expression values. For malignant cell typing, users can utilize the inferred DNA copy numbers using the inferCNV R package with modifications [17]. Overall, scTyper is comprised of the modularized processes of "QC", "Cell Ranger", "Seurat processing", "cell typing", and "malignant cell typing". These processes can be customized by manipulating the parameters for each process. If users want to perform only the cell typing process and a preprocessed input file with Seurat object is already prepared, the processing steps of "QC", "Cell Ranger" and "Seurat processing" can be skipped by setting the parameters "qc", "run.cellranger" and "norm.seurat" to "FALSE". The processes and their parameters implemented in scTyper are summarized in Supplementary Table 1 (more details can be found in the package manual). Finally, the results and the executed processes are automatically documented as a report. The report summarizes the processing steps, cell typing and clustering results, and visualizes the results with plots. scTyper.db, a manually curated cell marker database scTyper.db is pre-installed in the package that is comprised of manually curated 213 cell marker gene sets and the 121 cell types collected from 22 studies (Supplementary Table 2). We collected the cell markers for cancer-associated fibroblasts (n = 21), tumor-infiltrated lymphocyte (n = 33), tumor-associated macrophage (n = 4), and malignant cells from different tissue types (n = 13) ( Fig. 2a-b and Supplementary Table 2). Immune repertoires of 149 immune cell markers were also included in the database. For example, there were 62 T cell marker sets with different cell transition states such as CD4+, CD8+, regulatory T, and exhausted T cells. We have used a unified nomenclature to label the marker gene sets in the database. 
For example, a cell marker label "Puram.2017.HNSCC.TME" was designated by concatenating the first author name of the publication (Puram), publication year (2017), tissue type/cancer type (HNSCC), and category of cell composition (TME, tumor microenvironment). Using this nomenclature, users can easily search the cell markers of interest. Detailed information about the cell markers such as data source, PubMed ID, species name, tissue type, study detail, etc. was also provided in the "extdata" directory. In addition to scTyper.db, we also implemented the previous, CellMarker database, which comprised 2867 cell type marker sets and 467 cell types from 1764 studies (Fig. 2c and Supplementary Table 3). Cell marker expression estimation and cell typing In the current version of scTyper, three different methods are implemented to estimate the expression of cell marker sets, including NTP, pre-ranked GSEA, and average expression values (Fig. 1). NTP is a class prediction method to estimate the proximity to the cell type templates by using a list of gene sets and calculating its distance to the test data [18]. Enrichment score (ES) is calculated by the pre-ranked GSEA method (https://www.gsea-msigdb.org/gsea/index.jsp). Users can choose the level for cell typing from the options, "cell-level" or "cluster-level" by setting the value of the parameter "level" to "cell" or "cluster", respectively. For malignant cell typing, inferred DNA copy numbers are estimated by the inferCNV R package [17] with an improved modification. The group of genes with same function can be located within their proximity on a chromosome, resulting in the construction of a gene cluster. These gene clusters can have similar expression levels and thus can be falsely inferred to have regional DNA copy number alteration. Therefore, we have added a gene filtering step in the inferCNV process to remove gene clusters from the inferCNV analysis. Next, we benchmarked the performance of different cell typing methods implemented in scTyper using a test data set (GSE103322, 5902 cells from head and neck squamous cancer) [18]. Cell typing was performed with 6 different parameters using all the 3 cell typing methods for comparison with or without the application of inferCNV. The cell markers of Puram.2017.HNSCC.TME were used. As expected, we observed that the cell types were assigned differently based on the methods applied (Fig. 3a). For instance, applying the inferCNV method could identify 529 additional malignant cells that were assigned to be non-malignant cells in the original Puram study (Fig. 3b, c). During inferCNV analysis, 5 gene clusters (including 180 genes) were filtered out; these were identified by performing gene set enrichment analysis of genes residing in the neighboring chromosomal regions (1 Mb) (P < 0.05). These results show that the combined analysis of the cell marker expression and CNV inference is greatly helpful in appropriately interpreting the cell typing results. In the performance test, cell typing of the test data (5902 cells) with NTP and inferCNV utilized 2.25 h of runtime under the computing environment of a single CPU core (Intel Xeon, 2.40 GHz) and 500 M RAM (Supplementary Fig. 1). Most of the runtime was used by the inferCNV (1.43 h) and NTP (0.32 h) processes. We also tested a larger test set with 54,239 cells (in-house data), that utilized 20.63 h of runtime. Preprocessing steps for raw data ("QC" and "Cell Ranger") were not included in the performance test. 
Parallel computation using multiple CPU cores (up to 20) could enhance the performance, improving the runtime to 0.47 h for 5902 cells and 5.47 h for 54,239 cells. scTyper generates an automatic summary report document (Supplementary Data); this document summarizes each step of the processes, including the parameters used, the results of cell typing and clustering, and visualization plots (heatmaps and UMAP/t-SNE plots). This may help users reproduce their analysis workflows.

Discussion
In this study, we provide a comprehensive and flexible pipeline for cell typing of scRNA-seq data, with manually curated, pre-installed cell marker databases and three different cell typing methods. Customization or updating of the cell marker database can be easily accomplished by replacing the 'sigTyper.db.txt' file in the "extdata" directory with a newer one. The package allows users to apply and compare different cell typing methods. The modularized design of the pipeline enables users to modify it at each step, facilitating appropriate interpretation of the data. The current version of scTyper has some limitations. The package does not include cell typing methods that use reference scRNA-seq data instead of cell markers [4,5]. Diverse clustering and dimension-reduction methods could be applied to the analysis pipeline, but the current version of scTyper only supports the functions provided by the "Seurat" package, such as "PCA" or "UMAP/t-SNE".

Conclusions
We developed scTyper, a flexible and user-friendly pipeline for cell typing of scRNA-seq data. This package can help users perform reproducible and comprehensive cell typing.

Availability and requirements
Project name: scTyper. Programming language: R. Other requirements: R 3.5 or higher. License: GPL2. Any restrictions to use by non-academics: None.
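As a concrete, self-contained illustration of the average-marker-expression approach to cluster-level cell typing described above, the short R sketch below assigns each cluster the cell type whose markers have the highest mean expression. The function name, marker list, and toy matrix are all illustrative and are not part of the scTyper API.

# Toy illustration: cluster-level cell typing by average marker expression
assign_cell_types <- function(expr, clusters, markers) {
  cluster_ids <- sort(unique(clusters))
  # Mean expression of each marker set within each cluster
  scores <- sapply(markers, function(genes) {
    genes <- intersect(genes, rownames(expr))
    sapply(cluster_ids, function(cl) mean(expr[genes, clusters == cl, drop = FALSE]))
  })
  rownames(scores) <- cluster_ids
  # Label each cluster with its highest-scoring cell type
  data.frame(cluster = cluster_ids,
             cell_type = colnames(scores)[max.col(scores)],
             stringsAsFactors = FALSE)
}

# Toy normalized expression matrix (genes x cells) with 3 clusters of 10 cells each
set.seed(1)
genes <- c("EPCAM", "KRT18", "PTPRC", "CD3D", "COL1A1", "DCN")
expr <- matrix(rnorm(length(genes) * 30, mean = 1), nrow = length(genes),
               dimnames = list(genes, NULL))
clusters <- rep(c("0", "1", "2"), each = 10)
expr[c("EPCAM", "KRT18"), clusters == "0"] <- expr[c("EPCAM", "KRT18"), clusters == "0"] + 3
expr[c("PTPRC", "CD3D"), clusters == "1"] <- expr[c("PTPRC", "CD3D"), clusters == "1"] + 3
expr[c("COL1A1", "DCN"), clusters == "2"] <- expr[c("COL1A1", "DCN"), clusters == "2"] + 3

markers <- list(Epithelial = c("EPCAM", "KRT18"),
                T_cell = c("PTPRC", "CD3D"),
                Fibroblast = c("COL1A1", "DCN"))
assign_cell_types(expr, clusters, markers)

In scTyper itself, this scoring step can be run at the cell or cluster level (the "level" parameter) and combined with the inferCNV-based malignant cell typing described above.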
The pedagogical value of near-peer feedback in online OSCEs During the Covid-19 pandemic, formative OSCE were transformed into online OSCE, and senior students (near peers) substituted experienced clinical teachers. The aims of the study were to evaluate quality of the feedbacks given by near peers during online OSCEs and explore the experience of near-peer feedback from both learner’s and near peer’s perspectives. All 2nd year medical students (n = 158) attended an online OSCE under the supervision of twelve senior medical students. Outcome measures were 1) students’ perception of the quality of the feedback through an online survey (Likert 1–5); 2) objective assessment of the quality of the feedback focusing on both the process and the content using a feedback scale (Likert 1–5); 3) experience of near peer feedback in two different focus groups. One hundred six medical students answered the questionnaire and had their feedback session videotaped. The mean perceived overall quality of senior students’ overall feedback was 4.75 SD 0.52. They especially valued self-evaluation (mean 4.80 SD 0.67), balanced feedback (mean 4.93 SD 0.29) and provision of simulated patient’s feedback (mean 4.97 SD 0.17). The overall objective assessment of the feedback quality was 3.73 SD 0.38: highly scored skills were subjectivity (mean 3.95 SD 1.12) and taking into account student’s self-evaluation (mean 3.71 (SD 0.87). Senior students mainly addressed history taking issues (mean items 3.53 SD 2.37) and communication skills (mean items 4.89 SD 2.43) during feedback. Participants reported that near peer feedback was less stressful and more tailored to learning needs– challenges for senior students included to remain objective and to provide negative feedback. Increased involvement of near peers in teaching activities is strongly supported for formative OSCE and should be implemented in parallel even if experience teachers are again involved in such teaching activities. However, it requires training not only on feedback skills but also on the specific content of the formative OSCE. Introduction Feedback is an essential component of medical education. Formative feedback is defined as an information given to the learner with the intention of adjusting his or her thinking or behavior for the purpose of improving learning [1]. It is the most widely used approach to stimulate learning and development at all levels of clinical expertise development [2]. It is used in formative objective structured clinical examinations (OSCEs) to help medical students improve their clinical skills such as history taking, physical examination and communication skills [3]. It is also widely used in the workplace, with the global shift towards competency-based curricula and programmatic assessment during both pre-graduate, graduate and continuous training [4][5][6]. In order to be effective, feedback should be specific, timely, and credible. It should be based on observable behavior and in response to a problem or a task, and promote a specific and actionable goal [1,7,8]. Effective feedback is not about just delivering a message; it is described a conversation in which both the supervisor and the student collaboratively reflect on his/her performance and how to improve it [9,10]. Feedback effectiveness also depends on students' individual receptiveness which is in turn influenced by their motivations, fears, expectations as well as the credibility of the feedback provider [11]. 
They will all impact on students' acceptance and interpretation of feedback [12,13]. Credibility is a broad construct and refers to dimensions such as trustworthiness, accuracy, believability, reliability, intention of the feedback provider but also to features such as age, gender, experience, expertise and professional background [13]. Some studies evaluated the quality of feedback according to the clinical teachers' features (gender, seniority, and specialty) [14][15][16][17][18]. Do peer or near peer students are credible as feedback providers and provide high quality feedback? A near-peer tutor is "a trainee one or more years senior to another trainee" while a peer-tutor is one at the same level [19]. Peer and near-peer teaching (NPT) has become an increasingly recognized method for teaching and learning within medical education [20]. It is aligned with social constructivism which promotes learning in a social setting where individuals help each other through a shared culture of knowledge [21]. It is also fits cognitive congruence theory as near-peer teachers usually better understand learner needs since the gap in knowledge between a senior and a junior student is smaller than between an experienced tutor and a student [22,23].In peer-assisted learning in medical education, the most common topics are the physical examination skills and OSCE [20]. A recent scoping review about peer assessment in OSCE revealed that peer examiners provided valuable feedback [24]. However, in most studies, feedback quality was assessed through students' perceptions using questionnaires or Likert scales but was not objectively assessed. The Covid-19 pandemic had two major impacts on OSCEs. First, in several settings, face to face OSCE were transformed into online OSCEs [25,26]. Peer and near peer involvement in teaching increased and gained visibility [27,28]. In our setting, the in-person formative OSCEs were transformed into online OSCEs, and senior medical students replaced the experienced clinical teachers who were no longer were available to supervise formative OSCEs given the amount of clinical work at the hospital. The aims of the study were 1) to evaluate the perceived quality of feedback given by near peers during an online OSCEs 2) to objectively assess the quality of near peer feedback and compare it with the quality of feedback given by experienced clinical teachers during an face to face OSCE a few years earlier; 3) to explore medical students junior (year (Y) 2 learners) and senior (Y4-5 tutors) experiences of receiving from and giving feedback as near peers. Design and setting A prospective mixed method study was conducted to investigate the quality and added value of near peer feedback at the Faculty of Medicine, Geneva University, Switzerland. The Geneva Faculty of Medicine offers a 6-year curriculum divided into 3 pre-clinical years (bachelor) and 3 clinical years (master) to 158 medical students (the total n of our students in the medical school). Clinical skills training occurs during the 2 nd and 3 rd bachelor years. 
During these two years, medical students have the opportunity to practice history taking, physical examination and communication skills during four formative OSCEs focusing successively on different topics (abdominal, cardiac, respiratory, neurological) which are usually organized in three formats: 1) a direct observation format -direct observation of the student -standardized patient interaction followed by an oral feedback given by a clinical teacher 2) a video based format -a delayed oral feedback given by a clinical teacher based on the observation of the videotaped student-standardized patient interaction; 3) a group format-direct observation followed by an oral feedback involving 1 clinical teacher in a general practice setting and 3 students -three students interact consecutively with a standardized patient mimicking a different clinical problem, followed by a group (clinical teacher, peer and simulated patient) feedback. Clinical teachers are generally 20-30 experienced physicians who have both clinical and teaching activities. As the Covid-19 pandemic outbreak occurred, the medical school closed its doors mid of March 2020: face to face seminars were cancelled, clinical teachers, mostly working in hospital settings became unavailable and the clinical skills training team had to adapt the formative OSCE to such constraints. Participants All 2 nd year medical students were invited to attend the new online version of the 2 nd formative OSCE (n = 158). Twelve senior medical students (4 th and 5 th year) were asked to replace the clinical teachers. They were part of near peer tutors already involved in the teaching of physical examination during years 2 and 3 (17 seminars). Procedure The formative OSCE station focused on a cardiac topic. All medical students received a link to attend an online formative OSCE (via zoom, a videoconference platform providing face views) [29] during which students by group of two successively interacted with the patient mimicking two different clinical problems (stable angina, heart failure). During the 20 min, they were asked to collect information, describe loud the different steps of the physical examination, briefly explain their clinical hypothesis and end the encounter. The encounter was followed by a 20-min group feedback including senior medical student, peer and simulated patient feedback before the next student started interacting with the patient. Senior medical students received a one-hour interactive training on how to give feedback and a one-hour training on the learning objectives of the OSCE and how to use the online platform prior to the online OSCE. They received a checklist form to assess the different items expected for history taking, physical examination and communication skills. All feedback sessions were videotaped. After the session, medical students received an online questionnaire including an information and consent sheet to be signed. Outcomes measures 1. Online questionnaire to students on perceived quality of feedback Online questionnaire on the perceived quality of the feedback -after the formative OSCE; students received a 15-item online questionnaire (Likert scale [1][2][3][4][5] evaluating the perceived quality of the feedback received. The questions addressed the usefulness of the feedback for improving clinical skills (history taking, physical examination and communication) as well as on different elements of the feedback process. 
The items derived from a grid used from a previous study that confirmed its ability to discriminate between poor and good feedback givers [16]. The content of the grid was developed on the basis of a literature review on feedback principles and strategies [16,[30][31][32][33]. 2. Objective assessment of the quality of feedback (analysis of videotaped feedback) The feedback quality -the quality of the feedback given exclusively by the senior student was objectively assessed through the analysis of the videotaped feedback sessions using a feedback scale focusing both on the content and process of feedback. It included seven content items about history taking, physical examination and communication elements as well as elaboration on clinical reasoning and communication/professionalism issues. Elaboration referred to whether the senior student addressed in facilitative or directive way the importance or relevance of collecting some items during the feedback session (e.g. "Why it is important to ask about thromboembolic risk factors in a woman complaining with chest pain?" or "Do not forget to explore patients' beliefs and emotions: it will influence the way you will explain the diagnosis!"). The 14 feedback process items derived from a validated feedback scale used in previous studies [16,30] that follows the structure of the MAAS-Global, a well-known communication skill coding instrument, given the close similarities existing between a clinical encounter and a teaching encounter [30,34]. These instruments included specific elements of the feedback process, 3 transversal dimensions (empathy, pedagogical effectiveness, structure) as well there is 1 for the global rating (Table 1). In order to analyze the quality of videotaped feedback, we used a coding book that provides, for each feedback item, the precise definition and examples of the five anchors of the Likert scale (1 to 5). This coding book is available upon request. NJP, VM and LM first independently coded the first 12 feedback sessions using the coding book and discussed their coding in order to ensure a correct understanding of the coding definitions. Then, LM coded the remaining videotaped feedback sessions. Interrater reliability of coding, measured by blind coding (NJP) of 10% of the videotaped sessions, was good (intraclass correlation coefficient =0.88). Both the questionnaire on the perceived quality of the feedback and the feedback scale had been used in a previous study in 2013 that included 2nd and 3rd year medical students and clinical teachers [16]. It was used to evaluate whether the content and process of feedback varied according to the tutors' profile (generalist versus specialist clinical teachers). Focus groups about students and senior students' experiences of near peer feedback We conducted 2 focus groups (one with Y2 students and one with Y4-5 tutors) via the same videoconference platform with a convenient sample of students to deepen our understanding of the perception of online OSCEs and near-peer teaching feedback. Focus groups are a group discussion which is moderated by a researcher, such groups are used for generating information regarding the participant's experiences and beliefs about a particular topic [35]. The focus groups guide included several questions about participants' perceptions as near-peer feedback receivers and givers and their experience of the online formative station (see Appendix A). 
External moderators (JS and LM), who were not involved in the organization and implementation of the online OSCE and had no professional relationship with the participants, led the discussion in order to make participants feel free to express their views without any hierarchical pressure. The sessions were audiotaped and transcribed verbatim. We selected the answers to four questions for the purpose of the study.

Analysis
Perceived feedback quality data, as well as objective feedback process (items measured on Likert scales) and content (occurrence of comments) data, were summarized by means and standard deviations. We compared students' perception of the quality of feedback, as well as the objective analysis of feedback content and process, between 2020 (an online OSCE supervised by senior students) (Table 2) and 2013 (a face-to-face OSCE supervised by experienced tutors), as we had conducted a study assessing the quality of feedback in that year using the same questionnaires and feedback scales [16,30]. Both the 2013 and 2020 OSCEs focused on cardiac symptoms. However, the physical examination approach (performance in 2013 vs description in 2020) and the format (face to face vs online) were different. Potential content differences were investigated using Wilcoxon rank-sum tests. All analyses were run on R 3.5.2 (the R Foundation for Statistical Computing, Vienna, Austria). For the focus groups, a thematic analysis was conducted to explore the different themes which emerged from the data [36,37]. The transcripts were first read by all authors, who then met to discuss their observations and develop a list of codes. Codes were developed to reflect the discussion questions and focused on participants' perceptions and experiences of near-peer feedback during online OSCEs. Then, JS coded all transcripts using ATLAS.ti.

Results
One hundred and six 2nd year medical students filled in the questionnaire and had their feedback session videotaped (participation rate = 67%). Eleven senior medical students supervised the formative OSCE and gave feedback.

Quality of near peer feedback
Students felt that the formative OSCE helped them improve their clinical skills (mean score > 4), except for physical examination skills, which could only be assessed through description (Table 3). Students' perception of the feedback was very good, with all scores above 4 except for opportunities to practice parts of the encounter during feedback (Table 3). They especially appreciated the fact that senior students were aware of their learning needs, made them feel comfortable, gave balanced feedback, involved them actively in the self-evaluation and problem-solving phases, and involved the simulated patient in the feedback. Their evaluations of the quality of feedback were statistically significantly higher than the students' ratings documented in 2013, when feedback was given by experienced clinical teachers during a face-to-face OSCE. Regarding the feedback process, objective analysis showed that senior students actively involved students in feedback (exploration of learning needs, self-assessment, active participation in problem-solving, checking for understanding) and rarely provided opportunities to practice parts of the history or communication skills during the feedback session (Table 1). They performed statistically significantly better than experienced clinical teachers (in 2013) in all phases of the feedback process except feedback balance (Table 1). 
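The 2013 versus 2020 comparisons reported above rest on Wilcoxon rank-sum tests run in R. As a minimal sketch, with an invented data frame that is not the study data, such a comparison of perceived feedback quality between the two cohorts could be run as follows:

# Illustrative only: comparing perceived feedback quality ratings between two cohorts
set.seed(42)
feedback <- data.frame(
  cohort = rep(c("teachers_2013", "near_peers_2020"), times = c(60, 106)),
  rating = c(pmin(pmax(rnorm(60, mean = 4.1, sd = 0.6), 1), 5),
             pmin(pmax(rnorm(106, mean = 4.7, sd = 0.4), 1), 5))
)

# Wilcoxon rank-sum (Mann-Whitney) test between the cohorts
wilcox.test(rating ~ cohort, data = feedback)

# Group means and standard deviations, mirroring how results are reported in the text
aggregate(rating ~ cohort, data = feedback,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))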
Figure 1 showcases the overall global feedback scores of senior students (2020) and of experienced clinical teachers (2013) further divided into two sub groups: experienced clinical teachers with no prior training in teaching skills (group A) and with prior training in teaching skills (group B), The senior students' quality of their feedback was of the same level as and had more homogeneity than the one delivered by experienced clinical teachers trained in teaching skills (group A) in face-toface OSCE 7 years ago (Fig. 1). In terms of content, senior students addressed less elements in relation to history taking and physical examination and expressed less global comments about performance. Their teaching focused less on elaboration of communication/professionalism dimensions but addressed clinical reasoning in the same amount than experienced tutors/supervisors. The mean duration of direct observation-based feedback (isolated from peer and standardized patient feedback) was however longer for senior students (8.90 min (SD 4.6)) than for experienced clinical teachers (6.8 min (SD 3.4)). Students and senior students' experiences of near-peer feedback Out of 158 2 nd year students, 5 were included in the student focus groups. Out of the 12 senior students, 8 took part (5 from 4th and 3 from 5th year). Reasons for nonresponse were not recorded. Less stressful and more tailored to students' needs Students reported that learning from peers was experienced as less threating and more tailored to their needs Senior student tutors expressed similar thoughts and considered that they could guide more explicitly the student in the learning process and make the session less stressful and more interactive. "And I think it can be quite reassuring to be in front of students, for a first experience. "(Senior student 8) "It was more interactive. They weren't afraid to ask more questions. " (Senior student 5) Different focus Some senior students reported being less focused on specific elements of the history taking or physical examination parts, making their feedback therefore less clinic oriented. In the senior students' previous experience as juniors, they felt more emphasis was needed on the process of the consultation, communication issues and strategies to handle stress rather than the missing content elements' . "I remember that the doctors were more interested in the clinical examination to know exactly what you did. They were more fastidious, in the sense: "Yes, the reflux is not all regular, you did twenty seconds instead of twenty-five seconds. " I mean, they were more like that. Whereas in the end, when I was giving feedback, it was more about the content/finally more how the exam went. Rather than on a specific point of the clinical examination (Senior student 3) Challenges as near feedback givers The senior students also described some challenges, the main one was related on how to remain objective when giving feedback. The difficulty was to use objective criteria to assess the student's performance beyond using a checklist. "Perhaps a little difficult, was in the assessments I was doing, to keep a form of objectivity. "(Senior student 6) Some were often afraid of saying inadequate comments while others reported that it was easier to say that they did not know and if there were elements that they were unsure of it. "I must admit that sometimes it can be a little stressful "because we are afraid of saying stupid things. 
We're not doctors, so...(laughs)"(Senior student 2) They sometimes found difficult to give constructive feedback. "For the feedback, I sometimes have trouble finding points to improve. My feedback was too kind. "(Senior student 3) " Discussion The results from this study show that the quality of feedback given by near peers during online OSCEs was well perceived and objectively of high quality. These results are surprising given the sanitary context and stressful conditions in which these OSCEs were implemented with little time dedicated to senior students' training as tutors. The high scoring of the perceived quality of feedback may have been overemphasized in the pandemic context where most courses and training activities were canceled due to lack of hospital-based clinical teachers' availability [24,39]. Our findings are consistent with prior research which found that near peer feedback was judged to be of greater quality than input from clinically teachers, and was generally well received and accepted [40,41]. One major strength of our study is that the near peers' quality of feedback was assessed by analyzing the videotaped sessions and did not rely solely on perceptions. The content addressed during the feedback slightly differed between near peers and experienced clinical teachers with senior student putting less focus on history taking and physical exam skills and elaborating less on communication/professionalism issues. The fact that near peers put less focus on physical examination can be easily explained by the fact that during this online OSCE, students were only asked to describe step by step how they would examine the patient and such format did not allow an appropriate demonstration/evaluation of physical exam skills. The reasons why they also put less emphasis on the history taking is less obvious since the duration of the feedback session was longer for near peer than for experienced clinical teachers. In addition, we do not know whether the history taking skills which were not mentioned by senior students were crucial or not to address in line with aligning with the learning objectives of the OSCE. The results from the focus groups indicate that some senior students deliberately chose to focus on different issues because of past OSCE feedback memories where the listing of physical exam elements well/poorly done or missing was experienced as fastidious. Training more specifically senior students on identifying and addressing the key skills to practice during the OSCE might be necessary beyond giving a checklist form. Finally, near peers elaborated less on communication/professionalism issues than experienced clinical teachers. This is not surprising since it requires not only clinical experience but also a frame of references that even experienced clinical teachers ignore [42,43]. These differences in content, although statistically significant, may not be clinically relevant. It is commonly assumed that quality matters more than quantity-it may be more pedagogically relevant to address three important issues in an interactive way than five per skill domain in a directive way during a short feedback session. However, the design of our study did not allow to explore this issue. A systematic review and meta-analysis showed that students taught by peers do not have significantly different outcomes than those taught by clinical teachers when teaching relates to physical examination or communication skills [44]. 
Near peer feedback was experienced as less stressful and more tailored to students' needs. It also represented a learning opportunity for near peers. These results are in line with the literature, which shows that near peers create a less intimidating atmosphere and are more aware and realistic regarding expected knowledge and skills than clinical teachers [45]. Peer and near-peer teaching is also beneficial for senior students who, by teaching, consolidate their knowledge and skills and may even improve their academic performance [22]. It also helps develop teaching skills and enhances the identity formation of future clinical teachers. Not surprisingly, the challenges reported by near peers, such as rating objectively and providing negative feedback, are similar to those commonly described by more experienced clinical teachers [46].

Strengths and limitations
There are several limitations to our study. First, we compared students' perceptions and objective scores of feedback given by near peers and experienced clinical teachers in different formats, at different times, and of different durations. These elements, together with the Covid-19 pandemic context, may have positively biased our results. Second, it is possible that students' perceptions of the quality of feedback were influenced by the overall feedback, including input from the near peer, student observers, and the standardized patient, and not just the near-peer feedback. Third, the near peers represented a selection of senior students already involved in teaching activities. It is possible that we recruited only highly motivated and skilled senior students; voluntary participation of near peers is indeed commonly reported in studies assessing peer/near-peer assisted learning [20]. Involving randomly assigned senior students with no specific teaching experience may have led to lower quality feedback. Finally, the number of students as learners included in the focus groups was small and may have prevented us from capturing all the perceived advantages, disadvantages, and challenges of near-peer feedback.

Conclusion
A key element of feedback acceptability is that the source should be credible [7,11]. This study, together with other studies, suggests that near peers, with limited
Coupling between waveguides and microresonators: the local approach Coupling between optical microresonators and waveguides is a critical characteristic of resonant photonic devices with complex behavior that is not well understood. When the characteristic variation length of the microresonator modes is much larger than the waveguide width, local coupling parameters emerge that are independent of the resonator mode distributions and offer a simplified description of coupling behavior. We develop a robust numerical-fitting-based methodology for experimental determination of the local coupling parameters in all coupling regimes and demonstrate their characterization along a microfiber waveguide coupled to an elongated bottle microresonator. Introduction Photonic devices based on optical microresonators typically include waveguides, which are used to couple light in and out of microresonators.The performance of these devices is determined by the intrinsic optical characteristics of the microresonators and waveguides as well as by the coupling between them.The theoretical and experimental investigation of microresonators with different shapes (rings, spheres, toroids, bottles, etc.) is of great current interest and has been intensively developed for different applications [1][2][3].While recent studies have identified promising novel coupling designs [4][5][6], less attention has been given to investigating exactly how coupling performance depends upon the optical and geometric characteristics of waveguides and microresonators [7][8][9][10][11][12][13].These dependencies are quite complex [7,8] and in many cases it is easier to determine them experimentally [9][10][11][12][13][14].However, understanding the fundamental features of coupling between waveguides and microresonators, especially of those with three-dimensional geometry (e.g., microspheres and microbottles), is important for the future development of resonant microdevices for classical [7,8,[11][12][13][14] and quantum [9,10,[15][16][17] applications. 
Evanescent coupling between tapered fibers and whispering gallery modes (WGMs) is ultimately concerned with overlap integrals of the taper and resonator fields [7].Typical coupling characterization focuses on quantities such as the transmission, roundtrip loss, and coupling strength [18], or ideality [10].Determination of these parameters can indicate when parasitic losses are minimized, but does not provide details about the underlying loss processes.The local coupling approach, proposed in [19], applies in the regime where the characteristic length scale of the waveguide field is small compared to the transverse extent of the resonator fields with characteristic length .When ≪ , the waveguide-microresonator coupling is determined by the local value of the WGM microresonator field at the waveguide position, hence the name of the approach.This approximation simplifies the overlap integral, enabling separation of coupling parameters from the resonator field modes.Characterization of these parameters as the coupling configuration is varied enables insight into the underlying coupling and scattering processes.For example, resonant and non-resonant loss are described by separate parameters, thus yielding more insight into how to ameliorate loss than is afforded with a single loss parameter.The local coupling approach has been previously applied to the design of Surface Nanoscale Axial Photonics (SNAP) devices, e.g.[20][21][22], but potential applications of this technology in the single photon regime require optimization of loss and coupling, which makes it critical to determine the dependencies of the coupling parameters upon transverse positioning in taper-microresonator systems. In this article, we extend the local coupling approach with novel fitting capabilities that robustly determine the bare resonator modes and coupling parameters with quantified residual error and coupling parameter uncertainty estimates.We report the first characterization of the profile of these coupling parameters along the longitudinal axis of a tapered optical fiber.The procedure demonstrated herein maps the entire menu of coupling configurations available via transverse positioning of the taper along the longitudinal axis of the resonator with the two devices in contact, enabling subsequent selection of the desired coupling.Lastly, we report a novel quantification of the "criticality bound" that indicates how to determine the coupling regime (under-, critically, or over-coupled) from the coupling parameters.We investigate an elongated bottle microresonator coupled to a tapered optical fiber with micron-scale diameter (microfiber), as illustrated in Fig. 
1.The fundamental WGM in this resonator behaves as exp[] exp(− 2 / 2 ) where is the azimuthal quantum number and = (2 0 0 ) 1/4 res 1/2 (2 ) −1/2 with resonance wavelength res , effective refractive index , and axial and radial radii of curvature 0 and 0 , respectively [23].Using 0 = 30 m and 0 =19 m, we have = 75 m which is significantly greater than the diameter of microfiber ~1 µm used in our experiment, satisfying the local coupling condition.We change the coupling by moving one of the devices relative to the other to vary the position where the microfiber and bottle microresonator make contact.The coupling depends on the local diameter of the taper at the contact point along its longitudinal axis z, as well as the position of the contact point along the resonator's longitudinal axis x.We set = 0 when the resonator is aligned for contact in the center of the taper waist region, and = 0 when the taper is aligned for contact with the center of the resonator.The range of the effective radius variation eff () describing the bottle microresonator used in our experiment is very small (nanoscale); therefore, the resonant transmission power through the microfiber is described by [19] Theoretical background where is the vacuum wavelength.Here 0 (), |()| 2 and () are the local coupling parameters, which depend on neither nor the cavity mode (interpretations detailed below).(, , ) is the Green's function of the one-dimensional wave equation describing the propagation of WGMs along the bottle axis : Here (, ) = 2 1/2 0 [( is the WGM propagation constant, 0 = 2 res is the propagation constant in the bulk resonator material with refractive index , and Δ is the wavelength variation [23]. The interpretations of the local coupling parameters in Eq. ( 1) are as follows: || 2 is the coupling strength between resonator and taper modes. .Re() describes the shift of the resonance wavelength induced by the tapers presence (coupling to the taper changes the optical path length).Finally, Im(D) describes broadening of the resonances due to additional loss induced by the presence of the microfiber (e.g. via coupling to radiation modes).See [19] for additional background details.Our experimental system consists of an elongated SNAP bottle microresonator with ~400 m extent along , created on 38 m diameter fiber using a CO 2 laser [24], coupled to a microfiber pulled using a ceramic microheater [25].Coupling parameters are estimated through the measurement and analysis of 2D spectrograms, e.g.Fig. 2(a) [24].Spectrograms are made by combining the transmission spectrum through the microfiber at multiple contact positions along the resonator with fixed .The transmission spectrum is calculated from the Jones matrix spectrum of the system, measured with a Luna Technologies Optical Vector Analyzer (OVA).We isolate the Jones matrix describing transmission past the microresonator from those describing the taper segments and connecting fibers using the procedure described in [26]. Experimental characterization From this, we calculate our reported transmission values, which are for light with polarization matched to the resonator modes.The baseline taper loss (spectral average of 4.6 dB) is removed such that transmitted power fraction is 0 dB (no loss) in the absence of coupling. We then fit the measured spectrogram data to extract the best-fit coupling parameters.To accomplish this, we first find the Green's function solution [19] to the 1D wave equation of Eq. 
( 2).The effective radius variation serves as a potential of the assumed form We use a fitting procedure to find the values of , , and that produce a modal eigenwavelength spectrum that best matches the observed spectrum.Characterization of the local coupling parameters yields rich information about coupling variation as the resonator is moved to vary the contact point along the taper axis .The nonresonant power transmission | 0 | 2 [Fig.3(a)] has slope transitions at = ±3 mm and a mimimum value near the center of the taper waist region at = 0 mm.The phase arg( 0 ) is nearly flat across the entire measured range.Coupling strength || 2 peaks at = ±1.5 mm [Fig.3(b)].Re() has a roughly flat profile with random variation, indicating that the phase shift experienced by WGMs passing the microfiber is roughly independent of the microfiber diameter.The resonant loss Im(D) is smallest in the central taper waist region, but increases with the local taper radius away from this region (as we discuss further with ex below). In some cases, the coupling parameter fits can converge to local minima that don't represent the actual coupling parameters.We determine when this occurs by comparing the best-fit model .These two quantities are substantially different for local minima, and such a difference indicates that the fit must be run again with the local minima excluded, or with starting values closer to the true values. The excellent agreement of our best-fit model and measured spectrograms is apparent from the low normalized cost [Fig.3(c)] where meas and model are the measured and best-fit model transmission [Eq.( 1)], and index grid positions, the numerator is the cost value, and the denominator normalizes the cost by , the number of transmission values in the fit region [green box in Fig. 2(a); the model has the same number of transmission values as the measured spectrograms], and , the depth of the measured fundamental axial resonance along its central position (x=0).This quantifies the fractional variation per measured transmission value.The effectiveness of the local approach is validated by the small value of Δ ̅̅̅̅ (z) across the entire profile. Microresonator-taper coupling can be sorted into three coupling regimes, set by the ratio of the light loss rate from the microresonator and the coupling rate between the microresonator and taper.Starting from the Fano formulation of transmission {Eq.(13) of [19] . Where || > 3.0 mm in Fig. 3(b), the system is undercoupled and crit () ≈ Im[()].At = ±3.0mm, the power transmission for resonant light is very small (<2% for z=+3.0 mm; even smaller for z=-3.0 mm [Fig.2]).This and the nearby crossings of crit () and |()| 2 both indicate that coupling is close to critical at these positions.Between these critical coupling positions, the system is over-coupled and it's important to perform the check described above against local minima.The transmission is very sensitive to small changes in over-coupled and critically-coupled configurations, and since our system uses no feedback stabilization, we note a concomitant increase in the standard deviation of || 2 in those regimes.We observe that the dips near the edges of the resonances in spectrograms [Fig.2(c)] are indicative of overcoupling.The increased variation in these dips can confound the fit, which is why we select the fit-region indicated in Fig. 2(a). 
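As a rough schematic of the least-squares fitting idea described above (not the actual local-coupling transmission model of Eq. (1); the wavelength grid, parameter values, and noise level below are invented for illustration), one can fit a simple Lorentzian dip to a single resonance and report a cost per data point:

# Schematic only: least-squares fit of a single resonance dip, using a simple
# Lorentzian stand-in for the full transmission model; all values are illustrative
set.seed(7)
wl <- seq(1549.90, 1550.10, length.out = 201)    # wavelength grid (nm)
dip <- function(wl, T0, A, wl0, g) T0 - A * g^2 / ((wl - wl0)^2 + g^2)
meas <- dip(wl, T0 = 0.95, A = 0.80, wl0 = 1550.00, g = 0.005) +
        rnorm(length(wl), sd = 0.01)             # simulated "measured" transmission

# Cost: sum of squared residuals between model and measurement
cost <- function(p) sum((dip(wl, p[1], p[2], p[3], p[4]) - meas)^2)

fit <- optim(par = c(T0 = 1.0, A = 0.5, wl0 = 1549.99, g = 0.01), fn = cost)
fit$par                 # best-fit dip parameters
fit$value / length(wl)  # cost per transmission value, analogous to the normalized cost used here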
The coupling parameters indicate device loss performance.Energy conservation sets two constraints on the coupling parameters [19]: which set bounds on the nonresonant and resonant loss, respectively.Minimum loss occurs when each of these conditions approaches equality.We quantify how much Im[()] exceeds this minimum with the excess resonant loss Investigation of the suggested proportionality relationship between the excess loss and the local radius of the microfiber at the point of contact is an interesting avenue for future research that could potentially be used to determine the microfiber radius variation (see e.g.[28]).We find a strong anti-correlation relationship between || 2 and || (correlation coefficient = -0.96)[Fig.3(d)], which indicates that the taper's effect on the cavity field (through resonant frequency shifts and induced loss) is smallest where the coupling is largest. Conclusion We report experimental characterization of the local coupling parameters, which describe the interaction between an elongated bottle microresonator and an input-output microfiber.In contrast to parameters commonly used for the description of the microresonator-waveguide coupling, these parameters are independent of the mode distribution.Our fitting approach demonstrates excellent agreement between measured and best-fit theoretical models, in addition to good coupling parameter repeatability between consecutive spectrogram measurements, in all coupling regimes (undercoupled through over-coupled).This method of characterizing coupling and loss paves the way for design optimization towards classical and quantum resonant optical devices.The elongated shape of the modes is of special importance since it allows us to simplify positioning of quantum emitters [29].We suggest that, for this purpose, the microresonator profile can be optimized to arrive at enhanced regions with uniform WGM magnitude.Finally, we note that this approach can be generalized to find local coupling parameters with any microresonator system where ≪ , through substitution of modesolving methods appropriate to the resonator in use.Such generalization would enable investigation across multiple WGM resonator platforms to generate insight into commonalities and differences in their coupling behavior. Fig. 1 . Fig. 1.WGM in a bottle microresonator. 0 and 0 are the axial and radial radii of curvature, respectively. | 0 | describes the field transmission through the taper in the absence of coupling to resonator modes (|| 2 → 0).| 0 | 2 is the power transmission for light with nonresonant wavelength [where (, , ) ≈ 0].Transmission of resonant light depends on a coherent combination of the terms and exhibits Fano line-shapes, and the phase arg( 0 ) controls the spectral shape of the resonances.The presence of the dielectric tapered fiber in the evanescent resonator field changes the field distribution relative to the condition where it is absent. describes these effects and relates the bare Green's function describing the resonator mode field in the absence of the taper (, , ) to the renormalized (dressed) Green's function with the taper present ̅( , , ) = (,,) 1+()(,,) Fig. 2 . Fig. 2. 
(a) Comparison of measured and best-fit model spectrograms, near critical coupling, showing multiple axial modes.The green dot-dashed box over the measured data indicates the region used in coupling parameter fitting (see text).The blue dashed box over the model indicates the magnified region shown in (b) which compares the measured and best-fit model fundamental axial mode.(c) Comparison of measured and best-fit model fundamental resonances at = −2.5 mm in the over-coupled regime.The characteristic edge dips seen in this regime are indicated with arrows on the measured data. The best-fit values of = 3.2744 nm, =123.5934m, and =1.1406 are used for all resonators, while 0 and are set for each spectrogram to account for the angle between the and axes being slightly different from 90°, and for random spectral shifts arising from thermal drift, respectively.Once the bare Green's functions (, , ) , for each spectrogram are found, the measured spectrograms are fit to Eq. (1) in the region indicated in the green box in Fig.2(a)[see discussion below Eq. (5)] with fixed (, , ) to find the 5 best-fit real-valued local coupling parameters: | 0 ()| 2 , arg[ 0 (z)], |()| 2 , Re[()], and Im[()], in addition to the final minimized "cost" value (described below).Each spectrogram measurement is repeated 4 times to assess repeatability, and the profile of the mean average values for these parameters and the associated cost values (detailed below) are plotted in Fig. 3 with the error bars showing the standard deviation of each quantity. Fig. 3 . Fig. 3. Average coupling parameters with error bars showing standard deviation. indicates the position along the taper axis of resonator-taper contact, with = 0 corresponding to resonator contact at the center of the taper waist region [25].(a) Nonresonant transmission power amplitude | 0 | 2 with phase profile arg( 0 ).(b) || 2 and parameters, with the critical coupling bound crit [Eq.(5)] and the excess resonant loss ex [Eq.(6)].(c) The average cost value normalized as described near Eq.(4), indicates excellent agreement between model and theory.(d) The average values of || 2 and || display anti-correlation.The best-fit line approximately describes the interesting relationship where stronger coupling is associated with smaller effect on the cavity ||.
2020-04-10T01:00:55.914Z
2020-04-09T00:00:00.000
{ "year": 2020, "sha1": "6362d28a7167205b87296e76c66e492b6bd2674a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.399978", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "6362d28a7167205b87296e76c66e492b6bd2674a", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
29173596
pes2o/s2orc
v3-fos-license
Efficacy of Para-Aortic Lymphadenectomy in Ovarian Cancer : A Retrospective Study Objective: The prognostic impact for ovarian cancer treatment of employing a systematic para-aortic and pelvic lymphadenectomy is still poorly defined. The purpose of this study was to evaluate the therapeutic efficacy of adding a para-aortic lymphadenectomy (PA) to the pelvic lymphadenectomy (PL), as compared with solely the pelvic lymphadenectomy. Materials and Methods: A retrospective study of patient outcomes was conducted of ovarian cancer patients who underwent optimal debulking surgery, concurrent with either PA + PL or PL alone, between 2000 and 2009 at our Osaka General Medical Center. Results: One hundred twenty-one patients with ovarian cancer underwent surgery. Forty-four patients (36%) underwent optimal debulking surgery (all residual disease was <1 cm) concurrent with lymphadenectomy. Seventeen patients underwent PA + PL (PA group), and 27 patients underwent PL alone (PL group). There were no significant differences in terms of overall survival (OS; hazard ratio [HR] = 0.49; 95% CI, 0.13 to 1.82; p = 0.29) and progression-free survival (PFS; HR = 0.62; 95% CI, 0.19 to 2.00; p = 0.40) between the PA group and the PL group. Both OS and PFS also failed to show significant differences, even when comparing them among 26 cases of FIGO stage I cases. Conclusions: Our data failed to show any prognostic improvement for ovarian cancer by adding para-aortic lymphadenectomy to the standard pelvic lymphadenectomy regimen. Introduction Ovarian cancer has been increasing in Japan.Approximately 8000 new ovarian cancer cases, and 4500 ovarian cancer deaths, were recorded in 2006 [1].Retroperitoneal lymph nodes involvement occurs in approximately 50% to 80% of women with advanced ovarian cancer [2].Cass et al. found that 15% of patients with clinical stage I disease have microscopic lymph node metastases [3].In recognition of the prognostic importance of the retroperitoneal spread of ovarian cancer, the FIGO staging classification was amended in 1988 to include a substage for nodal involvement [4].Subsequent work has illuminated the relevant surgical anatomy, which has allowed for identification of the role and technical aspects of lymph node dissection, and for a clarification of the nomenclature [5][6][7]. Primary cytoreductive surgery (i.e., the removal of as much of the tumor as possible at the time of initial surgery, with resection of only the bulky nodes) has been an integral part of the treatment of advanced ovarian cancer. In addition, the amount of postoperative residual tumor is a clinically significant prognostic factor [8,9].It is still unclear as to whether the systematic removal of the retroperitoneal para-aortic lymph nodes should be a standard part of a maximal cytoreductive surgery.The core issue of the controversy is whether or not the removal of these lymph nodes improves patient survival. The purpose of this study was to evaluate the therapeutic efficacy of para-aortic lymphadenectomy (PA) being added to the pelvic lymphadenectomy (PL), as compared with a pelvic lymphadenectomy alone. Materials and Methods A retrospective review of the medical records of the Osaka General Medical Center (Osaka, Japan) was con-ducted for the period between January 1, 2000 and December 31, 2009 to identify all patients diagnosed as primary ovarian cancer who were treated with primary surgery.Patients with borderline tumors were excluded from this study. 
The primary surgery was aimed at removing the primary tumor and visible pelvic implants.This included treatment with total abdominal hysterectomy and bilateral salpingo-oophorectomy and/or omentectomy and/or appendectomy, before adding the lymphadenectomy.As a consequence to this surgery, participants had no visible residual tumors greater than 1 cm in diameter.The extent of the lymph node dissection included a pelvic lymphadenectomy without (PL) or with (PA) a para-aortic lymphadenectomy. The indication for preoperatively being put in the PL group was that para-aortic lymph node metastases which lymph node size is less than 1 cm in diameter were not found by Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Pelvic lymphadenectomy: The dissection was begun at the origin of the external iliac vessels and continued caudally around them along the medial border of the psoas muscle; the lower limit of the external iliac lymphadenectomy was represented by the deep inferior epigastric vessels.The lateral boundaries of dissection were superficially delineated by the fascia covering the psoas muscle and deeply by the fascia covering the internal obturator and levator ani muscles; the median margin of the lymphadenectomy was represented by an imaginary plane which was parallel to the umbilical artery and was delineated by the umbelico-pubic fascia, the bladder and the rectum.In addition, lymphatic tissue was cleared from the obturator fossa, which was begun with the mobilizations of the superficial obturator nodes, which were removed en bloc with the lymphatic fatty tissue, which has been previously separated from the internal iliac vessels to the origin of the internal pudendal vessels. Aortic lymphadenectomy: The nodal dissection started at the aortic bifurcation by removing the superficial intercavoaortic, precaval and preaortic nodal groups.Then lymph nodes located lateral to the cava (paracaval) were separated from the vena cava, the renal capsule and the psoas muscle, and were then removed en bloc.Afterward, displacing the vena cava and the aorta laterally and medially, the lymph nodes behind the cava (retrocaval nodal group) and the lumbar vessels (deep intercavoaortic nodes) were separated from the prevertebral fascia and then removed.Removal of the most cranial nodes, both behind and under the left renal vein, was performed after entering the right plane of dissection between the Toldt's and Gerota's fasciae, mobilizing the descending colon from the renal capsule, the psoas, the ovarian pedicle and the ureter, and displacing it medially. The patients received from 3 to 6 cycles of paclitaxel at a dose of 175 mg per square meter of body-surface area, and carboplatin at a dose of area under the curve (AUC) 5 mg/mL every three weeks for adjuvant chemotherapy for FIGO stages Ic, II, III and IV. A statistical analysis for demographic characteristics of the groups was performed using the Mann-Whitney U test.Overall survival was calculated from the day of first surgical treatment until death, regardless of the cause of death.Progression-free survival was calculated from the day of surgical treatment to the time of either detected progression or death.Overall-survival curves and progression-free survival curves were calculated for each treatment group using Kaplan-Meier estimates and were compared with the log-rank test [10,11].p values of less than 0.05 were considered to be statistically significant. 
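The survival comparison described above (Kaplan-Meier estimates compared between groups with the log-rank test, p < 0.05 considered significant) can be sketched with the open-source lifelines package. This is not the authors' analysis code; the toy table below uses invented follow-up times purely to show the shape of the calculation, and the same pattern applies to either the overall-survival or the progression-free-survival definition of the event.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented example data: follow-up in months from first surgery, event flag
# (1 = event observed, 0 = censored at last follow-up), and treatment group.
df = pd.DataFrame({
    "months": [12, 34, 60, 18, 55, 60, 48, 60, 22, 60],
    "event":  [1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
    "group":  ["PA", "PA", "PA", "PA", "PA", "PL", "PL", "PL", "PL", "PL"],
})
pa, pl = df[df["group"] == "PA"], df[df["group"] == "PL"]

# Kaplan-Meier survival curves for each treatment group.
km_pa = KaplanMeierFitter().fit(pa["months"], event_observed=pa["event"], label="PA + PL")
km_pl = KaplanMeierFitter().fit(pl["months"], event_observed=pl["event"], label="PL alone")

# Log-rank comparison of the two curves; p < 0.05 is treated as significant.
result = logrank_test(pa["months"], pl["months"],
                      event_observed_A=pa["event"], event_observed_B=pl["event"])
print(f"log-rank p-value: {result.p_value:.3f}")
```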
Results Between 2000 and 2009, 121 women with ovarian cancer had undergone surgical treatment at our facility. Forty-four patients (36%) underwent optimal debulking surgery (maximum size of the residual disease <1 cm) concurrent with some level of lymphadenectomy. Seventeen patients underwent PA + PL (PA group) and 27 patients underwent PL alone (PL group). The details of the patients' characteristics are shown in Table 1. The mean ages were 60 and 59 years for the PA and PL groups, respectively (p = 0.5225). BMI (body mass index) was 23 for the PA group and 21 for the PL group (p = 0.779). There were no significant differences in the other patient characteristics between the PA group and the PL group. Blood loss volume was significantly higher in the PA group than in the PL group (p = 0.0266), whereas the surgical time did not differ between the two groups. Progression-free 5-year survival was 69% for the PA group and 75% for the PL group (Figure 1). Overall 5-year survival was 71% for the PA group and 90% for the PL group (Figure 2). Comparing the patients in the PA group with the PL group, there was no significant difference in overall survival (OS; hazard ratio [HR] = 0.49; 95% CI, 0.13 to 1.82; p = 0.29) or in progression-free survival (PFS; HR = 0.62; 95% CI, 0.19 to 2.00; p = 0.40). Additionally, we analyzed the 26 patients with FIGO stage I disease: eight women from the PA group and 17 women from the PL group. Progression-free 5-year survival was 75% for the PA group and 79% for the PL group (Figure 3). Overall 5-year survival was 75% for the PA group and 93% for the PL group (Figure 4). There were no significant differences between the PA group and the PL group in OS (HR = 0.62; 95% CI, 0.08 to 4.68; p = 0.63) or PFS (HR = 0.66; 95% CI, 0.10 to 4.45; p = 0.65). Overall, the PA group tended to have a lower survival rate than the PL group. Discussion Of the cases with recurrence, which we show in Table 2, only one out of the six was found to have para-aortic lymph node recurrence, and this was in the PL group. As patients with metastases to the lymph nodes have poorer outcomes, lymphadenectomy plays a significant diagnostic role in assessing prognosis and determining the need for adjuvant treatment. However, there are only a very limited number of studies which have investigated the therapeutic efficacy of adding a dissection of the para-aortic lymph nodes to the traditional pelvic lymphadenectomy. There are even fewer studies which examined the benefits of lymphadenectomy in patients in the earliest stages of ovarian cancer. Furthermore, there are risks of developing post-lymphadenectomy complications, such as ileus, deep-vein thrombosis, lymphocyst, lymphedema and major wound dehiscence. Additional complications potentially arising from systematic lymphadenectomy can be longer operating times and higher blood loss volumes than for a non-systematic lymphadenectomy [13,14]. Bristow et al. confirmed that, in FIGO stage III-IV ovarian cancer, a maximal surgical cytoreduction which included a systematic lymphadenectomy was one of the most powerful determinants of cohort survival [15]. Likewise, Onda et al.
found that the overall metastasis-positive rates of the aortic and pelvic lymph nodes in all clinical stages were 38% and 37%, respectively.They also found that 15% of lymph-node-positive patients with stage I/II tumors had only isolated aortic lymph node involvement, and another 23% had isolated pelvic lymph node involvement, clearly showing that direct spreading solely to the aortic or pelvic lymph nodes is possible in ovarian cancer.Moreover, they found that the incidences of positive lymph nodes in stages I and II ovarian carcinoma were approximately 20% in each.Stage I and stage II serous and clear cell types had significantly higher rates than endometrioid and mucinous types of the same stage [16]. A randomized study [14] showed that: 1) the addition of systematic aortic and pelvic lymphadenectomy to cytoreductive surgery prolonged progression-free survival, which, in turn, may have an important impact on the quality of life of patients with advanced ovarian cancer; 2) systematic lymphadenectomy did not prolong overall survival, probably because effective platinum based firstand second-line (with or without salvage surgery) chemotherapies might have diluted the impact of systematic lymphadenectomy on the risk of death; 3) patients in the systematic lymphadenectomy arm had a higher number of nodal metastases than patients in the no-lymphadenectomy arm.This study found that, although systematic lymphadenectomy significantly improved progression-free survival, overall survival was similar in both the systematic lymphadenectomy and the "resection of bulky nodes only" groups. It is unknown if patients who underwent an aortic lym-phadenectomy may have received more chemotherapy, resulting in a better survival.Our data could not show any improvement of either PFS or OS by adding aortic lymphadenectomy to a pelvic lymphadenectomy.Furthermore, considering the chances of para-aortic lymph node metastases, cases diagnosed as FIGO stage I without para-aortic lymphadenectomy may have actually contain undiagnosed FIGO stage III cases.We could not show any therapeutic effect of performing a para-aortic lymphadenectomy, even in the FIGO stage I case containing potential FIGO stage III case.The number of participants in this study is acknowledged to be able to provide only a limited reliability for the results.However, we believe our data do give adequate validity for considering doing a prospective study to evaluate the efficacy of aortic lymphadenectomy. Figure 2 .Figure 1 . Figure 2. Progression-free survival curves for the PA group versus the PL group in ovarian cancer.Figure 1.Overall survival curves for the PA group versus the PL group in ovarian cancer. Figure 3 . Figure 3. Overall survival curves for the PA group versus the PL group in stage I ovarian cancer. Figure 4 . Figure 4. Progression-free survival curves for the PA group versus the PL group in stage I ovarian cancer.
2019-03-10T13:14:31.448Z
2013-05-20T00:00:00.000
{ "year": 2013, "sha1": "b1872101495cdf534ff7403181d3fbe47ce02f41", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=31824", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "b1872101495cdf534ff7403181d3fbe47ce02f41", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249090142
pes2o/s2orc
v3-fos-license
A snow and glacier hydrological model for large catchments – case study for the Naryn River, central Asia
Abstract. In this paper we implement a degree day snowmelt and glacier melt model in the Dynamic fluxEs and ConnectIvity for Predictions of HydRology (DECIPHeR) model. The purpose is to develop a hydrological model that can be applied to large glaciated and snow-fed catchments yet is computationally efficient enough to include model uncertainty in streamflow predictions. The model is evaluated by simulating monthly discharge at six gauging stations in the Naryn River catchment (57 833 km²) in central Asia over the period 1951 to a variable end date between 1980 and 1995, depending on the availability of discharge observations. The spatial distribution of simulated snow cover is validated against MODIS weekly snow extent for the years 2001-2007. Discharge is calibrated by selecting parameter sets using Latin hypercube sampling and assessing the model performance using six evaluation metrics. The model shows good performance in simulating monthly discharge for the calibration period (0.74 < NSE < 0.87) and the validation period (0.7 < NSE < 0.9), where the range of NSE values represents the 5th-95th percentile prediction limits across the gauging stations. The exception is the Uch-Kurgan station, which exhibits a reduction in model performance during the validation period, attributed to the commissioning of the Toktogul reservoir in 1975, which impacted the observations. The model reproduces the spatial extent of seasonal snow cover well when evaluated against MODIS snow extent; 86 % of the snow extent is captured (mean 2001-2007) for the median ensemble member of the best 0.5 % calibration simulations.
We establish the present-day contributions of glacier melt, snowmelt and rainfall to the total annual runoff and the timing of when these components dominate river flow.The model predicts well the observed increase in discharge during the spring (April-May) associated with the onset of snow melting and peak discharge during the summer (June, July and August) associated with glacier melting.Snow melting is the largest component of the annual runoff (89 %), followed by the rainfall (9 %) and the glacier melt component (2 %), where the values refer to the 50th percentile estimates at the catchment outlet gauging station Uch-Kurgan.In August, glacier melting can contribute up to 66 % of the total runoff at the highly glacierized Naryn headwater sub-catchment.The glaciated area predicted by the best 0.5 % calibration simulations overlaps the Landsat observations for the late 1990s and mid-2000s.Despite good predictions for discharge, the model produces a large range of estimates for the glaciated area (680-1196 km 2 ) (5th-95th percentile limits) at the end of the simulation period.To constrain these estimates further, additional observations such as glacier mass balance, snow depth or snow extent should be used directly to constrain model simulations. Introduction In High Mountain Asia, large populations rely on glacier and snow-fed river systems for their freshwater supply (Lutz et al., 2014;Armstrong et al., 2019).These "water towers", which provide essential streamflow during the summer and a buffer against drought, are under threat as glaciers melt in response to warming temperatures (Kraaijenbrink et al., 2017;Immerzeel et al., 2013Immerzeel et al., , 2010Immerzeel et al., , 2020)).The Aral Sea basin in central Asia has been identified as the region with the greatest human dependence on glacier meltwater (Kaser et al., 2010).Streamflow in the Syr Darya River, the second-largest river in central Asia, is supplied by snowmelt and glacier melt from the Tien Shan mountains during the spring and summer (Sorg et al., 2012).This water is crucial for hydro-production in upstream Kyrgyzstan and for irrigation downstream in the semi-arid lowlands of Kazakhstan and Uzbekistan. Glaciers in the Tien Shan mountains have reduced in mass by approximately 27 % during the past 50 years (Pieczonka and Bolch, 2015).In the Naryn basin, the upstream tributary of the Syr Darya, satellite observations show a 23 % reduction in glacier area since the mid-1970s (Kriegel et al., 2013) and a similar reduction in the upper Naryn basin since the 1940s (Hagg et al., 2013).The shrinkage in glacier area has coincided with an increase in discharge in the upper reaches of the Syr Darya River caused by accelerated glacier melting (Zou et al., 2019).There has also been a shift in the precipitation regime, with more precipitation falling as rain and less falling as snow, leading to enhanced melting and less snow accumulation.Chen et al. 
(2016) showed that in the Tien Shan mountains the snowfall fraction decreased every decade, from 27 % in 1960-1969 to 25 % in 2005-2014.As glaciers retreat, river runoff is expected to temporally increase, reaching a maximum known as "peak water", after which flow is reduced as glaciers recede and disappear completely.There is a compelling need to predict the timing of "peak water" in order to understand when to implement adaption strategies in reduced river flow.Projections of future streamflow in the upper reaches of the Syr Darya River show a decrease during the summer and an increase in the spring as the hydrological regime shifts from one of glacier melting to seasonal snow melting (Radchenko et al., 2017).The reduction in summer streamflow will have direct impacts on water availability in the Ferghana Valley (Kyrgyzstan, Tajikistan) and further downstream in the Syr Darya River in Uzbekistan. To estimate the timing of "peak water", models which couple a representation of glacier processes to catchment hydrology are required.van Tiel et al. (2020) reviewed the many glacio-hydrological models in the literature and highlighted that one of the major challenges is uncertainty in the input data.Observations are generally sparse in mountainous regions.For example, meteorological stations are generally clustered at low altitudes, meaning that the derivation of the precipitation and temperature lapse rates can be very uncer-tain.Furthermore, observations of solid precipitation can be underestimated by 20 %-50 % due to windiness at high elevations (Rasmussen et al., 2012) which redistributes snow.The accuracy of streamflow predictions will also be affected by model structural uncertainty and uncertainty in model parameters that cannot be directly observed.Furthermore, the quality of discharge observations used to evaluate models is often difficult to determine.Incorporating uncertainty analysis into streamflow predictions means that the models we use need to be computationally efficient. The treatment of snow and glacier melting in glaciohydrological models can vary in complexity, from simple temperature index models (Neitsch et al., 2005;Zhang et al., 2013;Lindstrom et al., 1997), enhanced temperature index models (Ragettli and Pellicciotti, 2012;Mayr et al., 2013), to full energy balance models (Ren et al., 2018).A benefit of using a simple temperature index model is that only temperature is used to calculate melting, whilst energy balance models require observations of the radiation components, temperature, wind speed and humidity.This makes temperature index models a pragmatic choice for data-sparse regions, although it does mean that processes such as sublimation (important on glacier surfaces in areas of low humidity) might be overlooked.Furthermore, Magnusson et al. (2015) showed that for hydrological applications a temperature index model can predict daily snowpack mass and runoff as well as a more complex energy balance model. 
In this study, a degree day snowmelt and glacier melt scheme is incorporated into the Dynamic fluxEs and Connec-tIvity for Predictions of HydRology (DECIPHeR) (Coxon et al., 2019) model.The aim is to develop a glaciohydrological model that can be applied to very large glaciated catchments yet that still retains computational efficiency.This means that parametric uncertainty in streamflow predictions can be explored whilst retaining a high spatial resolution to allow orographic variability in the climate to be represented.Many glacio-hydrological models already exist in the literature (van Tiel et al., 2020;Horton et al., 2022); however, we integrate a snowmelt and glacier melt model into DE-CIPHeR for the following three reasons.Firstly, DECIPHeR uses hydrological response units (HRUs) to model water flow in hydrologically similar parts of the catchment and has a flexible model structure which allows the model to be run as a fully distributed (HRU for every single grid point), semidistributed (multiple HRUs) or lumped model (1 HRU).Depending on user requirements and the corresponding degree of complexity, topographic, land use, geology, soil, anthropogenic and climate attributes as well as points of interest (any gauged or ungauged point on the river network) can be supplied to define the spatially connected topology and thus differences in model inputs, structure and parameterization (Coxon et al., 2019).Other HRU-based glacio-hydrological models exist, for instance, SWAT (Omani et al., 2017), PRE-VAH (Koboltschnig et al., 2008) and HBV (Finger et al., 2015), but they do not offer this level of flexibility within a single modelling framework.Secondly, DECIPHeR is computationally efficient, which makes it suitable for modelling very large catchments.Many of the glacio-hydrological models in the literature are distributed (grid point based), for example, TOPKAPI (Pellicciotti et al., 2012), DHSVM (Frans et al., 2018), VIC (Schaner et al., 2012) and GERM (Farinotti et al., 2012).The computational expense of modelling processes with adjacent grid points makes distributed models more suited to studying small catchments.Furthermore, computational efficiency makes it possible to quantify uncertainties and run large ensembles, which is important for understanding the uncertainties in future predictions.Thirdly, the DECIPHeR code is open source, which allows opportunities for further community development.In contrast, the glacier-enhanced version of SWAT (Omani et al., 2017;Luo et al., 2013b) is not open source. 
The model performance is assessed by simulating discharge in the Naryn River catchment in central Asia for the period 1951-2007. Simulated snow cover is evaluated against MODIS remote sensing snow extent for the period 2001-2007. Simulated catchment-wide glaciated area is compared to glaciated areas derived from Landsat observations for time periods during the 1970s, 1990s and mid-2000s. This evaluation is used to establish the extent to which the model can reproduce past changes in river discharge, snow extent and catchment-wide glaciated area in order to have confidence in the model's predictive ability when applied to future scenarios. The model is used to quantify the present-day relative contributions of rain, snow and glacier melting to the total discharge and to determine the timing of when these components dominate river flow. Determining these baseline conditions for the present day is important because they will change in the future as the seasonal snowpack reduces and glaciers retreat. The paper is structured as follows. Sect. 2 gives an overview of the DECIPHeR model and describes the changes made to include the snow and glacier model. Section 3 describes the calibration and validation of discharge. Section 3.9 describes the validation of snow extent against MODIS observations. Section 4 describes the model limitations and proposes several avenues for further development. Study region The Naryn River (Fig. 1) originates in the Tien Shan mountains in Kyrgyzstan and flows west through the Ferghana Valley into Uzbekistan, where it merges with the Kara Darya River to form the Syr Darya. The river is an important source of freshwater for agriculture downstream in the heavily irrigated Ferghana Valley (Radchenko et al., 2017). According to the Randolph Glacier Inventory Version 6 (RGI, 2017), there are currently 1784 glaciers in the catchment. Glaciers are found at elevations ranging between 2815 and 5125 m and are predominantly located in the east of the catchment and to a lesser extent in the north-west. Rock glaciers are also common above around 3000 m, especially downslope of contemporary glacier termini, and these represent considerable (if unquantified) ice and future water resources. The catchment has an area of 57 833 km², of which 1060 km², or 1.8 %, is glaciated. The monthly temperature climatology (1951-1970) averaged over the catchment varies between −13 °C in January and +11 °C in August, and the majority of the annual precipitation falls in spring and early summer (April-July) (see Fig. 2d). There are six gauging stations with long-term monthly observations commencing in 1951 and terminating between 1980 and 1995 (see Sect. 3.1). Discharge at the six stations peaks in summer (June-August) due to glacier melting, with the exception of Aflatun, which is unglaciated and peaks sooner, in spring (May), due to snow melting (Fig. 2).
For the purposes of this study, we assume that streamflow at the gauging stations has a natural signal.The excep-tion is the Uch-Kurgan station, where streamflow after 1975 is impacted by the management of the Toktogul reservoir (Bernauer and Siegfried, 2012).A high-resolution irrigation map of the catchment derived from the normalized difference vegetation index (NDVI) (Meier et al., 2018) shows that the irrigated area is low (3 % area is irrigated), in contrast to the Ferghana Valley downstream (Fig. S1 in the Supplement).Irrigation is predominately clustered in the south-eastern part of the Naryn catchment. DECIPHeR model DECIPHeR is a flexible hydrological modelling framework (Coxon et al., 2019) which is based on the dynamic TOP-MODEL (Beven and Freer, 2001;Metcalfe et al., 2015;Freer et al., 2004;Peters et al., 2003).The model can be spatially configured in any form, providing a distributed mosaic of interacting and spatially connected HRUs that allow different representations of water fluxes due to local conditions (i.e.geology, soils, slopes, vegetation) via different inputs (i.e.precipitation/evaporation), model structures and parameterizations.HRUs group together similar parts of the landscape to minimize run times of the model.This enables the user to run large ensembles of event data and climate simulations and provide probabilistic flow simulations essential for risk analysis.The user can use DECIPHeR to test different spatial configurations, from a fully gridded model to a lumped model set-up.Each HRU can be assigned its own processes and parameters, which allows for a more complex representation of spatially variable processes across the catchment.The capability to include spatially varying processes is particularly useful for modelling glaciers because the equations used to describe water storage and release from glaciated locations in the catchment will be different to unglaciated regions. DECIPHeR simulates water storage, hydrologic partitioning and surface/subsurface flow for steeper shallow soils and/or groundwater-dominated watersheds.The model structure (as implemented in Coxon et al., 2019) consists of three stores defining the soil profile (root zone, unsaturated and saturated storage), which are implemented as lumped stores for each HRU.Moisture is added to the soil root zone by rainfall input and removed only by evapotranspiration.Any excess precipitation is added to the unsaturated zone, where it is either routed directly as overland flow or added to the saturated zone.Changes to storage deficits in the saturated zone are dependent on this recharge from the unsaturated zone, fluxes from upslope HRUs and downslope flow out of each HRU.Subsurface flows for each HRU are distributed according to a flux distribution matrix based on accumulated area and slope.Channel flow routing is modelled using a set of time delay histograms.For a more detailed discussion of the original DECIPHeR model structure, please see Coxon et al. (2019). 
While DECIPHeR has been applied to catchments in the UK (Coxon et al., 2019;Lane et al., 2021), it has not been used for glacier or snow-fed rivers because cryospheric processes have not yet been included in the model.TOP-MODEL, the forerunner of the dynamic TOPMODEL, did have a snowmelt scheme; however, seasonal variations in the degree day melt factor and snow sublimation were not included (Ambroise et al., 1996).DECIPHeR is implemented in two steps: (1) a digital terrain analysis (DTA) to set up and define the spatial complexity of the model domain and (2) ensemble simulation of that domain.The DTA is critical for defining the model complexity and landscape features that will separate HRUs, their equations, function types and parameterization.The DTA procedures also calculate the river network, routing path, catchment areas for each gauge and connectivity between the HRUs and HRUs to the stream network.More details on the DTA can be found in Coxon et al. (2019). Modifications to the DECIPHeR model The following sections describe the modifications made to the DECIPHeR model to include snow and glacier processes (for a full description of the hydrology included in DECIPHeR, see Coxon et al., 2019).We have altered the model to include a simple degree day snowmelt and glacier melt scheme.The degree day approach is a well-established method to calculate glacier and snow melting (Marzeion et al., 2020) and only requires air temperature as an input.The equations implemented in DECIPHeR for snowmelt and ice melt are similar to those in the SWAT model (Luo et al., 2013a).SWAT is one of the most widely used community hydrological models which have been applied to many snowfed catchments (van Tiel et al., 2020).The evolutions of the snow and glacier depths are calculated every time step using the snow accumulation, melt and sublimation components.Temperature and precipitation are calculated at the HRU level by adjusting the gridded surface climate for elevation using a temperature lapse rate and a precipitation gradient.Glacier melt and snowmelt are added to the precipitation fields every time step and are then routed through the catchment.Figure 3 shows a conceptual diagram of the snow and glacier scheme added to the DECIPHeR model.All model parameters are sampled using Latin hypercube sampling. Modifications to the digital terrain analysis The code is modified to read in two additional inputs: (1) air temperature, which is used for the degree day melting of ice and snow and to estimate the fraction of precipitation falling as rain or snow, and (2) the elevation of the forcing data which is used to apply a lapse rate correction to the surface temperature and precipitation fields.To reduce the number of HRUs, we do not classify glaciated regions as a function of accumulated area or slope, unlike in other parts of the catchment.HRUs located inside glaciers are only classified as a function of elevation, spatially varying climate and a unique ID that identifies the glacier.The spatial distribution of HRUs in the upper part of the catchment is shown in Fig. S2. Snowpack model The daily snowpack depth is calculated as where S accum , S melt and S sublim are the snow accumulation, melt and sublimation components at time step t and S 0 depth is the snow depth at the previous time step.The snowpack components are described below. 
Snowpack melting Melting is related to the snowpack temperature using a degree day factor for snow.The degree day factor accounts for a variety of different processes that control melting, such as the presence of debris cover or the darkening of snow through the snow aging process.Glacier ice is situated beneath the snowpack or can be exposed if no snowpack exits (right).The snowpack accumulates when precipitation falls in solid form.Liquid precipitation, snowmelt and glacier melt are transported to the root zone. where T snow is the snowpack temperature ( • C), ddf snow is the degree day factor for snow melting (m w.e.• C −1 d −1 ) and T melt is the melt temperature.To reduce the number of parameters required to calibrate the model, we assume that T melt = 0 • C. A criterion is enforced such that the melting depth cannot exceed the depth of snow that exists. A seasonally varying degree day factor is calculated which has a maximum snowmelt on 21 June (ddf max ) and a minimum snowmelt on 21 December (ddf min ).The reason to use a seasonally varying degree day factor rather than a constant one is because the degree day factor is a simplification of processes that can be more correctly described by the energy balance, i.e. inward and outward longwave and shortwave radiation, albedo, and latent and sensible heat, which vary throughout the year and are not solely a function of temperature.The degree day factor is represented as a sinusoidal curve, where j is the day of the year.The ddf min is calculated by multiplying the maximum value by a scale factor ddf mult .ddf min = ddf max ddf mult (4) The scale factor can vary between 0 and 1 and ensures melt rates are lower in the winter than in the summer. The temperature of the snowpack T snow is calculated from the air temperature using a lag factor l snow .The lag factor can vary from 0 to 1, where a value of 1 sets the snowpack temperature equal to the air temperature.A value less than 1 sets the snowpack temperature lower than the air temperature.Using a temperature lag factor is a simple way of approximating the snowpack temperature without doing more complex heat transfer modelling of the temperature flux from the air into the snowpack.The lag factor accounts for affects of snow depth and density on the snowpack temperature. where T 0 snow is the snowpack temperature at the previous time step. The daily HRU temperature T hru ( • C) is calculated by adjusting the forcing temperature T 0 ( • C) using a lapse rate λ temp ( • C m −1 ). where E hru is the elevation of the HRU (m) derived from a digital elevation model (DEM) and E climate is the elevation of the forcing data (m).Calculating temperature at the HRU level allows us to downscale the gridded climate data, allowing for high-resolution spatial variability in temperature as a function of elevation. Snowpack accumulation Snow accumulates in the snowpack when precipitation falls in the form of snow. where S accum is the snow accumulation (m w.e.d −1 ), P solid is the solid precipitation falling on the HRU (m w.e.d −1 ) and T c is the threshold temperature for the conversion of rain to snow ( • C). 
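The snowmelt equations themselves are not reproduced in this text, so the sketch below follows the verbal description above using common SWAT-style forms: a degree day factor that varies sinusoidally between a maximum on 21 June and a minimum on 21 December, a lagged snowpack temperature, and a lapse-rate correction of the forcing temperature to the HRU elevation. The parameter values and the exact functional forms are assumptions for illustration, not the calibrated values.

```python
import numpy as np

def seasonal_ddf(day_of_year, ddf_max, ddf_mult):
    """Degree day factor varying sinusoidally over the year, peaking around
    21 June (day ~172) and reaching its minimum around 21 December.
    One common form (cf. SWAT); assumed here since the exact equation is not given."""
    ddf_min = ddf_max * ddf_mult
    return 0.5 * (ddf_max + ddf_min) + 0.5 * (ddf_max - ddf_min) * np.sin(
        2.0 * np.pi / 365.0 * (day_of_year - 81))

def snowmelt_step(t_air_forcing, t_snow_prev, snow_depth, day_of_year,
                  e_hru, e_climate, lapse=0.0065, l_snow=0.5,
                  ddf_max=0.004, ddf_mult=0.5, t_melt=0.0):
    """One daily degree-day melt step for a single HRU (illustrative only)."""
    # Lapse-rate correction of the forcing temperature to the HRU elevation
    # (sign convention: temperature decreases with increasing elevation).
    t_hru = t_air_forcing - lapse * (e_hru - e_climate)
    # Lagged snowpack temperature: l_snow = 1 makes it track the air temperature exactly.
    t_snow = (1.0 - l_snow) * t_snow_prev + l_snow * t_hru
    # Degree-day melt, limited so it cannot exceed the snow that is present.
    melt = max(0.0, seasonal_ddf(day_of_year, ddf_max, ddf_mult) * (t_snow - t_melt))
    melt = min(melt, snow_depth)
    return t_snow, snow_depth - melt, melt   # updated temperature, depth and melt (m w.e.)
```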
A spatially uniform rainfall and snowfall correction factor is applied to the precipitation based on the equations used in the HBV-ETH model (Mayr et al., 2013).The correction is applied because gridded datasets often underestimate precipitation in mountainous regions due to the lack of meteorological stations at high elevations and the fact that observations of solid precipitation are susceptible to undercatch due to windy conditions (Rasmussen et al., 2012).Wortmann et al. ( 2019) demonstrated using the Soil and Water Integrated Model-Glacier Dynamics (SWIM-G) model that the APHRODITE precipitation (used in this study) needed to be increased by a factor of 1.5-4.3 to maintain observed glacier area and mass balances.Solid and liquid precipitation are scaled separately: where P s and P l is the scaled solid and liquid precipitation (m d −1 ), P 0 is the forcing precipitation (m d −1 ), S c and R c are dimensionless snowfall and rainfall correction factors.Solid and liquid precipitation are lapse rate corrected for elevation using a linear precipitation gradient.The solid precipitation P solid falling on the HRU (m d −1 ) is where λ precip is the precipitation lapse rate (% m −1 ).The liquid precipitation falling on the HRU P liquid (m d −1 ) Rain falling on a snow-or ice-covered HRU is passed to the root zone. Snowpack sublimation The quantity of snow sublimated is calculated using the potential evapotranspiration which is provided as an input forcing dataset.The parameter E sub is used to reduce the potential evapotranspiration over snow surfaces.Sublimation is then set equal to the reduced PET, and no PET hru is passed to the root zone. S sublim is the snow sublimation (m w.e.d −1 ), PET hru is the forcing data potential evapotranspiration (m d −1 ) and E sub is a parameter to be calibrated that reduces the evapotranspiration over snow-covered HRUs.Using PET to approximate snow sublimation has also been implemented in the SWAT model (Fontaine et al., 2002). Glacier model The daily glacier depth is where G accum , G melt , and G sublim are the glacier accumulation, melt and sublimation components at time step t and G 0 depth is the glacier depth at the previous time step. Glacier melting When the snowpack has melted and the glacier ice is exposed, melting can occur.The amount of glacier ice melted G melt (m w.e.d −1 ) is where ddf ice is the degree day factor for ice melting (m w.e.• C −1 d −1 ), T glacier is the temperature of the glacier ice ( • C), T melt is the melt temperature which is set to 0 • C. The degree day factor for ice is calculated from the degree day factor for snow by multiplying by a scaling parameter ice mult .The scaling parameter increases the degree day factor for ice relative to snow.Ice generally melts more per degree day than snow because it has a lower albedo. Glacier temperature is related to the air temperature using a lag factor where T glacier is the glacier temperature ( • C), T 0 is the glacier temperature at the previous time step ( • C), T hru is the lapserate-corrected temperature of the HRU ( • C), l glacier is the dimensionless lag factor for ice.The lag factor for glacier ice is found by multiplying the lag factor for snow by a scale factor l ice mult .This reduces the temperature lag factor for ice relative to snow, because ice responds more slowly to the air temperature than snow. 
l glacier = l snow l ice mult (15) Glacier accumulation Glacier accumulation is calculated by transforming a fraction of the snowpack into glacier ice.This is a simple way https://doi.org/10.5194/hess-27-453-2023Hydrol.Earth Syst.Sci., 27, 453-480, 2023 of converting snow into ice without including more complex processes such as the densification and compaction of snow grains under the force of gravity.Glacier accumulation is where (β) is the basal turnover coefficient (d −1 ).This represents the fraction of snow that is removed from the snowpack and converted into ice every time step.The minimum parameter range for β is 1 year (2.74×10 −3 d −1 ), which means that it takes 1 year for all of the snowpack to be converted into ice.The upper bound is 100 years (2.74 × 10 −4 d −1 ).The upper range is based on observations of the age of ice at the firn-ice transition depth for different glaciers (Paterson, 1994). Glacier sublimation Sublimation occurs when the snowpack has disappeared and the glacier ice is exposed.We assume that the reduction in P ET over snow and ice surfaces is the same. where E sub is used to reduce the potential evapotranspiration over ice and snow HRUs.Sublimation is set to the reduced potential evapotranspiration value. Snowmelt and glacier melt contributions to streamflow Water from snow and glacier melting is added to the precipitation field and routed through the model to simulate river discharge. where P total is the total water input to the catchment, P liquid is the liquid precipitation on the HRU, S melt is the snowmelt contribution and G melt , each with units (m w.e.d −1 ).See Table 3 for a list of the model parameters that are calibrated. 3 Model evaluation Input data for DECIPHeR DECIPHeR requires a digital elevation model and information on the locations of gauging stations.Catchment elevation data are provided by the Multi-Error-Removed Improved-Terrain (MERIT) digital elevation model (Yamazaki et al., 2017).The DEM has a spatial resolution of 3 arcsec (∼ 90 m at the Equator) and is pre-processed to remove any sinks or flat areas.Therefore we assume that all of the catchment area flows to the gauging outlets. The location of the gauging stations and monthly discharge observations come from the Global Runoff Data Centre (GRDC).There are six gauging stations located the Naryn River catchment: Djumgol, Kekirim, Naryn, Toktogul reservoir, Aflatun, and Uch-Kurgan (Table 2).The Toktogul reservoir gauging station is located immediately upstream of the reservoir dam and so is not affected by water abstraction.Discharge at the Uch-Kurgan station after 1975 is impacted by the reservoir management.Table 1 contains a list of the input and evaluation datasets used in this study. 
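A matching sketch for the glacier store, again following the verbal description rather than the original equations: basal turnover converts a fraction β of the snowpack to ice each day, exposed ice melts with a degree day factor scaled up from the snow value, the ice temperature lags the air temperature more strongly than the snowpack, and sublimation is a reduced fraction of PET applied only when ice is exposed. All parameter values shown are placeholders, not the values in Table 3.

```python
def glacier_step(snow_depth, glacier_depth, t_glacier_prev, t_hru, pet,
                 ddf_snow_today, ice_mult=1.5, l_snow=0.5, l_ice_mult=0.5,
                 beta=2.74e-3, e_sub=0.1, t_melt=0.0):
    """One daily step of the glacier store for a single HRU (illustrative only).
    Depths are in m w.e.; ddf_snow_today comes from the seasonal snow factor."""
    # Basal turnover: a fraction of the snowpack is slowly converted into glacier ice.
    accum = beta * snow_depth
    snow_depth -= accum
    glacier_depth += accum

    # Glacier ice responds more slowly to the air temperature than snow does.
    l_glacier = l_snow * l_ice_mult
    t_glacier = (1.0 - l_glacier) * t_glacier_prev + l_glacier * t_hru

    melt = sublim = 0.0
    if snow_depth <= 0.0:                      # ice only melts or sublimates when exposed
        ddf_ice = ddf_snow_today * ice_mult    # ice melts more per degree day than snow
        melt = min(glacier_depth, max(0.0, ddf_ice * (t_glacier - t_melt)))
        glacier_depth -= melt
        sublim = min(glacier_depth, e_sub * pet)
        glacier_depth -= sublim
    return snow_depth, glacier_depth, t_glacier, melt

# Total water routed through the catchment each time step, per HRU:
# P_total = liquid precipitation + snowmelt + glacier melt.
```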
Glacier area and thickness Glacier outlines are used in the model to identify glaciated/non-glaciated HRUs and initial glacier thicknesses for each HRU.These are calculated using glacier outlines derived from Landsat Multispectral Scanner imagery for the 1970s (Kriegel et al., 2013).According to the Landsat data, 1472 glaciers existed in the catchment during the 1970s.Determining glacier thicknesses at the start of the simulation is problematic due to the lack of long-term observations.Therefore, we infer glacier thickness using the Glacier bed Topography (GlabTop2) method (Frey et al., 2014), where the glacier outlines from the 1970s and the MERIT DEM are used as input.Freely available python code to implement the GlabTop2 method was obtained from https://pypi.org/project/GlabTop2-py (last access: 17 December 2020).Glacier thicknesses are converted to units of m w.e. using an ice density of 917 kg m −3 .GlabTop2 is a useful technique to infer glacier thicknesses in the absence of observations; however, the method has some limitations that should be acknowledged.Firstly, the method uses a simple parameterization for the basal shear stress based on the elevation difference within a glacier.This can introduce a source of uncertainty into the initial ice thicknesses.Secondly, the method can produce very high ice thicknesses for locations with low slopes.Low slopes can appear in the MERIT DEM if a glacier has retreated and a glacier lake is formed or an existing lake expands.Two glaciers in the upper part of the catchment have very low slopes (flat) at the terminus and large ice thicknesses.The larger of these is Petrov Glacier which drains into the Petrov glacial lake.Observations show that Petrov Glacier has experienced accelerated retreat since the 1970s (Engel et al., 2012), and the lake area has more than doubled since 1980 (Janský et al., 2010).To correct for this, we take the mean thickness of a 5 × 5 pixel buffer upstream of the terminus for each of the two glaciers and replace the large thickness values with the fill value.Ice thicknesses before and after the fill values have been applied can be seen in the Supplement (Figs. S3 and S4,respectively).A third limitation is that the thickness estimate uses 1970s outlines, but our model simulations commence in 1951.In situ mass balance observations show that glaciers in the Tien Shan were in a quasi-stable state between the 1950s and the 1970s but experienced accelerated mass loss in the years that followed (Barandun et al., 2020;Liu and Liu, 2016).The glaciers in the aforementioned studies were located outside of the Naryn catchment; however, we assume that the mass balance trends are representative of our study region. Input climate Daily precipitation data are provided by the Asian Precipitation Highly-Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE) Russia and Monsoon Asia V1101 daily precipitation (Yatagai et al., 2012).These data have been shown to outperform other gridded precipitation datasets for hydrological modelling applications in central Asia (Malsy et al., 2015).The APHRODITE data are constructed from a network of rain gauges across Asia which are interpolated onto a 0.25 • grid.Daily air temperature, potential evaporation and elevation of the climate data come from the ERA5 back extension (ERA5 BE) 1950-1978and ERA5 1979-2007(Copernicus, 2017).These data have a spatial resolution of 0.25 • × 0.25 • . 
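Two of the small processing steps described above can be sketched directly: converting GlabTop2 ice thickness to water equivalent with an ice density of 917 kg m-3, and replacing an implausibly thick low-slope terminus cell with the mean of a surrounding buffer. The windowing below is a simplification; the paper uses a 5 x 5 buffer located upstream of the terminus for the two affected glaciers, so the indexing here is illustrative only.

```python
import numpy as np

RHO_ICE, RHO_WATER = 917.0, 1000.0

def ice_to_mwe(thickness_m_ice):
    """Convert GlabTop2 ice thickness (m of ice) to metres water equivalent."""
    return thickness_m_ice * RHO_ICE / RHO_WATER

def fill_anomalous_cell(thickness, row, col, window=5):
    """Replace one anomalously thick cell (e.g. a flat, lake-influenced terminus
    in the DEM) with the mean of the surrounding window, excluding the cell itself."""
    half = window // 2
    r0, r1 = max(0, row - half), row + half + 1
    c0, c1 = max(0, col - half), col + half + 1
    patch = thickness[r0:r1, c0:c1].astype(float)
    patch[row - r0, col - c0] = np.nan        # exclude the anomalous cell from the mean
    filled = thickness.astype(float).copy()
    filled[row, col] = np.nanmean(patch)
    return filled
```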
Calculation of hydrological response units HRUs are calculated by categorizing the catchment according to the following. -81 elevation ranges consisting of 0-2000 m in increments of 100 and 2000-5200 m in increments of 50 m.These elevation bands are selected to downscale the temperature and precipitation which is used for snow and glacier model.The finer 50 m bands are used between elevations 2000 and 5200 m because glaciers are located within these elevation ranges in the catchment. -Dividing the non-glacierized parts of the catchment into three equally sized surface slope and accumulated area fractions.This results in HRUs that cascade down to the valley bottom. -Spatially varying precipitation, temperature and potential evapotranspiration.This provides the model with a regionally varying climate forcing. -Glacier mask which is used to determine if a HRU is initially glacierized or non-glacierized. This categorization results in the following variable spatial resolution characteristics: a mean area of 0.94 km 2 , median 0.27 km 2 , minimum 0.0055 km 2 and maximum 259.93 km 2 per HRU.(See Fig. S5 for a histogram of the areal distribution of the HRUs.)The total number of HRUs in the catchment is 61 481. Initialization and spin-up The initial snowpack depth is set to 0 m w.e., and its temperature is set to 0 to −5 • C. The calculation of the initial glacier thicknesses using the GlabTop2 method is described in Sect.3.2.The model is spun up by repeating the first simulation year 1951 for 10 years.The spin-up time period is found by performing an idealized experiment in which precipitation forcing is kept constant through time and the temperature forcing is set to less than 0 • C to ensure there is no snow or ice melting.Under these conditions, the discharge reaches an equilibrium in approximately 10 years. Calibration and validation of discharge The model is calibrated for the period 1951-1970 to determine behavioural parameter sets, and then these are validated for the period 1971 to a variable end date between 1980 and 1995 depending on the availability of discharge observations at each gauging station.The calibration period is prior to the commissioning of the Toktogul reservoir in 1976 which affected the discharge at Uch-Kurgan.The model is run on a daily time step and monthly simulated discharge is calculated from daily values.Twelve additional parameters are added to DECIPHeR for the snow and glacier scheme. The parameters and their minimum and maximum sampling ranges are listed in Table 3.In total, 150 100 simulations using parameter combinations selected using Latin hypercube sampling are run (McKay et al., 1979).This form of stratified sampling is a more efficient and structured approach to generating a "near-random" sample for large multi-dimensional problems such as this.Model performance is assessed using the six evaluation metrics described in Sect.3.7 below.The best 0.5 % performing simulations (see the section below for the methods) in the calibration period are used in the validation. 
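The stratified parameter sampling described above can be generated with the Latin hypercube sampler in SciPy. The parameter names and ranges below are placeholders for illustration and are not the ranges listed in Table 3; the full calibration samples all twelve snow and glacier parameters together with the DECIPHeR hydrological parameters.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative subset of parameters and sampling ranges (placeholders, not Table 3).
names = ["ddf_max", "ddf_mult", "l_snow", "ice_mult", "lapse_temp", "precip_grad"]
lower = np.array([1e-3, 0.1, 0.05, 1.0, 0.004, 0.00])
upper = np.array([8e-3, 1.0, 1.00, 3.0, 0.009, 0.10])

sampler = qmc.LatinHypercube(d=len(names), seed=42)
unit_sample = sampler.random(n=150_100)             # stratified sample on [0, 1)^d
param_sets = qmc.scale(unit_sample, lower, upper)   # rescale to the parameter ranges

print(param_sets.shape)   # (150100, 6): one parameter set per model run
```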
Generalized likelihood uncertainty estimation (GLUE) GLUE is a technique used to identify parameter sets which provide a good representation of the system (Beven and Binley, 1992;Freer et al., 1996;Beven, 2006).The approach assumes that there is no single optimum parameter that can be considered correct, but instead there is a set of "behavioural models" that describe the system equally well.In this study, we want to select models that perform well in simulating seasonal changes in discharge.The onset of snow melting affects the discharge during the spring, whereas peak discharge during the summer is affected by glacier melting.It is equally important to ensure that the model performs well during the autumn and winter.Station observations show that the warming in the Naryn basin over the period 1960-2007 occurred primarily during autumn and winter (Kriegel et al., 2013). Warming winter temperatures cause a reduction in snowpack accumulation when more precipitation falls in the form of rain rather than snow.This has an impact on the autumn and winter hydrograph.Therefore we use the following metrics to assess the model performance. The Nash-Sutcliffe efficiency (NSE) is used to evaluate high flows and the timing of peak discharge, particularly from glacier melting during the summer.NSE values range between −∞ and 1 where NSE = 1 is the optimal value. Q obs i and Q sim i is observed and simulated monthly discharge, Q obs i is the mean observed monthly discharge and n is the number of observations. The bias in runoff ratio PBIAS is a measures of the tendency of the simulated data to be larger or smaller than the observations (Yilmaz et al., 2008).A negative PBIAS indicates the model underestimates the discharge and a positive bias indicates the model overestimates the discharge.PBIAS close to zero indicates better model performance. The ratio of the root mean square error to the standard deviation of the observations (RSR) (Moriasi et al., 2007) is used to evaluate the model performance for the four seasons, and therefore this provides four separate evaluation metrics (bringing the total to six). where i is discharge for the months of spring (March, April, May), summer (June, July, August), autumn (September, October, November) and winter (December, January, February).Q obs i is the mean of the observed discharge.Better parameter sets have lower values of RSR.The units of RSR are dimensionless which makes it useful for comparing the model performance between the sub-catchments and with other studies. In this study two different calibration techniques are tested. 1. ISC: individual-site calibration to find parameter sets best suited to individual sub-catchments.This approach parameterizes areas upstream of a gauge in a lumped way.This is different to the step-wise approach where each upstream to downstream catchment is calibrated sequentially, resulting in spatial differences between upstream-downstream parameters.Nonetheless, the ISC method allows us to identify the spatial variability in parameters across the catchment. 2. MSC: multi-site calibration to find global parameters sets suited to the entire catchment. 
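The six evaluation metrics named above can be written compactly as below, using the standard definitions that match the verbal descriptions (the equations themselves are not reproduced in this text): NSE for high flows and peak timing, PBIAS for the overall volume bias, and RSR computed separately for each season.

```python
import numpy as np

def nse(q_obs, q_sim):
    """Nash-Sutcliffe efficiency: 1 is optimal; values <= 0 are rejected later."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def pbias(q_obs, q_sim):
    """Percent bias: positive values mean the model overestimates discharge."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 100.0 * np.sum(q_sim - q_obs) / np.sum(q_obs)

def rsr(q_obs, q_sim):
    """RMSE divided by the standard deviation of the observations (dimensionless)."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    rmse = np.sqrt(np.mean((q_obs - q_sim) ** 2))
    return rmse / q_obs.std()

def seasonal_rsr(q_obs, q_sim, months):
    """RSR evaluated separately for winter, spring, summer and autumn months."""
    months = np.asarray(months)
    seasons = {"winter": (12, 1, 2), "spring": (3, 4, 5),
               "summer": (6, 7, 8), "autumn": (9, 10, 11)}
    return {name: rsr(np.asarray(q_obs)[np.isin(months, m)],
                      np.asarray(q_sim)[np.isin(months, m)])
            for name, m in seasons.items()}
```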
The purpose of using two approaches is to investigate whether there are parts of the catchment that behave differently to the entire catchment.The results of ISC are included in the main body of the paper and the results of MSC are detailed in the Supplement.Model performance is assessed by calculating conditional probabilities from the six metrics described above.The final conditional probability values are combined with an equal weighting to give overall model performance, where higher conditional probability values (after all the normalization steps) indicate better model performance.Prior to calculating the conditional probability values, NSE values less than zero are set to zero.By doing so, we reject these simulations because they will have a zero conditional probability.The NSE is adjusted such that The metrics NSE rev , |PBIAS| and RSR season are normalized so that values vary from 0 (good performance) to 1 (poor performance).This is to account for the difference in units between the metrics and the fact that the upper bounds values for the metrics are different.PBIAS has units of percent and RSR season and NSE rev have dimensionless units.The upper bound value for NSE rev and RSR season is 1, whilst PBIAS can have an upper bound value exceeding 1.The above metrics are then normalized so that they are all on a scale of 0 to 1: where M c,i,m is the metric m at catchment c for simulation i and I = (1, 2, . . ., n) are the vector of simulations from 1 to the number of simulations n. max(M c,I,m ) and min(M c,I,m ) are the maximum and minimum values of the metrics across all simulations for each catchment.Once normalized, the values for each simulation, for each metric, and for each catchment are then calculated as https://doi.org/10.5194/hess-27-453-2023Hydrol.Earth Syst.Sci., 27, 453-480, 2023 so that a higher value (between 0 and 1) reflects a better simulation for that metric.Finally the conditional probability values are then calculated so that for each metric and for each catchment all the simulations sum to 1. where n is the number of simulations.For the ISC method, a combined conditional probability measure is calculated for each sub-catchment ( c,i ) by multiplying the conditional probability measures derived from the six metrics.We assume the metrics contribute equally to the overall model performance. For the MSC method, catchment-wide conditional probability measures are calculated by multiplying c,i for each subcatchment.We assume that the sub-catchments contribute equally to model performance. Simulations are ranked in order of descending conditional probability measure, where maximum values indicate good model performance.The best performing 0.5 % calibration simulations (n = 751) are extracted and used to validate the discharge and spatial snow extent. Individual site calibration and validation Figures 4 and 5 show the simulated and observed discharge for the calibration and validation periods using the best 0.5 % simulations.The ranges of values for the performance metrics are listed in Table 4.For the calibration period the model is able to capture the seasonal peaks in discharge well, with NSE values 0.74 < NSE < 0.87 for the 5th-95th prediction limits and PBIAS values lower than 11.36 % at all the gauging stations.RSR values can be considered "satisfactory" (RSR < 0.7) during the winter, spring and autumn; however, some RSR values exceed 0.7 in the summer (June, July, August) indicating poorer model performance when glacier melting is active. 
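Referring back to the likelihood combination described above, a compact sketch of the normalization and conditional-probability steps is given below: each metric is min-max normalized across simulations, flipped so that higher is better, rescaled so the simulations sum to one per metric, and the per-metric values are then multiplied with equal weighting. The array layout, and the use of 1 - NSE as the reversed NSE metric, are assumptions made for illustration.

```python
import numpy as np

def combined_likelihood(metric_matrix):
    """GLUE-style combination sketch for one catchment.

    metric_matrix has shape (n_simulations, n_metrics) and holds the reversed NSE
    (assumed here to be 1 - NSE, with NSE < 0 set to 0 beforehand), |PBIAS| and the
    four seasonal RSR values, i.e. lower values mean better performance."""
    m = np.asarray(metric_matrix, float)
    # Min-max normalize each metric across all simulations to [0 (good), 1 (poor)].
    norm = (m - m.min(axis=0)) / (m.max(axis=0) - m.min(axis=0))
    # Flip so that 1 reflects a better simulation, then rescale each metric so the
    # values over all simulations sum to 1 (a conditional probability per metric).
    good = 1.0 - norm
    cond = good / good.sum(axis=0)
    # Equal weighting: multiply the per-metric probabilities for each simulation.
    return np.prod(cond, axis=1)

# Rank simulations by the combined measure and keep the best 0.5 %:
# idx = np.argsort(combined)[::-1][: int(0.005 * len(combined))]
```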
For the validation period the model also performs well NSE values 0.7 < NSE < 0.9 for the 5th-95th prediction limits, with the notable exception of the Uch-Kurgan station where discharge is overestimated by up to 32 %.The discrepancy is most noticeable during the summers of 1987 and 1988.The model simulates the observed peak discharge during these years well at the Toktogul reservoir gauging station, which is located upstream of the reservoir; however, downstream of the reservoir at Uch-Kurgan, discharge is overestimated.Observations from the Central Asian Waterinfo Database shows that the reservoir inflow was very high during these 2 years, which coincided with a sharp increase in the reservoir volume by 7253 million m 3 from August 1987 to August 1988 (Fig. S6).Initial and calibrated parameter ranges for each sub-catchment for the best 0.5 % of ensemble are listed in Table S3. Multi-site calibration and validation Simulated discharge for the calibration and validation periods is shown in Figs.S7 and S8, and performance metrics are listed in Table S1 in the Supplement.There is a degradation in model performance when the MSC method is used to calibrate the model (see Table S1).The reduction in model performance is most noticeable for the Naryn sub-catchment, where the best NSE is 0.91 for the ISC method and 0.59 for the MSC method.Furthermore, the summer peaks in discharge at the Naryn sub-catchment are underpredicted when using the MSC method to select global parameters (Fig. S8).This suggests that the global catchment parameters are not well suited to the Naryn sub-catchment.Simulations that perform well in the sub-catchments (Figs.S10-S15) favour higher values for the precipitation lapse rates, in contrast to the global catchment parameters which range from 1 % 100 m −1 to 10 % 100 m −1 (Fig. S9).This is visible in Table S2 which summarizes the range of precipitation lapse rates for the 10 bestperforming simulations for each sub-catchment.The upper values for the precipitation lapse varies between 16 % 100 m −1 and 24 % 100 m −1 depending on the subcatchment, which is higher than the global catchment upper bound of 10 % 100 m −1 .Simulations also perform better in the sub-catchments when higher values for the sublimation factor E sub are used, in contrast to the global values (0.005-0.2).The 10 best E sub parameter ranges are also listed in Table S2.The upper bound values for E sub vary between 0.6 and 1.0, depending on the sub-catchment, which is higher than 0.2 predicted by the global catchment values.E sub controls the reduction in P ET over snow and ice surfaces.This indicates that discharge in the Naryn is predicted better when PET is reduced.The PET has not been adjusted for elevation, unlike the air temperature and precipitation.The ERA5 PET has a 0.25 • spatial resolution, which is much larger than the HRU areas.In mountainous regions PET should decrease with height as a consequence of decreasing temperatures (Lambert and Chitrakar, 1989). We used wide parameter ranges to calibrate the model because this is the first time applying DECIPHeR in a mountainous region, with snowmelt and glacier melt processes included.The dotty plots (see Fig. S9) show that the sampled parameter ranges could be further reduced for two of the parameters; the lateral saturated transmissivity (ln(T 0 )) and the rainfall correction factor (R c ).The ln(T 0 ) calibration range is −20 to 20 ln(m 2 h −1 ); however, the best simulations have values that are predominately clustered around −7 and 0. 
R c is calibrated between 0.8 and 3, but the best simulations have values of less than 2. To explore whether any of the parameters are correlated, a plot of the coefficient of determination (r²) for parameter pairs is shown in Fig. S16 for the best 0.5 % calibration experiments. The strongest correlation is found between the precipitation lapse rate and the snowfall correction factor (r² = 0.46), which both control the quantity of snow accumulation. The correlation indicates that good simulations can be produced with higher values of snowfall correction factor combined with lower precipitation lapse rates or vice versa.

Validation of the glaciated area against Landsat observations

The simulated catchment-wide glaciated area is compared to the Landsat-derived glaciated area (Kriegel et al., 2013) for the best 0.5 % calibration simulations. The Landsat glaciated area is available for three time periods: the 1970s (1972-1977), which were used to calculate initial glacier thicknesses using GlabTop2, the late 1990s (1998-2000) and the mid-2000s (2002-2007). The observations have an uncertainty bound associated with the delineation process, which was calculated by placing a buffer of approximately 1 pixel wide (for example, 79 m for observations in the 1970s) around the glacier polygons. The uncertainty is the difference between the glaciated area and the area extended by the buffer. Figure 6 shows the simulated glaciated area for the top 0.5 % simulations in the calibration overlaid with the Landsat observations. The simulated glaciated area overlaps the Landsat observations in the late 1990s and mid-2000s. The model produces a large range of estimates for the glaciated area (680-1196 km²) (5th-95th percentile limits) at the end of the simulation period. This range is larger than the observed uncertainty range of 903-948 km². The uncertainty range in the model is 516 km² (in 2007), which is more than 10 times greater than the uncertainty in the observed glaciated area (46 km²).

Figure 7 shows the initial glacier thickness and the mean annual thickness at the end of the simulation period in 2007 for a region in the upper part of the catchment. A thinning of the glaciers and retreat of the terminus from the initial glacier outlines in 1970 is evident. The plots show the median ensemble member of the best 0.5 % calibration simulations.
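The buffer-based uncertainty estimate described above can be illustrated with a small sketch (our own illustration, not code from the study): each glacier outline is widened by roughly one pixel and the extra area is taken as the delineation uncertainty.

```python
from shapely.geometry import Polygon

def delineation_uncertainty(glacier_polygons, pixel_size_m=79.0):
    """Return (area_km2, uncertainty_km2) for a list of shapely glacier outlines.

    The uncertainty is the extra area gained when every outline is buffered
    by ~1 pixel, mimicking the Landsat-based estimate described in the text.
    """
    area = sum(p.area for p in glacier_polygons)
    buffered = sum(p.buffer(pixel_size_m).area for p in glacier_polygons)
    return area / 1e6, (buffered - area) / 1e6

# Example with a single toy square "glacier" of 1 km x 1 km:
toy = [Polygon([(0, 0), (1000, 0), (1000, 1000), (0, 1000)])]
print(delineation_uncertainty(toy))  # approximately (1.0, 0.34) km^2
```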
Validation of modelled snow extent against MODIS observations We evaluate the simulated spatial distribution of snow extent against MODIS 500 m resolution 8 d snow cover extent MOD10A2 version 6 (Hall and Riggs, 2016) for the period 2001-2007.Snow extent is an internal hydrological variable that was not used in the calibration and is complementary to the discharge time series, which is spatially integrated.MODIS snow extent does not contain information on the snow water equivalent; however, it is spatially distributed, which makes it particularly useful for the evaluation of distributed models (Duethmann et al., 2014;Finger et al., 2011).Daily simulated snow depth for each HRU is output for three simulations from the best 0.5 % calibration simulations: 50th (median), 5th and 95th ensemble members.It was not feasible to output daily HRU snow depth for all 751 simulations due to the size of the data.Snow depth is converted from metres of water equivalent to metres using a density of 300 kg m −3 , which is the mean density of settled snow.A spatial map of snow depth is generated by converting the snow depths on HRUs to a spatial grid using a map of HRU locations which was generated by the digital terrain analysis. The MODIS sensor detects snow in a pixel if there is any snow present within an 8 d period.To compare this to the model output, we assume snow is present in the model if the snow depth exceeds 1 cm on any day within the 8 d MODIS observational period.This threshold is used because MODIS can begin to detect snow with an accuracy 40 % (Pu et al., 2007) at this depth.Modelled snow extent is interpolated onto a 500 m grid for direct comparison with the MODIS data.Pixels where the MODIS data detect cloud cover are excluded from the analysis.Seasonal MODIS and simulated snow extent is calculated from the weekly data by selecting all the weeks that occur during a season and finding the most frequent state (i.e.snow or no-snow) for each pixel. For each season, a binary classification scheme is used to enable a comparison between the MODIS and modelled seasonal snow extent.The classification scheme has been used in flood hazard modelling to validate simulated and observed flood hazard area maps (Wing et al., 2017).Four metrics of fit are used, which categorize the relative number of pixels which conform to one of the states in the contingency table (Table 5). The first is the hit rate (H ), which is the proportion of snow pixels in the MODIS data that were reproduced by the model. H can range from 0 (none of the snow pixels in the MODIS data are snow pixels in the model data) to 1 (all of the snow pixels in the MODIS data are snow pixels in the model data). The second metric is false alarm ratio (F ), which indicates the proportion of snow pixels in the modelled that are not snow in the MODIS data. F can have values ranging from 0 (no false alarms) to 1 (all false alarms).F evaluates the tendency of the model to overpredict the snow extent. The third is the critical success index (C), which evaluates both overprediction and underprediction in the model and can range from 0 (no match between modelled and MODIS data) to 1 (perfect match between modelled and MODIS data). The fourth is the error bias (E) which evaluates the tendency of the model to underpredict or overpredict snow extent. The spatial distribution of the hit rates, misses and false alarms are shown in Fig. 
8 for the median (50th percentile simulation) of the best 0.5 % calibration runs.(See Figs.S17 and S18 for the 5th and 95th percentile limit simulations.)Seasonal hit rates, misses and false alarms averaged over the years of MODIS observations 2001-2007 are summarized in Table 6.Seasonal snow extent is predicted reasonably well with mean hit rates exceeding 0.86 (median ensemble member).The model captures the complete snow cover observed in winter and the snow that persists at high elevations in the upper part of the catchment in the summer.Most noticeable is the poorer model performance in autumn, where there is a large positive bias (33.53 % median ensemble member) and a high number of false alarms (0.42 median ensemble member).This indicates that the model is overpredicting the snow extent in autumn.The best 0.5 % calibration simulations produce good estimates for discharge but at the same time predict a range of snow extent values.This can be seen in the fraction of the catchment covered in snow (Fig. 9) where NSE values range from 0.78 to 0.89 (95th-5th percentile limits simulations), with most of the model uncertainty occurring in the winter. Discharge components and timing Adding the snow and glacier model to DECIPHeR makes it possible to disentangle the relative contributions of snow melting, glacier melting and rainfall to the total runoff and to determine the timing of when each component influences river flow.It is important to establish these present-day baseline conditions because these will change under future climate change scenarios.Figure 10 shows the percentage contribution of snow melting, glacier melting and rainfall to the total annual runoff averaged over the years 1951-2007.The discharge components are calculated using the 0.5 % best calibration parameters for the Uch-Kurgan station located at the outlet of the catchment.Snow melting is the largest contributor, consisting of 41 %-91 %, followed by the rain component (8 %-43 %) and the glacier component (0 %-15 %), where the ranges represent the 5th-95th percentile simulations across all the gauging stations.The glacier melt contribution is largest for the Naryn sub-catchment, comprising 4 %-15 % of the annual discharge.This is the headwater subcatchment located in the Tien Shan mountains and has the largest glaciated area.In contrast, the glacier melting contribution at Aflatun is zero because this sub-catchment contains no glaciers.Figure 10 shows that the rainfall component is larger at the 95th percentile simulations than at the 5th and 50th percentile simulations.This is because the lapse rate at the 95th percentile simulations is higher (22 % 100 m −1 ) than at the 5th (1 % 100 m −1 ) and 50th (6 % 100 m −1 ) percentile simulations. To determine whether there have been any statistically significant changes in the simulated discharge components since 1951, we do a trend detection analysis using a Mann-Kendall test and Sen's slope estimator with a significance level of 5 % on the 50th percentile discharge predictions (black line in Fig. 
11). A trend detection is also run on the annual mean air temperature, potential evapotranspiration, precipitation and observed annual mean discharge. The trends are listed in Table 7. Despite a warming of 0.12-0.2 °C per decade, we only see a statistically significant trend in the observed discharge at Uch-Kurgan, which is affected by the management of the Toktogul reservoir, and at Ust Kekirim. There are no statistically significant trends in the glacier melt fraction and only small positive trends in the snowmelt and negative trends in the rainfall fractions of less than 1 % per decade at some of the gauging stations. The glacier melt and snowmelt fractions exhibit an anti-correlation, which happens because the glacier melting occurs when the snowpack is depleted and the ice is exposed (Fig. 11).

Monthly hydrographs averaged over the period 1951-2007 show that discharge from snow melting peaks in the spring (April and May) (Fig. 12). Peak discharge from glacier melting happens later during the summer (June, July, August and September) after the snowpack has melted and glacier surfaces are exposed. This seasonal signal is seen at all the gauging stations except for Aflatun, where there are no glaciers. The glacier melt contribution is very high in August, where the upper range is 66 % for the Naryn sub-catchment and 41 % for Ust Kekirim. The percentage contributions of snowmelt, glacier melt and precipitation to the monthly runoff are listed in Table S4.

Discussion

In this paper we added a snowmelt and glacier melt model to the DECIPHeR hydrological model and demonstrated that the model performs well in predicting discharge and the spatial distribution of snow cover when compared to MODIS-observed snow extent.

Figure 8. Spatial distribution of the hits, misses and false alarms between the simulated snow extent (median 50th percentile simulation) and MODIS snow extent for the year 2002. Hits, misses and false alarms are defined in Table 5.

This updated version of the DECIPHeR model is suitable for simulating discharge in large glacier- and snow-fed catchments at a high spatial resolution whilst maintaining the ability to include model uncertainty in the simulated discharge. Snow and glacier melting is modelled using the degree day approach, which requires only air temperature as an additional forcing input. We used the model to calculate the relative contributions of snow, rain and glacier melt to the annual runoff. We found spatial variability in the relative contributions of each of the components. For the entire catchment (gauging station at Uch-Kurgan) the 50th percentile contributions are snow (89 %), rain (9 %) and glacier melting (2 %). These estimates are broadly consistent with Armstrong et al. (2019), who used MODIS imagery and degree day melt modelling to partition the runoff components in the Syr Darya River. Armstrong et al. (2019) found that the runoff comprised snow (74 %), rain (23 %) and glacier melting (2 %). Our estimates are slightly higher for the snowmelt contribution; however, our study focuses on the upper reaches of the Syr Darya River, where the snowmelt is more likely to dominate the discharge.
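As a side note on the trend analysis reported above (Mann-Kendall test with Sen's slope estimator at a 5 % significance level, Table 7), this kind of test can be sketched with standard SciPy tools; the snippet below is our own illustrative code, not the study's implementation, and uses Kendall's tau as a stand-in for the classical Mann-Kendall statistic.

```python
import numpy as np
from scipy import stats

def trend(series, years, alpha=0.05):
    """Mann-Kendall-style monotonic-trend test plus Sen's (Theil-Sen) slope.

    series: annual values (e.g. the snowmelt fraction of runoff), years: same length.
    Returns (slope per decade, p-value, significant?).
    """
    tau, p_value = stats.kendalltau(years, series)               # monotonic-trend test
    slope, intercept, lo, hi = stats.theilslopes(series, years)  # robust slope per year
    return 10.0 * slope, p_value, p_value <= alpha

# Toy example: a weak upward trend with noise over 1951-2007.
rng = np.random.default_rng(0)
years = np.arange(1951, 2008)
frac = 0.6 + 0.0005 * (years - 1951) + rng.normal(0, 0.02, years.size)
print(trend(frac, years))
```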
Snow melting is the dominant component of the runoff at the six gauging stations. Throughout the Tien Shan, long-term hydrological records of the former USSR show that snowmelt is the dominant source of runoff (Aizen et al., 1995). Further upstream in the Naryn sub-catchment the glacial melt contribution to the annual discharge is higher (4 %-15 %) than at Uch-Kurgan. Our upper estimate (15 %) is slightly lower than a study by Saks et al. (2022), who calculated that 23 % of the runoff originates from glacier melting in the upper Naryn River. A possible explanation for why our estimate is lower is that our simulation period starts 30 years earlier (1951) than the study by Saks et al. (2022), which started in 1981.

In this study we set the behavioural models to the best 0.5 % simulations in the ensemble. This is an unconventional way of selecting behavioural models; however, it was important in our analysis to rank models according to their ability to capture seasonal discharge, particularly from spring snowmelt and summer glacier melt. Often behavioural models are selected using threshold values for guideline metrics. These metrics are calculated over the complete discharge time series, rather than for individual seasons. For example, metrics from Moriasi et al. (2007) are commonly used in the literature to categorize "acceptable", "good" or "very good" simulations based on threshold values for NSE, PBIAS and RSR. Metrics calculated over the complete discharge time series are not a strong test of the model's ability to predict seasonal discharge. To our knowledge, there are no standardized guideline thresholds in the literature for seasonal metrics, therefore we selected the best 0.5 % of the ensemble. If we decided to define the behavioural models using a threshold for the seasonal RSR, then this would also be based on an arbitrary choice of value. A high threshold for seasonal RSR would be required to categorize the behavioural models because the summer values are high (see RSR JJA in Table 4).

We explored the impact of selecting alternative threshold values (1 %, 2.5 %, 5 % and 10 %) on the calibrated NSE values (Fig. S24). To obtain NSE values > 0.7 at all the gauging stations requires a threshold smaller than 1 %. This is notable at the Alfatun station, where the NSE value at the 0.5 % threshold is 0.74 but reduces to 0.63 at the 1 % threshold (95th percentile limit values). Figure S24 also shows how the uncertainty in the NSE values increases for higher threshold values. At the 10 % threshold the uncertainties in the NSE values are much larger than at the 0.5 % threshold.
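The threshold-sensitivity check just described can be expressed compactly; the sketch below (our own names, building on the ranking function sketched earlier) simply varies the retained fraction of the ranked ensemble and summarizes the resulting spread of NSE values.

```python
import numpy as np

def nse_spread_vs_threshold(combined_measure, nse_values,
                            fractions=(0.005, 0.01, 0.025, 0.05, 0.10)):
    """For each retained fraction of the ranked ensemble, report the
    5th/50th/95th percentiles of NSE among the retained simulations."""
    combined_measure = np.asarray(combined_measure, float)
    nse_values = np.asarray(nse_values, float)
    order = np.argsort(combined_measure)[::-1]
    out = {}
    for f in fractions:
        keep = order[: max(1, int(np.ceil(f * order.size)))]
        out[f] = np.percentile(nse_values[keep], [5, 50, 95])
    return out
```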
In the Naryn sub-catchment, which is the most upstream catchment located at elevations predominantly exceeding 3000 m, the model performed best when the E sub parameter values are high. E sub reduces PET over snow and ice surfaces. This suggests that to improve the model at high elevations an orographic adjustment for PET is required. Currently the model uses surface values for PET; however, in practice PET decreases with elevation because of temperature cooling with height. Future work would calculate PET at the HRU level using an empirically derived relationship with temperature (Xie and Wang, 2020; Oudin et al., 2005). This type of parameterization would use the lapse-rate-adjusted HRU temperature calculated in the model. Alternatively, PET could be calculated using the Penman-Monteith (PM) method recommended by the Food and Agriculture Organization (Allen et al., 1998). This would require additional inputs such as solar radiation, wind speed and humidity. Nevertheless, the Penman-Monteith approach could potentially be more appropriate for high mountainous regions if the orographic increase in wind speed is also included.

Our simulated discharge and glaciated areas are presented with uncertainty bounds because many processes in the model are represented in a simplified way, leading to uncertainty in the predictions. Likewise, forcing data in mountainous regions are often very sparse. We can see, for example, that the gauging stations used to derive the APHRODITE precipitation are sparsely located and not homogeneously distributed across the catchment (see Fig. S19l). This leads to the requirement to calibrate snowfall and rainfall correction factors as they may vary across the catchment. We found that, when calibrating the discharge, the RSR values were higher in the summer than during the other seasons, suggesting that improvements to the glacier model may improve the simulated summer discharge. One of the key missing processes is the role of permafrost, which affects the upper part of the Naryn catchment. Barandun et al. (2020) showed that permafrost in the western Tien Shan is continuous above 3800 m, discontinuous between 3800 and 3600 m and sporadic between 3600 and 3000 m, and this can influence the runoff regime in three different ways. Firstly, the impermeable (or partially impermeable) frozen surface acts as a barrier over which water flows, and this increases the speed of the runoff from snow and ice melting during the spring and summer. Secondly, every summer the thawing of ice-rich permafrost releases water that contributes to the streamflow. Thirdly, the degradation of ice-rich permafrost due to climate warming releases additional water. Permafrost responds more slowly to climate change than snow and glacier ice due to the insulating effect of the overlying land layer. This hidden water source could provide a buffer to water loss from glacier and snow melting.

We see from the comparison with MODIS data that snow extent is overpredicted in autumn, as evidenced by a higher false alarm ratio. This may explain why the catchment-wide glaciated area predicted by the model is higher than the Landsat observation at the end of the simulation period (the median value in Fig. 6 is 988 km² and the observation is 926 ± 23 km²). It is open to question whether the overestimate in snow extent is caused by an overestimate in accumulation or an underestimate in melting. The simple degree day melt model does not account for all the complex processes that contribute to melting. Terrain aspect can have a large impact on the quantity of solar energy available for melting. Snow and glaciers on south-facing slopes receive more sunlight than north-facing slopes in the Northern Hemisphere. The effect of aspect could be included by modifying the degree day factor as a function of the slope (Immerzeel et al., 2012, 2013). Another improvement to the melt scheme would involve calibrating the temperature thresholds for glacier and snow melting. Currently the threshold temperatures for melting ice and snow are set to 0 °C. Alternative methods to calculate melting, such as using an enhanced temperature index model or a full energy balance scheme, may help to improve the predictions of snow extent in the autumn.

Future versions of the model could also consider the impact of debris cover on glacier surfaces and its effect on glacier melt. While thin debris layers decrease albedo and enhance local melt rates, once the debris layer exceeds a few centimeters, it insulates the glacier, which reduces melt rates (Fyffe et al., 2019). In addition, some debris-covered glaciers in High Mountain Asia and elsewhere undergo a transition to form rock glaciers; these contain large ice volumes and appear to be more resilient to warming than ice glaciers (Jones et al., 2021). As a result, the degree day melt and temperature lag factors could be modified in regions where debris-covered glaciers and rock glaciers are present. Information on the present-day distribution of debris-covered glaciers derived from remote sensing is available to implement this (Herreid and Pellicciotti, 2020; Scherler et al., 2018). However, similar information on rock glaciers is not yet available, and the climate and hydrological response of rock glaciers differs from that of debris-covered glaciers.

Additional developments would improve the representation of water flow through the ice and snow. Currently, when rain falls on an HRU, the water goes straight to the root zone, so we do not consider the percolation of water through the ice and snow. This would require a more complex model that includes the density and pore space of the snowpack and ice. We have not included the re-freezing of meltwater, which would increase the snowpack or ice depth, nor have we included the process by which rainfall adds warmth to the snowpack or ice, which enhances melting. Glacier flow has not been included in the model. To estimate a catchment-wide glaciated area, an arbitrary threshold of 1 mm is used to identify the presence of ice. Figure S20 shows the impact of using alternative thresholds of 1 × 10⁻⁶ m, 1 cm and 1 m. We see that the catchment-wide glaciated area is sensitive to the choice of threshold value. Including ice flow and constraining this using mass balance observations in the calibration procedure would enable us to select a realistic threshold over an arbitrary one.
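The threshold sensitivity just described is easy to illustrate; the sketch below (illustrative only, with made-up variable names and toy data) computes the catchment-wide glaciated area from per-HRU ice depths for several presence thresholds.

```python
import numpy as np

def glaciated_area_km2(ice_depth_m, hru_area_km2, threshold_m):
    """Sum the area of all HRUs whose ice depth exceeds the threshold."""
    ice_depth_m = np.asarray(ice_depth_m, float)
    hru_area_km2 = np.asarray(hru_area_km2, float)
    return hru_area_km2[ice_depth_m > threshold_m].sum()

# Example: compare the thresholds discussed in the text (1e-6 m, 1 mm, 1 cm, 1 m)
# on a synthetic set of HRUs (ice depths and areas are invented for illustration).
rng = np.random.default_rng(1)
depth = rng.exponential(scale=5.0, size=61_481) * (rng.random(61_481) < 0.02)
area = np.full(61_481, 0.9)  # assume ~0.9 km^2 per HRU (illustrative)
for thr in (1e-6, 1e-3, 1e-2, 1.0):
    print(thr, glaciated_area_km2(depth, area, thr))
```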
Glacier ice is not allowed to advance beyond the perimeter of the initial glacier outlines, meaning that the model is unsuitable for applications where glaciers are surging.Globally, glaciers have been in a state of retreat (Zemp et al., 2019); however, this has not been the case in the Pamir-Karakorum region, where observations show that glaciers have advanced (Hewitt, 2005;Gardelle et al., 2012Gardelle et al., , 2013)).Nevertheless, a more recent study by Hugonnet et al. (2021) shows that this anomalous mass gain appears to have ended.It is reasonable to assume that glaciers will only retreat for applications where the model is driven by future climate scenarios. A future study would explore the sensitivity of the model to the initial glacier and snow conditions.Glacier ice is initialized with a fixed temperature of −5 • C and snow temperature was set to 0 • C. The GlabTop2 method (Frey et al., 2014) used to calculate the initial glacier thickness contains uncertainty due to empirically derived parameterization to estimate basal shear stress.A sensitivity study would test alternative parameterizations or explore other methods.For example, the Volume and Topography Automation (VOLTA) model (Gharehchahi et al., 2020), which includes the effect of side drag on glacier thicknesses, could be used. The model does not include snow redistribution by wind and avalanches, so multi-year accumulation of snow at high elevations leads to the well-known problem of isolated "snow towers".Disregarding snow redistribution in models can not only lead to the formation of these "snow towers", but can also have an impact on the timing and magnitude of the snowmelt runoff (Freudiger et al., 2017).At the end of the simulations in 2007, 55 of the total 61 481 HRUs have daily snow depths exceeding 100 m.This might eventually have a significant impact on river discharge if the model were run over longer simulation periods."Snow towers" were found predominately in a small region located in the western part of the catchment.This can be seen in Fig. S21 showing snow depths and Fig. S22 showing a snow tower. Future work should focus on improving the calibration method to include additional observations.In this study, it was not possible to include glacier mass balance observations in the calibration because of the lack of historical observations during the period 1951-1970.The large range of glaciated areas predicted by the model at the end of the simulation period in 2007 shows that the model can make good predictions in discharge (Fig. 5) whilst simultaneously predicting a large range of estimates for glacier area (Fig. 6).This highlights the importance of including ancillary obser-vations, such as glacier mass balance, snow depth or snow extent, in the calibration to help constrain the predictions.Currently, the model performance is not sensitive to many of the calibration parameter values (Fig. 
S15).It is possible that some parameter combinations compensate for each other.For example, a high snowfall correction factor may be compensated for by a lower precipitation lapse rate.More analysis needs to be conducted on the sensitivity of the new snow and ice parameters added to DECIPHeR as part of this study, both in time and space, and the types of data that may help to constrain these parameters.Remote sensing snow products have been used to calibrate models, and studies indicated that the integration of data such as MODIS snow cover into hydrological models can improve the simulated snow cover whilst maintaining model performance with respect to runoff (Parajka and Bloschl, 2008).For example, Hong et al. (2015) integrated glacier annual mass balance observations in the calibration of a glacio-hydrological model to simulate discharge for catchments in Norway and the Himalaya.Glacier mass balance was considered so relevant that an annual mass balance observation was given a weighting of 10 000 times more than a discharge observation. We have not included the impact of water abstraction for irrigation or water storage from reservoirs in the model.The results show that a reservoir model is required to improve estimates of discharge at the Uch-Kurgan station.Our assumption is that water abstraction for irrigation is minimal compared to natural streamflow.This assumption is supported by observations of flow intake at irrigation channels, which is very small compared to the flow measured at the gauging stations.Observations of monthly flow intake for the major irrigation channels in Kyrgyzstan are archived in the Central Asian Waterinfo Database.Three channels are located in the Naryn basin: Kulanak, Aryk Chegirtke and Alfatun.(See Fig. S1 for the locations of these irrigation head intakes.)The Kulanak channel is the longest (40 km) and irrigates an area of 45 km 2 in the Kulanak Valley.The maximum monthly flow intake at the head of the channel is approximately 3.6 m 3 s −1 (Fig. S22), which is significantly lower than the peak flow observed at the Ust Kekirim and Naryn stations.Nonetheless, excluding the impact of irrigation will result in a small uncertainty in the prediction of the snowmelt and glacier melt contributions to streamflow because the majority of the water abstraction takes place during in spring and summer (April to August). The model evaluation presented here is an essential prerequisite for running future simulations to predict how river flow will change as glaciers lose mass and the seasonal snowpack disappears.We showed that for the period 1951-2007 discharge increases in the spring (April-May) when the snowpack melts and peaks in the summer (June, July, August and September) when glacier melting commences.Under future climate change scenarios we may expect the timing of the peak snowmelt and glacier melt contributions to happen sooner in the year (Gan et al., 2015).These changes will have implications for water supply in the Ferghana Valley and downstream in the Syr Darya River. Conclusions In this study we implemented a degree day snowmelt and glacier melt model in the DECIPHeR model.The motivation for this work was to develop a hydrological model that can be used to simulate discharge in very large glaciated and snowfed catchments, at a high spatial resolution, whilst maintaining the ability to explore model uncertainty.The overarching aim is to develop a tool for predicting changes in river flow under future climate change scenarios. 
We describe the snow and glacier model and its application to the Naryn River catchment, central Asia. The model is evaluated using discharge observations, MODIS snow extent and catchment-wide glaciated area derived from Landsat observations. The model is found to be robust at predicting monthly discharge at six gauging stations over the period 1951 to the variable end date between 1980 and 1995, depending on the availability of discharge observations. The validation with MODIS snow extent shows that the model can reproduce the spatial extent of seasonal snow cover reasonably well in winter, summer and spring, with mean hit rates exceeding 0.86 (median ensemble member of the best 0.5 % calibration simulations), but overestimates snow extent in autumn, as reflected by a high false alarm ratio and a positive bias. The best 0.5 % calibration simulations using six different and equally weighted metrics reproduce a catchment-wide glaciated area consistent with Landsat observations in the late 1990s and mid-2000s. There is, however, a large range of glaciated area estimates within this ensemble. This means that good predictions of discharge can be made concurrently with a large range of glacier area estimates. This strengthens the case that, to make robust predictions, additional observations such as glacier mass balance, snow depth or snow extent should be included in the model calibration.

Figure 1. The location of Naryn catchment and gauging stations. The MERIT DEM elevation and 1970s Landsat-derived glacier outlines used in this study are also displayed. The sub-catchment boundaries (black) and river path (red) calculated using DECIPHeR are shown. The Naryn River is a tributary of the Syr Darya River, which flows across central Asia from the Tien Shan mountains to the remains of the Aral Sea (bottom left inset figure).

Figure 2. Climatology of monthly mean temperature (T), potential evaporation (PET), precipitation (P) and observed discharge (Q obs) averaged for each sub-catchment for the calibration period 1951-1970. Temperature and PET are from the ERA5 and ERA5 back extension dataset, precipitation is from the APHRODITE dataset and observed discharge is from the Global Runoff Data Centre.

Figure 3. Conceptual diagram of the model adapted from Coxon et al. (2019) to show snow and glacier stores and model parameters (left). Glacier ice is situated beneath the snowpack or can be exposed if no snowpack exists (right). The snowpack accumulates when precipitation falls in solid form. Liquid precipitation, snowmelt and glacier melt are transported to the root zone.

Figure 4. Observed and simulated discharge for the calibration period when parameters are selected for individual sub-catchments (ISC method). The shaded envelopes show the 5th-95th percentile ranges and the black line shows the 50th percentile of the best 0.5 % simulations.

Figure 5. Observed and simulated discharge for the validation period when parameters are selected for individual sub-catchments (ISC method). The shaded envelopes show the 5th-95th percentile ranges and the black line shows the 50th percentile of the best 0.5 % simulations.

Figure 6. Simulated (top 0.5 %, n = 751) catchment-wide glaciated area using parameters for the Uch-Kurgan station, which is located at the outlet of the catchment. The Landsat-observed glaciated area (Kriegel et al., 2013) is shown in blue boxes, where the error on the time axis relates to the years when satellite imagery was used to calculate area. The Landsat glaciated area uses satellite imagery from the 1970s (1972-1977), the late 1990s (1998-2000) and the mid-2000s (2002-2007). A simulated threshold glacier depth of 1 mm is used to identify the presence of glacier ice. The figure shows the spread of the glaciated areas predicted by the model, but the simulations are not sorted by conditional probability values.

Figure 7. Panel (a) shows initial glacier thicknesses on HRUs calculated using GlabTop2 in the upper part of the Naryn catchment. 1970s glacier outlines are shown in black. Panel (b) shows the annual mean thickness for the year 2007 for the median ensemble member in the top 0.5 % calibration simulations.

Figure 9. MODIS and simulated weekly fraction of the basin covered in snow. The grey-shaded envelope shows the 5th-95th percentile range of the best 0.5 % calibration runs.

Figure 10. Simulated fraction of annual runoff from snow melting, glacier melting and rain averaged for the years 1951-2007. The inner ring shows the 5th percentile, the middle ring is the 50th percentile and the outer ring is the 95th percentile simulation.

Figure 11. Annual percentage contribution of rain (green), snowmelt (blue) and glacier melt (red) to the total runoff for the period 1951-2007. The coloured envelopes show the 10 %-interval increasing percentile limits and the 50th percentile lines are shown in black.

Table 1. Summary of the datasets used for this study.

Table 3. List of parameters and ranges. Default parameters are described in detail in Coxon et al. (2019). The glacier and snow parameters have been added to the model for this study.

Table 4. Calibration and validation performance metrics for the best 0.5 % of the ensemble (751 simulations) when parameters are selected for individual sub-catchments. The table lists the metrics for the best simulation and the 5th, 50th and 95th percentile limits.

Table 5. Table of possible pixel states in a binary classification scheme for the validation of seasonal MODIS (O) and modelled (M) snow extent.

Table 6. Seasonal validation metrics for MODIS and modelled snow extent for the 5th, median and 95th percentile prediction limits of the best 0.5 % calibration runs. Metrics for each season are calculated by averaging annual metrics over the years 2001-2007. Annual metrics are listed in Table S4.

Table 7. Trends in annual air temperature, PET and precipitation and the fraction of annual runoff from snowmelt, glacier melt and rainfall for the period 1951-2007. Trends are derived for the 50th percentile experiment using a Mann-Kendall test with Sen's slope estimator. Bold font indicates statistically significant trends with p values ≤ 0.05. Trends in annual simulated and observed discharge are also shown for the period 1951 to a variable end date which is dependent on the available observations.
2022-05-26T23:25:35.931Z
2023-01-20T00:00:00.000
{ "year": 2023, "sha1": "b350e553e8fbbdd06d517bb636d2a6fa23e8940a", "oa_license": "CCBY", "oa_url": "https://hess.copernicus.org/articles/27/453/2023/hess-27-453-2023.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7fc987d47d2cbe4db6dd64636a6b2a15b46d0242", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [] }
4382920
pes2o/s2orc
v3-fos-license
Learning to Singulate Objects using a Push Proposal Network A key challenge for manipulation in unstructured environments is action selection. We present a novel neural network-based approach that separates unknown objects in clutter by selecting favourable push actions. Our network is trained from data collected through large-scale interaction of a PR2 robot with randomly organized tabletop scenes. The model is designed to propose meaningful push actions based on segmented RGB-D images. We evaluate our approach by singulating up to six unknown objects in clutter. We demonstrate that our method enables the robot to perform the task with a high success rate and a low number of required push actions. Our results based on real-world experiments show that our network is able to generalize to novel objects of various sizes and shapes, as well as to arbitrary object configurations. Highlights of our experiments can be viewed in the following video: https://youtu.be/jWbUJOrSacI . Introduction Robot manipulation tasks such as tidying up a room or sorting piles of objects are a substantial challenge for robots, especially in scenarios with unknown objects. The objective of object singulation is to separate a set of cluttered objects through manipulation, a capability regarded as relevant for service robots operating in unstructured household environments. Further, the ability to separate unknown objects from surrounding objects provides great benefit for object detection, which still remains an open problem for overlapping objects. A key challenge of object singulation is the required interaction of multiple capabilities such as perception, manipulation and motion planning. Perception may fail because of occlusions from objects but also from occlusion by the manipulator itself. Manipulation actions such as grasping may also fail when attempting to manipulate objects that the perception systems fails to push proposal CNN ranking + motion planning training data from interaction push proposals best push + motion plan found push proposal push plan push action Fig. 1: Our approach consists of a convolutional neural network that ranks a set of potential push actions based on a segmented input image in order to clear objects in clutter. The proposed pushes are executed using motion planning. Our push proposal network is trained in an iterative manner from large-scale interaction with cluttered object scenes. correctly segment. Motion planning is particularly prone to errors when applied in unknown, unstructured environments. Previous approaches for object singulation strongly incorporate the concept of object segments or even complete objects. Push action strategies are selected based on the current belief of the configuration of all segments or objects in the scene, an assumption that is not robust, since segments might merge and object detection might fail. We apply a different strategy that aims to relax the concept of segments and objects for solving the task at hand, making task execution less prone to modelling errors from the perception modules. Similar to recent approaches that operate directly on image inputs [16,18] we aim for an end-to-end action selection approach that takes as input an RGB-D image of a tabletop scene and proposes a set of meaningful push actions using a convolutional neural network, as shown in Fig. 1. 
Our primary contributions are: 1) a push proposal network (push-CNN), trained in an iterative manner to detect push candidates from RGB-D images in order to separate cluttered objects, 2) a method that samples potential push candidates based on depth data, which are ranked with our push-CNN, and 3) real-world experiments with a PR2 robot in which we compare the performance of our method against a strong manually-designed baseline. We quantitatively evaluate our approach on two sets of real-robot experiments. These experiments involve tasks with four unknown objects and 25 different object configurations as well as a more challenging one with six unknown objects and 10 object configurations. In total the robot executed over 200 push actions during the evaluation and achieved an object separation success rate of up to 70%, which shows that our Push-CNN is able to generalize to previously unseen object configurations and object shapes. Related Work Standard model-based approaches for object singulation require knowledge of object properties to perform motion planning in a physics simulator and to choose push actions accordingly [5,6]. However, estimating the location of objects and other properties of the physical environment can be subject to errors, especially for unknown objects [25]. Interactive model-free methods have been applied to solve the task by accumulating evidence of singulated items over a history of interactions including push and grasp primitives [4,12,9]. Difficulties arise when objects have to be tracked after each interaction step, requiring small motion pushes due to partial observations and occlusions. We follow a model-free approach that encodes evidence of singulated objects in the learned feature space of the network. Hermans et al. [12] perform object singulation using several push primitives similar to ours. Their method is based on object edges to detect splitting locations between potential objects to apply pushes in those regions respectively, but does not include learned features and does not take into account stacked objects. Katz et al. [13] present an interactive segmentation algorithm to singulate cluttered objects using pushing and grasping. Similar to them, we also create object hypothesis from segmentation of objects into surface facets but use a different method based on the work of [22]. Given the segmentation the authors perform supervised learning with manual features to detect good manipulation actions such as push, pull and grasp. We take the idea of learning a step further by directly learning from segmented images and therefore removing the need for manual feature design. Gupta et al. [9] present an approach to sort small cluttered objects using a set of motion primitives, but do not show experiments with objects of various size and shapes. Our approach is related to robot learning from physical interaction [20], including approaches that learn to predict how rigid objects behave if manipulated [15,11] and self-supervised learning methods for grasping [21,17]. Gualtieri et al. [8] present a grasp pose detection method in which they detect grasp candidates using a convolutional neural network. They follow a similar methodology but for a different task and goal. Recent methods learn the dynamics of robot-object interaction for the task of pushing objects to a target location [7,1], but do not evaluate on scenes with many touching objects and do not follow an active singulation strategy. Byravan et al. 
[3] learn to predict rigid body motions of pushed objects using a neural network, but do not demonstrate results for multiple objects in a real-world scenario. Object separation is used as a source to leverage perceptual information from interaction, following the paradigm of interactive perception [2]. As demonstrated in our experiments, our singulation method can be used to generate informative sensory signals that lower the scene complexity for interactive object segmentation [23,10,24].

Learning to Singulate Objects

We consider the problem of learning good push actions to singulate objects in clutter. Let o be an image with height H and width W taken from an RGB-D camera with a known camera intrinsics matrix K. Further, let a = (c, α) be a push proposal with start position c = (x, y) and push angle α, both specified in the image plane. We aim to learn a function p = F(o, a; θ) with learned parameters θ that takes as input the image together with a push proposal and outputs a probability of singulation success.

Data

The prediction function F is trained in an iterative manner (we use three iterations F_0, F_1, F_2) on experiences that the robot collects through interaction with the environment. At the first iteration, we gather a dataset D of push proposals together with images from randomly pushing objects in simulation. Then, we train a classifier F_1 that best mimics the labels from an expert user in terms of successful and unsuccessful singulation actions. In the next iteration, we use F_1 to collect more data and add this data to D. The next classifier F_2 is trained to mimic the expert labels on the whole dataset D. We refer to F_1 as a vanilla network that is trained solely on experience collected from random interaction and F_2 as an aggregated network. The dataset consists of N labelled push interactions, where y_i is a corresponding label in one-hot encoding. The function is trained in a supervised manner using a negative log likelihood loss, min_θ ∑_{i=1}^{N} L(p_i, y_i), and a softmax function.

Approach

Our approach is divided into three modules, as shown in Fig. 1: 1) a sampling module that generates a set of push proposals, 2) a neural network module that classifies the set of push proposals and ranks them accordingly, and 3) an action execution module that first finds arm motion plans for the proposed pushes and then executes the push proposal with the highest probability of singulation success.

Push Proposal Sampling

Our push proposal sampling method is designed to generate a set of push proposals {a_1, . . . , a_M}. First, we sample 3D push handles {h_1, . . . , h_M} represented as point normals pointing parallel to the xy-plane of the robot's odometry frame, h_m = (x, y, z, n_h), n_h = (−n_x, −n_y, 0). Specifically, given the raw depth image we first apply a surface-based segmentation method [22] to obtain a set of segments {s_1, . . . , s_L} together with a surface normals map and a table plane. Second, we sample for each segment s_l a fixed number of push handles and remove push handles below the table plane. Finally, we obtain a set of push proposals by transforming the push handles into the camera frame using the transformation matrix C from the odometry to the camera frame together with the camera intrinsics, a = KCh.

Push Proposal Network and Input

We parametrize the predictor F(o, a; θ) using a deep convolutional neural network, denoted as a push proposal CNN in Fig. 1.
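As an illustration of the sampling and projection step described above (our own sketch; the variable names and the simple pinhole projection are assumptions, not the authors' code), a 3D push handle expressed in the odometry frame can be mapped to an image-plane push proposal as follows.

```python
import numpy as np

def project_push_handle(h_xyz, n_xy, T_cam_odom, K):
    """Map a push handle (3D point + horizontal push normal in the odometry
    frame) to an image-plane push proposal (pixel start position, push angle).

    T_cam_odom: 4x4 transform from odometry to camera frame.
    K: 3x3 camera intrinsics matrix.
    """
    h = np.asarray(h_xyz, float)
    p_cam = T_cam_odom @ np.append(h, 1.0)               # handle point in camera frame
    uvw = K @ p_cam[:3]
    u, v = uvw[:2] / uvw[2]                              # pixel start position
    # Project the push direction by mapping a second point a little way along the normal.
    q_cam = T_cam_odom @ np.append(h + 0.05 * np.append(n_xy, 0.0), 1.0)
    uvw2 = K @ q_cam[:3]
    du, dv = uvw2[:2] / uvw2[2] - np.array([u, v])
    alpha = np.arctan2(dv, du)                           # push angle in the image plane
    return (u, v), alpha

# Example with an identity odometry->camera transform and a generic pinhole camera:
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
print(project_push_handle([0.1, 0.05, 1.0], [1.0, 0.0], np.eye(4), K))
```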
The distinction between o and a as input is somewhat artificial, because both inputs are combined in a new image o a which constitutes the final input of the network. To predict the probability of success of a push proposal for the given singulation task the most important feature is the relation of the candidate segment that we want to push and its relation to neighbouring segments. We propose to fuse the segmented camera image with a push proposal using rigid image transformations T t , T r , see Push Motion Planning To find a motion plan for each push proposal we use the default motion planning algorithm LBKPIECE provided by the MoveIt motion planning framework. Our arm motion strategy consists of two steps. In the first step, we plan to reach the target push proposal a m . In a second step, given that the gripper reached the desired push start position, the robot performs a straight line push with a fixed length of 0.2m. We compute motion plans for both steps before execution to avoid that the robot executes the reaching plan and then fails to find a plan for the straight line push. We found our two-step procedure to be more stable than executing a reach and push trajectory at once, due to arm controller errors. Experiments We conduct both qualitative and quantitative experiments on a real PR2 robot to benchmark the performance of our overall approach. Hereby, we aim to answer the following questions: 1) Can our push proposal network predict meaningful push actions for object shapes and configurations it has not seen during training? 2) Can our network trained in simulation generalize to a real-world scenario? 3) Can our model reason about scene ambiguities including multiple object clusters and focus its attention to objects that are more cluttered than others? In order to answer question (1) we perform real-world experiments with objects that the network has not seen during training and compare against a manual model-free baseline that reasons about available free space in the scene and keeps a history of previous actions. To answer question (2) we test a network that we train solely with data from simulation (de- noted as vanilla network). Finally a challenging experiment with six objects aims to answer question (3), where the learned model has to trade-off between further separating isolated objects to create more space and split isolated object clusters to solve the task. The overall intent of our experiments, is to show that a learned model is able to directly map from sensor input to a set of possible actions, without prior knowledge about objects, spatial relations or physics simulation. Network Training To train our push proposal network we use an iterative training procedure. First, a vanilla network is trained on labeled data from a total of 2,486 (243 positives, 2,243 negatives) random interactions performed in simulation. Then an aggregated network is trained on additional 970 interactions (271 positives, 699 negatives), which we collected using the vanilla network in both simulation and real-world, resulting in a training dataset size of 3,456 push interactions. Both networks are trained in a fully-supervised manner from scratch using random uniform weight initialization and Adam [14]. We performed network architecture search with 10 different configurations and found that for our task the network architecture depicted in Fig. 3, yielded best performance on two separate validation sets. 
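The two-step push execution described in the Push Motion Planning paragraph above can be illustrated with a small geometry-only sketch (our own code; in the actual system the reach pose and the straight-line waypoints would be handed to MoveIt, with LBKPIECE planning the reach and a Cartesian plan covering the push).

```python
import numpy as np

def straight_push_waypoints(start_xyz, push_dir_xy, push_len_m=0.2, n_waypoints=10):
    """Step 2 of the push: gripper waypoints for a straight-line push of fixed
    length, starting at the push start position reached in step 1."""
    d = np.array([push_dir_xy[0], push_dir_xy[1], 0.0])
    d = d / np.linalg.norm(d)
    steps = np.linspace(0.0, push_len_m, n_waypoints)[:, None]
    return np.asarray(start_xyz, float) + steps * d

# Example: push 0.2 m along the +x direction from a given start pose.
print(straight_push_waypoints([0.6, -0.1, 0.75], [1.0, 0.0]))
```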
We experienced a drop in classification performance when increasing the image size, introducing dropout, or removing pooling operations, and did not experience improvement for adding xavier weight initialization or other activation functions besides ReLUs. Finally, we trained our model for 25 epochs with a batch size of 64, using the default parameters for Adam (learning rate= 0.001, β 1 = 0.9, β 2 = 0.999, ε = 1e − 08, decay= 0.0). We use the object dataset by Mees et al. [19] for our simulation experiments. Experimental Setup Our PR2 robot is equipped with a Kinect 2 camera mounted on its head that provides RGB-D images with a resolution of 960 × 540. We use both arms of the robot, each with 7-DOF to perform the pushing actions. The task is designed as follows: a trial consists of four or six objects from a test set of 11 everyday objects with different shapes. In the starting configuration for a trial all objects are placed or stacked on a table to be in collision. The starting configurations are saved and reproduced for all methods in a manual manner using point cloud alignment. The 25 starting configurations for our quantitative evaluation with four objects are depicted in Fig. 4. We report the number of successful singulation trials. A singulation trial is considered successful if all objects are separated by a minimum distance of 3cm. Further, we report results for singulation success after every action, to show which methods can successfully execute the task with less actions. For each trial the robot performs a fixed set of actions. The robot is allowed to stop before, if it reasons that all objects are singulated. This is implicitly encoded into our method. If the robot does not find a motion plan for the set of push proposals that the network ranked as positive it automatically terminates the trial. Baseline Method We provide a quantitative comparison against a manually designed baseline method, to evaluate whether or not the predictions of the push proposal network lead to improved object singulation results. We do not provide knowledge about the objects and their physical properties to our model, correspondingly our baseline method is model-free and follows an interactive singulation strategy that is reasonably effective. The method which we denote as 'free space + tracking' works as follows: 1. Given a set of segments {s 1 , . . . , s L } from our object segmentation approach we construct a graph that includes all segments, where each segment is a node in the graph and the edges correspond to distances between the segments. We represent each segment as an axis-aligned bounding box (AABB) and compute an edge between two segments by means of the Manhattan distance of the two AABBs. 2. We compute two features for scoring push proposals with the baseline method. The first feature is a predictive feature that reasons about the resulting free space if a segment would be pushed to some target location. To compute the feature, we predict a straight line motion of the respective segment to which the push proposal is assigned to, according to the push direction and the length. Next, we compute the Manhattan distance between the resulting transformed AABB and all other segments in the scene, which we assume will remain static. The free space feature f s is the minimum distance between the transformed AABB and the other AABBs. If the predicted final position of a segment would lead to collision with another segment the free space feature is zero, 3. 
The second feature includes the push history that we store for each segment. It follows the intuition that the same segment should not be pushed too often, regardless of the predicted free space around it. To retrieve the push history over all segments, we follow a segment-based tracking approach, which aligns the centroid and the principal components of two segments from the current set of segments and the set of segments from the last interaction, similar to [4]. We match a set of segments using a weighted average of the principal components and the segment centroid distances d(s l , s m ) = 0.6 · d pca (s l , s m ) + 0.4 · d c (s l , s m ). To punish multiple pushes r of a segment throughout a trial we map the push motion history into a normalized feature using an exponential decay function f h = exp(−r). 4. Accordingly, a push proposal a m receives the score p m = 0.5 · f s + 0.5 · f h . Quantitative Comparisons Our results with four objects, shown in Table 1 indicate that our method is able to improve over the performance of the manual baseline. The success rate of our vanilla network is 68%, which suggests that the model is making meaningful predictions about which push actions to perform in order to singulate all four objects. Fig. 5 provides a more fine-grained evaluation, showing the success rate with respect to the number of pushes. Note that the network requires less push actions to complete the task. Interestingly, the baseline method performs on par with the network after the robot has executed five push actions, but then it does not further improve after six executed pushes. We noted that the baseline method sometimes fails to singulate the last two objects of the scene and instead will choose an object that might already be singulated because it lacks a good attention mechanism. Results in Table 2 show that the task is more complex with six objects, due to additional space constraints and formation of multiple object clusters. When looking at the scene in Fig. 1 four out of the six objects are located very closely on the table (green drink bottle, blue box, white round bowl, white coffee bag). Although there are many possible push actions that we would consider reasonable, only pushing the coffee bag or the round bowl to the left side of the table can clear the scene. Accordingly, we find that the performance of the vanilla network drops with respect to the previous experiment with four objects. During training it has only seen a small amount of scenes where a random baseline would have cleared all but two objects and even less likely has seen examples where the robot by chance chose the right action to clear the two remaining objects. Therefore, in this scenario the vanilla network and the baseline method perform on par with a success rate of 40%. When comparing the average number of pushes the baseline performs slightly better, see Fig. 6. Results show that the aggregated network clearly outperforms the other two methods, winning seven out of ten trials with an average of 6.7 push actions needed to singulate all six objects, as shown in Table 2 . To get an intuition about the numerical results we refer the reader to the performance reported by Hermans et al. [12], who evaluated a very similar task with six objects. They report a success rate of 20% with twelve average number of push actions. Fig. 7 gives insights about the singulation trials from the perspective of the Kinect camera. 
The first and third columns show the current scene, while the second and fourth columns show the robot performing a push action. The trial evolves from top to bottom of the figure. Qualitative Results Fig. 7: On the left, we see a successful trial. Note that the robot tries several times to clear a small plastic bottle which is stuck under a blue box and manages to singulate the objects in the very last action. On the right, we see a run that fails for two typical reasons: first, the robot is not able to perform a safe push action (fourth column, fifth row), which results in two objects moving closer together (blue box, cereal box); second, the network is not able to draw its attention to the two objects that are now touching and instead proposes an action that moves the green bottle further away from the scene (lower right image). Conclusions We presented a novel neural network-based approach to singulating unknown objects in clutter by means of pushing actions. Our push proposal network reasons about the scene based on segmented images from an RGB-D camera and predicts which push action to perform in order to clear a set of objects of various sizes and shapes. We tested our method in extensive real-robot experiments using a PR2 robot and showed that it achieves good performance on the task of singulating up to six objects.
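As an aside to the 'free space + tracking' baseline described in the Baseline Method section above, the sketch below makes its scoring rule concrete: a predictive free-space feature based on the Manhattan distance between axis-aligned bounding boxes, a push-history feature f_h = exp(-r), and the combined score p_m = 0.5 * f_s + 0.5 * f_h. All function and variable names are ours, not taken from the authors' code, and the scaling of the free-space distance, which the paper does not specify, is left out.

```python
import numpy as np

def aabb_manhattan_distance(box_a, box_b):
    """Manhattan (L1) gap between two axis-aligned bounding boxes, each given
    as a (min_corner, max_corner) pair of arrays. The gap is 0 when the boxes
    overlap on every axis, i.e. when they are in collision."""
    a_min, a_max = box_a
    b_min, b_max = box_b
    per_axis_gap = np.maximum(0.0, np.maximum(a_min - b_max, b_min - a_max))
    return float(per_axis_gap.sum())

def free_space_feature(segment_box, push_dir, push_len, other_boxes):
    """Predict a straight-line motion of the pushed segment and return the
    minimum Manhattan distance between its transformed AABB and all other
    AABBs, which are assumed to remain static (0 means predicted collision)."""
    shift = push_len * np.asarray(push_dir)          # push_dir: unit vector
    moved = (segment_box[0] + shift, segment_box[1] + shift)
    if not other_boxes:
        return push_len
    return min(aabb_manhattan_distance(moved, other) for other in other_boxes)

def score_push_proposal(segment_box, push_dir, push_len, other_boxes, n_pushes):
    """Combine the free-space feature with the push-history feature
    f_h = exp(-r), where r counts previous pushes of this segment."""
    f_s = free_space_feature(segment_box, push_dir, push_len, other_boxes)
    f_h = np.exp(-n_pushes)
    return 0.5 * f_s + 0.5 * f_h
```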
CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition Over the past few years, the field of visual social cognition and face processing has been dramatically impacted by a series of data-driven studies employing computer-graphics tools to synthesize arbitrary meaningful facial expressions. In the auditory modality, reverse correlation is traditionally used to characterize sensory processing at the level of spectral or spectro-temporal stimulus properties, but not higher-level cognitive processing of e.g. words, sentences or music, for lack of tools able to manipulate the stimulus dimensions that are relevant for these processes. Here, we present an open-source audio-transformation toolbox, called CLEESE, able to systematically randomize the prosody/melody of existing speech and music recordings. CLEESE works by cutting recordings into small successive time segments (e.g. every successive 100 milliseconds in a spoken utterance), and applying a random parametric transformation of each segment’s pitch, duration or amplitude, using a new Python-language implementation of the phase-vocoder digital audio technique. We present here two applications of the tool to generate stimuli for studying intonation processing of interrogative vs declarative speech, and rhythm processing of sung melodies. Introduction The field of high-level visual and auditory research is concerned with the sensory and cognitive processes involved in the recognition of objects or words, faces or speakers and, increasingly, of social expressions of emotions or attitudes in faces, speech and music. In traditional psychological methodology, the signal features that drive judgments (e.g., facial metrics such as width-to-height ratio, acoustical features such as mean pitch) are posited by the experimenter before being controlled or tested experimentally, which may create a variety of confirmation biases or experimental demands. For instance, stimuli constructed to display western facial expressions of happiness or sadness may well be recognized as such by non-western observers [1], yet may not be the way these emotions are spontaneously produced, or internally represented, in such cultures [2]. Similarly in auditory cognition, musical stimuli recorded by experts pressed to express emotions in music may do so by mimicking expressive cues used in speech, but these cues may not exhaust the many other ways in which arbitrary music can express emotions [3]. For all these reasons, in recent years, a series of powerful data-driven paradigms (built on techniques such as reverse-correlation, classification image or bubbles; see [4] for a review) were introduced in the field of visual cognition to discover relevant signal features empirically, by analyzing participant responses to large sets of systematically-varied stimuli [5]. The reverse correlation technique was first introduced in neurophysiology to characterize neuronal receptive fields of biological systems with so-called "white noise analysis" [6][7][8][9]. In psychophysics, the technique was then adapted to characterize human sensory processes, taking behavioral choices (e.g., yes/no responses) instead of neuronal spikes as the systems' output variables to study, e.g. in the auditory domain, detection of tones in noise [10] or loudness weighting in tones and noise ( [11]; see [4] for a review of similar applications in vision). 
In the visual domain, these techniques have been extended in recent years to address not only lowlevel sensory processes, but higher-level cognitive mechanisms in humans: facial recognition [12], emotional expressions [2,13], social traits [14], as well as their associated individual and cultural variations ( [15]; for a review, see [5]). In speech, even more recently, reverse correlation and the associated "bubbles" technique were used to study spectro-temporal regions underlying speech intelligibility [16,17] or phoneme discrimination in noise [18,19] and, in music, timbre recognition of musical instruments [20,21]. All of these techniques aim to isolate the subspace of feature dimensions that maximizes participant responses, and as such, need to search the stimulus generative space e.g. of all possible images or sounds relevant for a given task. A typical way to define and search such a space is to consider a single target stimulus (e.g. a neutral face, or the recording of a spoken phoneme), apply a great number of noise masks that modify the low-or high-level physical properties of that target, and then regress the (random) physical properties of the masks on participant judgements. Techniques differ in how such noise masks are generated, and on what stimulus subspace they operate (Fig 1). At the lowest possible level, early proposals have applied simple pixel-level noise masks [13], and sometimes even no stimulus at all, akin to white noise "static" on a TV screen in which participants were forced to confabulate the presence of a visual target [22] or a vowel sound [19]. More consistently, noise masks generally operate on low-level, frequency-based representations of the signal, e.g. at different scales and orientations for image stimuli [12,14], different frequencies of an auditory spectrogram [17,23] or different rates and scales of a modulation spectrum (MPS) [16,21]. While low-level subspace noise has the advantage of providing a physical description of the stimuli driving participant responses, it is often a suboptimal search space for highlevel cognitive tasks. First, all stimulus features that are driving participant judgements may not be efficiently encoded in low-level representations: the auditory modulation spectrum, for instance, is sparse for a sound's harmonic regularities, coded for by a single MPS pixel, but not for features localized in time, such as attack time or transitions between phonemes [23]. As a consequence, local mask fluctuations will create continuous variations for the former, but not the latter features which will be difficult to regress on. Second, low-level variations in stimuli typically create highly distorted faces or sounds (Fig 1-top), for which one may question the ecological relevance of participant judgements. Finally, and perhaps most critically, many of the most expressive, cognitively-meaningful features of a face or voice signal are coordinated action-units (e.g. a contraction of a facial muscle, the rise of pitch at the end of a spoken utterance) that have distributed representations in low-level search spaces, and will never be consistently explored by a finite amount of random activations at that level. Consequently, the face research community has recently developed a number of higherlevel generative models able to synthesize facial expressions through the systematic manipulation of facial action units [24,25]. 
These tools are based on morphological models that simulate the 'amplitude vs time' effect of individual muscles and then reconstruct realistic facial textures that account for the modified underlying morphology, as well as head pose and lighting. Searching such high-level subspaces with reverse-correlation is akin to searching the space of all possible facial expressions (Fig 1-bottom), while leaving out the myriad of other possible facial stimuli that are not directly interpretable as human-made expressions-a highly efficient strategy that has been applied to characterize subtle aspects of social and emotional face perception processes, such as cultural variations in emotional expressions [2], physical differences between different kinds of smiles [26] or in the way these features are processed in time [27]. In auditory research, however, similarly efficient data-driven strategies have not yet been common practice, by lack of tools able to manipulate the high-level stimulus dimensions that are relevant for similar judgement tasks when they apply on, e.g., voice or music. Here, we present an open-source audio-transformation toolbox, called CLEESE (Combinatorial Expressive Speech Engine, named after British actor John Cleese, with reference to the "Ministry of Silly Talks"), able to systematically randomize the prosody/melody of existing Reverse-correlation paradigms aim to isolate the subspace of feature dimensions that maximizes participant responses, and as such, need to search a stimulus generative space e.g. of all possible images or sounds relevant for a task. In a majority of studies, noise masks operate on low-level, frequency-based representations of the signal, e.g. at different scales and orientations for image stimuli (top-left) or different frequencies of an auditory spectrogram (top-right). More recent models in the vision modality are able to explore the subspace of facial expressions through the systematic manipulation of facial action units (bottom-left). The present work represents a conceptually-similar development for the auditory modality. speech and music recordings. CLEESE works by cutting recordings in small successive time segments (e.g. every successive 100 milliseconds in a spoken utterance), and applying a random parametric transformation of each segment's pitch, duration, amplitude or frequency content, using a new Python-language implementation of the phase-vocoder digital audio technique. Transformations made with CLEESE explore the space of speech intonation and expressive speech prosody by allowing to create random time-profiles of pitch (e.g. rising pitch at the end of an utterance, as in interrogative sentences [28]), of duration (e.g. word-final vowel lengthening, as used as a cue for word segmentation [29]) or amplitude (e.g. louder on prominent words or phonemes [30]). In music, the same transformations can be described as melodic, manipulating the pitch/tuning of successive notes in a sequence (e.g. ��, � or �]), their duration (e.g. �, � or �) or amplitude (e.g. p, mf or f). All transformations are parametric, thus allowing to generate thousands of random variants drawn e.g. from a gaussian distribution; and realistic (within appropriate parameter ranges), such that the resulting audio stimuli do not typically appear artificial/transformed, but rather plausible as ecological speech or music recordings. 
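As an illustration of this kind of parametric randomization (not the actual CLEESE API, whose entry points are described later in the Usage section), the following NumPy sketch draws random 'ramp' pitch breakpoint functions with the parameter values used in the first case study below (6 breakpoints, SD = 70 cents, clipped at ±2.2 SD, for a 426 ms utterance). The function names are ours.

```python
import numpy as np

def random_pitch_bpf(duration_s, n_breakpoints=6, sd_cents=70.0, clip_sd=2.2,
                     rng=None):
    """Draw one random pitch breakpoint function (BPF) of the 'ramp' type.

    Breakpoints are spread evenly over the sound's duration; each pitch-shift
    value is sampled from a Gaussian centred on 0 cents (no change) with the
    given SD, and clipped to +/- clip_sd * SD to avoid implausible extremes.
    Returns an array of (time_s, shift_cents) pairs.
    """
    rng = np.random.default_rng() if rng is None else rng
    times = np.linspace(0.0, duration_s, n_breakpoints)
    shifts = rng.normal(0.0, sd_cents, size=n_breakpoints)
    shifts = np.clip(shifts, -clip_sd * sd_cents, clip_sd * sd_cents)
    return np.column_stack([times, shifts])

def pitch_shift_at(bpf, t):
    """Linearly interpolate the ramp BPF to get the shift (in cents) at time t."""
    return np.interp(t, bpf[:, 0], bpf[:, 1])

# Example: 1000 random contours for a 426 ms utterance, as in a
# reverse-correlation experiment.
contours = [random_pitch_bpf(0.426) for _ in range(1000)]
```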
In recent work, we have used CLEESE with a reverse-correlation paradigm to uncover what mental representations of pitch profiles underlie judgements of speaker dominance and trustworthiness in short utterances like the word 'hello' [31]. We recorded a single utterance of the word 'hello' by one male and one female speaker. CLEESE was first used to flatten the pitch of the recordings (by transforming it with a pitch profile that alters its original prosody to constant pitch), and then to generate random pitch variations, by manipulating the pitch over 6-time points on Gaussian distributions of SD = 70 cents clipped at ±2.2 SD (these values were chosen in order to map natural variations in speech). Pairs of these randomly-manipulated voices were then presented to observers who were asked, on each trial, to judge which of the two versions appeared most dominant/trustworthy. The participants' mental representations were then computed by computing the mean pitch contour (a 6-point vector) of the voices they classified as dominant (resp. trustworthy) minus the mean pitch contour of the voices classified as non-dominant (resp. non-trustworthy). In this article, we describe the functionality of the CLEESE toolbox, the algorithms that underlie it as well as how to deploy it in psychophysical reverse-correlation experiments such as those above. We then present two case-studies in which we use CLEESE to generate pitchshifted speech stimuli to study the perception of interrogative vs declarative speech prosody, and time-stretched musical stimuli to study the rhythm processing of sung melodies. Functionality CLEESE is an open-source Python toolbox used to create random or fixed pitch, time and amplitude transformations on any input sound. CLEESE can also be used to create spectral transformations, a technique not described here-see [32]. The transformations can be both static or time-varying. Besides its purpose of generating many stimuli for reverse-correlation experiments, the toolbox can also be used for producing individual, user-determined transformations. CLEESE is available for download upon free registration at http://forumnet.ircam.fr/ product/cleese. CLEESE operates by generating a set of random breakpoint functions (BPFs) for each transformation, which are then passed to a spectral processing engine (based on the phase vocoder algorithm, see below) for the transformation to occur. BPFs determine how the sound transformations vary over the duration of the stimulus. CLEESE can randomly generate BPFs of two types: ramps, where the corresponding sound parameter is interpolated linearly between breakpoints (Fig 2-top) and square, where the BPF is a square signal with sloped transitions (Fig 2-bottom). BPF segments can be defined either by forcing all of them to have a fixed duration (i.e., their number will depend on the whole sound's duration, see Fig 2-left), or by forcing a fixed number of segments along the total sound duration (i.e., their duration will depend on the whole sound's duration, see Fig 2-right). Alternatively, the user can pass custom breakpoint positions, which can be defined manually e.g. to correspond to each syllable or note in given a recording. BPFs can be defined to transform sounds along three signal dimensions: Pitch: The BPF is used to transpose up and down the pitch of each segment in the sound, while maintaining its amplitude and duration constant. 
Each breakpoint in the BPF corresponds to a pitch shift value p in cents or percents of a semitone, with 0 cents corresponding to no change with respect to original pitch. See S1 Sound for an example of pitch shifting manipulation. Time: The BPF is used to stretch or compress the duration of each segment in the sound, while maintaining its amplitude and pitch constant. Each breakpoint corresponds to a time stretch factor t in ratio to original duration, with 0 < t < 1 corresponding to compression/ faster speed, t > 1 stretched/slower speed, and t = 1 no change with respect to segment's original duration. See S2 Sound for an example of time stretching manipulation. Amplitude: The BPF is used to amplify or decrease the signal's instantaneous amplitude in each segment, while maintaining its pitch and duration constant. Each breakpoint corresponds to a gain value g in dBs (thus, in base-10 logarithm ratio to original signal amplitude), with g = 0 corresponding to no change with respect to original amplitude. The default mode is for CLEESE to generate p, t or g values at each breakpoint by sampling from a Gaussian distribution centered on p = 0, t = 1 or g = 0 (no change) and whose standard deviation (in cents, duration or amplitude ratio) allows to statistically control the intensity of the transformations. For instance, with a pitch SD of 100 cents, CLEESE will assign random shift values at every breakpoint, 68% of which are within ± 1 semitone of the segment's original pitch (Fig 3A and 3B). The transformations can be chained, e.g. manipulating pitch, then duration, then amplitude, all with separate transformation parameters. For each type of transformation, distributions can be truncated (at given multiples of SD) to avoid extreme transformation values which may be behaviorally unrealistic. Alternatively, transformation parameters at each breakpoint can be provided manually by the user. For instance, this allows to flatten the pitch contour of an original recording, by constructing a custom breakpoint function that passes through the pitch shift values needed to shift the contour to a constant pitch value ( Fig 3C and 3D). Algorithm The phase vocoder is a sound processing technique based on the short-term Fourier Transform (STFT, [33]). The STFT decomposes each successive part (or frame) of an incoming audio signal into a set of coefficients that allow to perfectly reconstruct the original frame as a weighted sum of modulated sinusoidal components (Eq (1)). The phase vocoder algorithm operates on each frame's STFT coefficients, modifying them either in their amplitude (e.g. displacing frequency components to higher frequency positions to simulate a higher pitch) or their position in time (e.g. displacing frames to later time points to simulate a slower sound). It then generates a modified time-domain signal from the manipulated frames with a variety of techniques meant to ensure the continuity (or phase coherence) of the resulting sinusoidal components [34,35]. Depending on how individual frames are manipulated, the technique can be used e.g. to change a sound's duration without affecting its pitch (a process known as time stretching), or to change a sound's pitch without changing its duration (pitch shifting, see [36] for a review). In more detail, the phase vocoder procedure considers a (digital) sound x as a real-valued discrete signal, and a symmetric real-valued discrete signal h composed of N samples, usually called window, that is used to cut the input sound into successive frames. 
In the analysis stage, frames are extracted with a time step (or hop size) R_a, corresponding to successive time positions t_a^u = R_a·u, where u is the index of the u-th frame. The discrete STFT of x with window h is given by the Discrete Fourier Transform (DFT) of the time frames of x multiplied by h,

X(t_a^u, Ω_k) = Σ_n x(t_a^u + n)·h(n)·e^(−j·Ω_k·n)    (1)

X is a two-dimensional complex-valued representation, which can be expressed in terms of real and imaginary parts or, equivalently, amplitude and phase. The amplitude of the coefficient at time index u and frequency bin k is given by |X(t_a^u, Ω_k)|, while its phase is ∠X(t_a^u, Ω_k) = arg(X(t_a^u, Ω_k)). A given transformation T(X) = Y can then be performed by altering the amplitudes, phases or temporal positions of the frames, leading to a complex-valued representation Y of the same dimension as X. The transformed coefficients in Y are then used to synthesize a new sound y, using the inverse procedure to Eq (1). Using the same window function h and a synthesis hop size R_s such that t_s^u = R_s·u, the output signal is given by

y(n) = Σ_u h(n − t_s^u) · (1/N) Σ_k Y(t_s^u, Ω_k)·e^(j·Ω_k·(n − t_s^u))    (2)

Time stretching is performed by modifying the synthesis hop size R_s with respect to the analysis hop size R_a. In CLEESE's implementation, the analysis hop size R_a is changed according to the desired stretch factor t, while the synthesis hop size R_s is kept constant (the other way around is also possible). Pitch shifting with a factor p can be performed by either warping the amplitudes of each frame along the frequency axis, or by performing time stretching with the same factor p followed by resampling with the inverse factor 1/p to restore the original duration (CLEESE uses the latter method). Finally, amplitude manipulations by a time-frequency mask G composed of scalar gain factors g_{u,k} can be generated by taking |Y(t_a^u, Ω_k)| = g_{u,k}·|X(t_a^u, Ω_k)| for the amplitudes, and leaving phases unchanged. When STFT frames are time-shifted as part of the phase-vocoder transformation, the position of the u-th output frame is different from t_a^u and the phases of the STFT coefficients have to be adapted to ensure the continuity of the reconstructed sinusoidal components and the perceptual preservation of the original sound's timbre properties. Phase vocoder implementations offer several methods to this end [36]. In one classic procedure (horizontal phase synchronization), phases are adjusted independently in all frequency positions, with phases at position t_s^u extrapolated from phases at position t_s^{u−1} [34]. In an improved procedure (vertical phase synchronization, or phase-locking), each frame is first analysed to identify prominent peaks of amplitude along the frequency axis, and phases are only extrapolated in the peak frequency positions; phases of the frequency positions around each peak are locked to the phase of the peak frequency position [35]. It is this second procedure that is implemented in CLEESE. Further artifacts are inherent to phase-vocoder transformations. For instance, the pitch shifting procedure also shifts the positions of the formants, which results in unnatural timbres for shifting factors of more than a few semitones. Several strategies have been devised in the literature to compensate for this formant shift, often based on the detection and preservation of the original spectral envelope of the input signal (a spectral envelope is a smooth curve that approximately follows the contour defined by the amplitude peaks of the sinusoidal partials in the spectrum).
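To make the roles of the analysis and synthesis hop sizes concrete, here is a minimal NumPy sketch of phase-vocoder time stretching using only the classic horizontal phase propagation. It is not CLEESE's implementation (which uses the phase-locked procedure above and derives pitch shifting from stretching plus resampling), it omits amplitude normalization of the overlap-add, and all names are ours.

```python
import numpy as np

def time_stretch(x, stretch, n_fft=2048, hop_syn=512):
    """Minimal phase-vocoder time stretching with horizontal phase propagation.

    As in the description above, the synthesis hop R_s (hop_syn) is fixed and
    the analysis hop is R_a = R_s / stretch, so stretch > 1 slows the sound
    down. Phase-locking, formant preservation, transient handling and the
    overlap-add gain normalization are all omitted for brevity."""
    hop_ana = hop_syn / stretch
    window = np.hanning(n_fft)
    n_bins = n_fft // 2 + 1
    omega = 2.0 * np.pi * np.arange(n_bins) / n_fft   # bin centre frequencies (rad/sample)

    n_frames = max(0, int((len(x) - n_fft) / hop_ana))
    y = np.zeros(int(len(x) * stretch) + n_fft)
    prev_phase = np.zeros(n_bins)
    acc_phase = np.zeros(n_bins)

    for u in range(n_frames):
        t_a = int(round(u * hop_ana))                 # analysis position t_a^u
        spec = np.fft.rfft(x[t_a:t_a + n_fft] * window)
        mag, phase = np.abs(spec), np.angle(spec)

        # Instantaneous frequency estimated from the phase increment between
        # successive analysis frames (principal-value wrapping to [-pi, pi)).
        delta = phase - prev_phase - omega * hop_ana
        delta = (delta + np.pi) % (2.0 * np.pi) - np.pi
        inst_freq = omega + delta / hop_ana
        prev_phase = phase

        # Re-accumulate phases with the synthesis hop so partials stay coherent.
        acc_phase += inst_freq * hop_syn
        frame_out = np.fft.irfft(mag * np.exp(1j * acc_phase), n=n_fft) * window

        t_s = u * hop_syn                             # synthesis position t_s^u
        y[t_s:t_s + n_fft] += frame_out               # overlap-add

    return y
```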
Another common artifact is transient smearing, which is the distortion and loss of definition of the transients when they are subjected to time stretching (transients are noisy, aperiodic parts of the sound, such as unvoiced consonants). This can be avoided by an additional step aimed at detecting the unvoiced portions of the signal and bypassing phase vocoder modifications on those segments. Neither envelope nor transient preservation are implemented in the current version of CLEESE, but their perceptual importance is deemed to be minimal in the usage scenario discussed here, which involves small random variations of the pitch and stretch factors. Usage CLEESE is implemented as module for the Python (versions 2 and 3) programming language, and distributed under an open-source MIT License. It provides its own implementation of the phase vocoder algorithm, with its only dependencies being the numpy and scipy libraries. In its default 'batch' mode, CLEESE generates many random modifications from a single input sound file, called the base sound. It can be launched as follows: import cleese inputFile = 'path_to_input_sound.wav' configFile = 'path_to_config_file.py' cleese.process(soundData = inputFile, configFile = configFile) where inputFile is the path to the base sound (a mono sound in WAV format) and configFile is the path to a user-generated configuration script containing generation parameters for all transformations. For each run in batch mode, the toolbox generates an arbitrary number of random BPFs for each transformation, applies them to the base sound, and saves the resulting files and their parameters. The main parameters of the configuration file include how many files should be generated, where the transformed files should be saved, and what transformation (or combinations thereof) should be applied. For instance, the following configuration file generates 10 audio files which result from a random stretch, followed by a random pitch transformation of the base sound. In addition, the configuration file includes parameters that specify how BPFs should be generated for each transformation, including the number or duration of BPF windows, their type (ramp or square) and the standard deviation of the gaussian distribution used to sample breakpoints. As an example, the following parameters correspond to pitch BPFs consisting of 6 ramp windows, each with a normally distributed pitch shift between -300 and 300 cents. The CLEESE module is distributed with a Jupyter notebook tutorial, showing further examples of using the toolbox for sound manipulation. In addition, all experimental data and analysis scripts (also in the form of Jupyter notebooks) from the following two case-studies are made available as supplementary material. Case-study (1): Speech intonation To illustrate the use of CLEESE with speech stimuli and pitch shifting, we describe here a short reverse-correlation experiment about the perception of speech intonation. Speech intonation, and notably the temporal pattern of pitch in a given utterance, can be used to convey syntactic or sentence mode information (e.g. whether a sentence is interrogative or declarative), stress (e.g. on what word is the sentence's focus), emotional expression (e.g. whether a speaker is happy or sad) or attitudinal content (e.g. whether a speaker is confident or doubtful) [37]. 
For instance, patterns of rising pitch are associated with social traits such as submissiveness, doubt or questioning, and falling pitch with dominance or assertiveness [38][39][40]. In addition, recent neurophysiological evidence suggests that intonation processing is rooted at early processing stages in the auditory cortex [41]. However, it has remained difficult to attest of the generality of such intonation patterns and of their causality in cognitive mechanisms. Even for information as seemingly simple as the question/statement contrast, which is conventionally associated with the "final Rise" intonation, empirical studies show that, while frequent, this pattern is not as simple nor common as usually believed [42]. For instance, in one analysis of a corpus of 216 questions, the most frequent tone for polar questions (e.g. "Is this a question?") was a Fall [43]. In addition, in English, interrogative pitch contours do not consistently rise on the final part of the utterance, but rather after the first syllable of the content word [44] (e.g. "Is this a question you're asking? vs "Is this a question you're asking?"). We give here a proof of concept of how to use CLEESE in a reverse-correlation experiment to uncover what exact pitch contour drives participants' categorization of an utterance as interrogative or declarative. Data come from the first experiment presented in [31]. Methods Stimuli. One male speaker recorded a 426ms utterance of the French word "vraiment" ("really"), which can be experienced either as a one-word statement or question. We used CLEESE to artificially manipulate the pitch contour of the recording. First, the original pitch contour (mean pitch = 105Hz) was artificially flattened to constant pitch, using the procedure shown in Fig 3C and 3D. Then, we added/subtracted a constant pitch gain (± 20 cents, equating to ± 1 fifth of a semitone) to create the 'high-' or 'low-pitch' versions presented in each trial (note that we created these "high" and "low" pitch categories to facilitate participants' task, but they are not necessary). Finally, we added Gaussian "pitch noise" (i.e. pitch-shifting) to the contour by sampling pitch values at 6 successive time-points, using a normal distribution (SD = 70 cents; clipped at ± 2.2 SD), linearly interpolated between time-points, using the procedure shown in Fig 3A and 3B. The choice of SD in this experiment was set empirically, based on the plausible pitch changes in natural speech (<200 cents between consecutive time points). See S3 Sound for examples of stimuli. Procedure. 700 pairs of randomly-manipulated voices were presented to each of N = 5 observers (male: 3, M = 22.5yo), all native French speakers with self-reported normal hearing. Participants listened to a pair of two randomly-modulated voices and were asked which of the two versions was most interrogative. Inter-stimulus interval in each trial was 500 ms, and inter-trial interval was 1s. Apparatus. The stimuli were mono sound files generated at sampling rate 44.1 kHz in 16-bit resolution by Matlab. They were presented diotically through headphones (Beyerdynamic DT 770 PRO; 80 ohms) at a comfortable sound level. Analysis. A first-order temporal kernel [45] (i.e., a 7-points vector) was computed for each participant, as the mean pitch contour of the voices classified as interrogative minus the mean pitch contour of the voices classified as non-interrogative. 
Kernels were then normalized by dividing them by the absolute sum of their values and then averaged over all participants for visualization. A one-way repeated-measures ANOVA was conducted on the temporal kernels to test for an effect of segment on pitch shift, and posthocs were computed using Bonferroni-corrected Tukey tests. All experimental data and analysis code for Case Study 1 is provided in supporting information, as S1 Data and S1 and S2 Codes. Results Observers' responses revealed mental representations of interrogative prosody showing a consistent increase of pitch at the end of the second syllable of the stimulus word (Fig 4-left), as reflected by a main effect of segment index: F(6, 24) = 35.84, p = 7.8e-11. Pitch shift at segment 5 (355ms) was significantly different from all other segment locations (all ps <0.001). The pattern was remarkably consistent among participants, although all participants were tested on separate set of random stimuli (Fig 4-right). Case-study (2): Musical rhythm To illustrate another use of CLEESE, this time with musical stimuli and the time-stretching functionality, we describe here a second reverse-correlation experiment about the perception of musical rhythm and expressive timing. The study of how participants perceive or accurately reproduce the rhythm of musical phrases has informed such domain-general questions as how humans represent sequences of events [46], internally measure speed and tempo [47] and entrain to low-and high-frequency event trains [48]. For instance, it is often observed that musicians tend to lengthen notes at the ends of musical phrases [49] and that even non-musicians anticipate such changes when they perceive music [50]. However, the cognitive structures that govern a participant's representation of the rhythm of a given musical passage are difficult to uncover with experimental methods. In [50], it was accessed indirectly by measuring the ability to detect timing errors inserted at different positions in a phrase; in [51], participants were asked to manually advance through a sequence of musical chords with a key press, and the time dwelt on each successive chord was used to quantify how fast they internally represented the corresponding musical time. Here, we give a proof of concept of how to use CLEESE in a reverse-correlation experiment to uncover what temporal contour drives participants' judgement of a rhythmically competent/ accurate rendition of the well-known song Happy birthday. Methods Stimuli. One female singer recorded a a capella rendering of the first phrase of the French folk song "Joyeux anniversaire" (�. � � � � �; translation of English song "Happy birthday to you" [52]). We used CLEESE to artificially manipulate the timing of the recording by stretching it between different breakpoints. First, we manually identified the time onset of each sung note in the phrase, and use these positions as breakpoints. Second, the original temporal contour was artificially flattened (i.e. all notes were stretched/compressed to have identical duration � � � � � �), while preserving the original pitch of each note. The duration of this final stimulus was 3203ms. Then, we added Gaussian "temporal noise" (i.e. time-stretching) to each note by sampling stretch values at 6 successive time-points, using a normal distribution centered at 1 (SD = 0.4; clipped at ± 1.6 SD), using a square BPF with a transition time of 0.1 s. 
The resulting stimuli were therefore sung variants of the same melody, with the original pitch class, but random rhythm (see S4 Sound). Procedure. Pairs of these randomly-manipulated sung phrases were presented to N = 12 observers (male: 6, M = 22yo), all native French speakers with self-reported normal hearing. Five participants had previous musical training (more than 12 years of instrumental practice) and were therefore considered as musicians, the other seven participants had no musical training and were considered as non-musicians. Participants listened to a pair of two randomlymodulated phrases and were asked which of the two versions was best sung/performed. Interstimulus interval in each trial was 500 ms, and inter-trial interval was 1s. Each participant was presented with a total of 313 trials. There were 280 different trials and the last block of 33 trials was repeated twice (in the same order) to estimate the percentage of agreement. After the test, all participants complete the rhythm, melody, rythm-melody subtests of the Profile of Music Perception Skills (PROMS, [53]), in order to quantify their melodic and rhythmic perceptual abilities. Apparatus. Same as previous section. Analysis. A first-order temporal kernel [45] (i.e., a 6-points vector) was computed for each participant, as the mean stretch contour of the phrases classified as 'good performances' minus the mean stretch contour of the phrases classified as 'bad performances' (note that because stretch factors are ratio, the kernel was in fact computed in the log stretch domain, as mean log t + − mean log t − , where t + and t − are the contours of selected and non-selected trials, resp.). Kernels were then normalized by dividing them by the absolute sum of their values and then averaged over all participants. A one-way repeated-measures ANOVA was conducted on the temporal kernels to test for an effect of segment on time-stretch, and posthocs were computed using Bonferroni-corrected Tukey tests. In addition, the amount of internal noise for each subject was computed from the double-pass percentage of agreement, using a signal detection theory model including response bias and late additive noise [54,55]. All experimental data and analysis code for Case Study 2 is provided in supporting information, as S1 Data and S1 and S2 Codes. Results Observers' responses revealed mental representations of note durations that significantly evolve through time (Fig 5-left) with a main effect of time index: F(5, 40) = 30.7, p = 1e-12. The ideal theoretical timing contour (0.75-0.25-1-1-1-2), as given by the score of this musical phrase, was converted in time-stretch units by taking the log of these values and was superimposed on Fig 5. Several deviations from this theoretical contour are worth noting. First, both musicians and non-musicians had shortened representations for the first note of the melody (�. � ! � �) and, to a lesser extent, the second note. This is consistent with previous findings showing that expressive timing typically involving a shortening of the short notes, and a lengthening of the long notes of a melody [49]. Second, both musicians and non-musicians had similarly shortened representations for the duration of the last note (� � � � ! � � � �), a pattern is at odds with the note-final lengthening reported for non-musicians in a self-paced listening paradigm [51], but consistent with the lengthening of the last-but-one inter-onsetinterval found in professional pianists [49,56]. 
Finally, the analysis revealed a significant interaction of time index and participant musicianship: F(5, 40) = 2.5, p = .045. The mental representations of a well-executed song for the non-musician participants had longer durations on the third note ("happy BIRTH-day to you") compared with musicians participants. Needless to say, these differences between musicians and non-musicians are only provided for illustrative purposes, because of the small-powered nature of this case-study, and their experimental confirmation and interpretation remain the object of future work. One should note that it is complicated to quantitatively compare the time-stretch contours obtained with reverse-correlation with an ideal theoretical pattern. First, time-stretch factors derived by reverse-correlation have arbitrary scale, and the ideal pattern is therefore be represented at an arbitrary position on the y-axis, and rescaled to have the same (max-min) stretch range as the one obtained experimentally. Second, because participants' task is to infer about the best overall rhythm and not individual note duration, the time-stretch values at individual time points are not perceptually independent (e.g., if the first two notes of a stimulus are long, participants may infer a slower beat and expect even longer values for the following notes). For these reasons, the ideal rhythmic pattern superimposed on these measured patterns should simply be taken as an illustration of how note durations theoretically evolve from one note (i) to the next (i+1). Further theoretical work will be needed to best analyze and interpret duration kernels derived from such reverse-correlation experiments. In addition, internal noise values computed from the repeated block of this experiment indicate that subjects did not behave at random but rather relied on a somewhat precise mental representation of the ideal temporal contour (mean internal noise of 1.1 (SD = 1.1) in units of external noise; comparable with the average value of 1.3 obtained in typical low-level sensory psychophysical detection or discrimination tasks, see [55]). In an exploratory manner, we asked whether these internal noise values would correlate with the participant's degree of musical skills (average score obtained for the 3 sub-tests of PROMS). We found a significant negative correlation (r = -0.65, p = 0.02), suggesting that low musical skills (regardless of musicianship) are associated with a more variable mental/memory representation of the temporal contour of this melody (Fig 5-right). Because this correlation may also be driven by the amount of attention that participants had both in the PROMS and in the reverse-correlation tasks (which may be similar), further work is required to determine the exact nature of this observed relationship. Discussion By providing the ability to manipulate speech and musical dimensions such as pitch, duration and amplitude in a parametric, independent manner in the common environment of the Python programming language, the open-source toolbox CLEESE brings the power of datadriven/reverse-correlation methods to the vast domain of speech and music cognition. 
In two illustrative case-studies, we have used CLEESE to infer listeners' mental representations of interrogative intonation (rising pitch at the end of the utterance) and of the rhythmic structure of a well-known musical melody, and shown that the methodology had the potential to uncover individual differences linked, e.g., to participants' training or perceptual abilities. As such, the toolbox and the associated methodology open avenues of research in communicative behavior and social cognition. As a first application of CLEESE, we have recently used a reverse-correlation paradigm to uncover what mental representations of pitch profiles underlie judgements of speaker dominance and trustworthiness in short utterances like the word 'hello' [31]. The technique allowed us to establish that both constructs corresponded to robust and distinguishable pitch trajectories, which remained remarkably stable whether male or female listeners judged male or female speakers. Other potential questions include, in speech, studies of expressive intonation along all characteristics of pitch, speed and amplitude, judgements of emotions (e.g. being happy, angry or sad) or attitudes (e.g. being critical, impressed or ironic); and, in music, studies of melodic and rhythmic representations in naive and expert listeners, and how these may differ with training or exposure. Beyond speech and music, CLEESE can also be used to transform and study non-verbal vocalizations, such as infant cries or animal calls. By measuring how any given individual's or population's mental representations may differ from the generic code, data-driven paradigms have been especially important in studying individual or cultural differences in face [2,57] or lexical processing [23]. By providing a similar paradigm to map mental representations in the vast domain of speech prosody, the present technique opens avenues to explore e.g. dysprosody and social-cognitive deficits in autism-spectrum disorder [58], schizophrenia [59] or congenital amusia [60], as well as cultural differences in social and affective prosody [61]. Because CLEESE allows random variations to be created along the different dimensions of both speech and musical stimuli (pitch, time, level), it also opens the possibility of measuring the amount of internal noise (e.g. using a double-pass technique as in case-study 2 here) associated with the processing of these dimensions in various high-level cognitive tasks. This is particularly interesting because it provides a quantitative way to (1) demonstrate that participants are not doing the task at random (which is always an issue in this type of high-level task, where there are no good/bad answers that would lead to an associated d-prime value) and (2) investigate which perceptual dimensions are cognitively processed with what amount of noise. The current implementation of CLEESE, and its application to data-driven experiments in the auditory modality, leaves several methodological questions open. First, one important choice to make when generating stimuli is the SD of the Gaussian distributions used to draw parameter values (pitch in case study 1; time-stretch in case study 2). Our rationale in this work has been to choose SDs that correspond to the typical variation observed for each parameter in natural speech; e.g. <200 cents (SD = 70 cents cut at 2.2 SD) for pitch contours in the first case study. We find this empirical approach appropriate in practice, since it allows creating stimuli that sound natural. 
However, it would be interesting to examine the optimality of this choice in future studies, for example by testing different values of SD or using an adaptive staircase procedure to select the one that is best suited to uncover the mental representations. It is important to note that, theoretically, first-order kernels do not depend on the choice of SD, as filters are scaled in units of external noise variance, i.e. SD [4]. However, it is reasonable to expect that using low values of SD can lead to poor kernel resolution (because many trials would not be audible), and high values can create undesired sound artefacts or inconsistencies (e.g. pitch variations that are too high or fast, or that would not occur with a natural voice production system). Therefore, basing the value of SD on the range of variation measured in natural voices, or on pilot/preliminary experiments, seems the most appropriate choice. Second, in the current form of the software, the temporal dynamics of the noise/perturbations is purely random, and changes of pitch/time or amplitude are independent across segments (a classical assumption of the reverse-correlation technique [4]). This assumption may create prosodic patterns that are not necessarily realizable by the human voice and thus bias the participant responses for those trials. In vision, recent studies have manipulated facial action units with a restricted family of temporal profiles parametrized in acceleration, amplitude and relative length [25]. In a similar manner, future versions of CLEESE could constrain prosodic patterns to correspond more closely to the dynamics of the human voice or to the underlying production process, e.g. not aligned with arbitrary segment locations but with the underlying phonemic (or musical) structure. A third important question when running reverse-correlation experiments concerns how many trials are necessary to obtain stable kernels. Theoretically, the optimal number of trials depends not only on the number of points that are manipulated (i.e. the size of the BPF, e.g. 7 time points in case study 1), but also on the number of points that actually drive participants' judgments (e.g. in case study 1, only 2 or 3 points, the weights given to other time points being virtually null), the complexity of the judgement (and how far from linear it is), the suitability of the dimension considered to probe the mental representation (here, pitch), as well as the amount of internal noise. In practice, it is therefore almost impossible to predict the required number of trials without having collected preliminary data. To illustrate the "speed" at which our measure of the kernel converges towards the final measure in data from case-study 1, we computed the correlation between the kernel derived using the n first trials and the kernel derived using all 700 trials, for increasing values of n (Fig 6). Fig 6: Correlation between the kernel derived using the n first trials of case-study 1 and the kernel derived using all trials (n = 700); n varying by steps of 20 trials (individual curves: thin colored lines; average ± sem: thick black line with shaded error bars). These curves reflect the "speed" at which our measure converges toward the final kernel estimate (insets show temporary kernels at stages n = 100, n = 300 and n = 700, averaged across subjects in black, and for the subject with the slowest convergence in yellow). https://doi.org/10.1371/journal.pone.0205943.g006 
For most subjects, the correlation already reaches 0.8 when considering only the first 100 trials; but for some subjects (e.g. the yellow line), it takes 300 trials to reach the same correlation. Overall, it is clear that the choice of 700 trials for that task was an overestimate, and that we could have reached the same precision with fewer trials. Other tasks, however, may require several thousand trials per participant to reach the same correlation. A good practice when using this method therefore seems to be to run a few participants before the main experiment, and to determine how many trials are required to reach stable estimates. On a technical level, another possible direction for improvement is the addition of other high-level manipulation dimensions. As an example, spectral envelope manipulation (optionally formant-driven) allows powerful transformations related to timbre, speaker identity and even gender, all of which could be manipulated by CLEESE [62]. At an even higher abstraction level, research could consider more semantically-related features (such as the mid-level audio descriptors typically used in machine learning methods) or even feature-learning approaches that would automatically derive the relevant dimensions prior to randomization [63]. Adding these new dimensions will likely require a trade-off between their effectiveness for stimulus randomization and their suitability for physical interpretation. Finally, improvements of the current phase vocoder implementation in CLEESE may include additional modules such as envelope or transient preservation to further improve the realism of the transformations.
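For concreteness, here is a minimal sketch of the first-order kernel computation used in both case studies, together with the convergence analysis of Fig 6. It assumes each stimulus of each two-interval trial is stored as one row of random BPF values with a flag indicating whether that interval was the one chosen by the participant; function names are ours, not part of CLEESE.

```python
import numpy as np

def first_order_kernel(contours, responses):
    """Compute a first-order temporal kernel from a reverse-correlation block.

    contours : array of shape (n_stimuli, n_points), the random pitch (or log
               time-stretch) values applied on each presented stimulus.
    responses: boolean array of shape (n_stimuli,), True when that stimulus
               was chosen (e.g. judged 'interrogative' or 'better sung').
    Returns the normalized kernel: mean chosen contour minus mean non-chosen
    contour, divided by the absolute sum of its values.
    """
    contours = np.asarray(contours, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    kernel = contours[responses].mean(axis=0) - contours[~responses].mean(axis=0)
    return kernel / np.abs(kernel).sum()

def convergence_curve(contours, responses, step=20):
    """Correlation of the kernel from the first n stimuli with the full-data
    kernel, for n increasing in the given steps (as in Fig 6)."""
    full = first_order_kernel(contours, responses)
    ns = range(step, len(responses) + 1, step)
    return [(n, np.corrcoef(first_order_kernel(contours[:n], responses[:n]),
                            full)[0, 1]) for n in ns]
```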
Automated Classification of Left Ventricular Hypertrophy on Cardiac MRI : Left ventricular hypertrophy is an independent predictor of coronary artery disease, stroke, and heart failure. Our aim was to detect LVH cardiac magnetic resonance (CMR) scans with automatic methods. We developed an ensemble model based on a three-dimensional version of ResNet. The input of the network included short-axis and long-axis images. We also introduced a standardization methodology to unify the input images for noise reduction. The output of the network is the decision whether the patient has hypertrophy or not. We included 428 patients (mean age: 49 ± 18 years, 262 males) with LVH (346 hypertrophic cardiomyopathy, 45 cardiac amyloidosis, 11 Anderson–Fabry disease, 16 endomyocardial fibrosis, 10 aortic stenosis). Our control group consisted of 234 healthy subjects (mean age: 35 ± 15 years; 126 males) without any known cardiovascular diseases. The developed machine-learning-based model achieved a 92% F1-score and 97% recall on the hold-out dataset, which is comparable to the medical experts. Experiments showed that the standardization method was able to significantly boost the performance of the algorithm. The algorithm could improve the diagnostic accuracy, and it could open a new door to AI applications in CMR. Introduction Cardiovascular diseases are the leading cause of death in developed countries [1,2]. Cardiovascular magnetic resonance (CMR) provides functional and morphological information of the heart for the evaluation, management, and diagnosis of patients with suspected or established cardiovascular disease. CMR is a multi-parametric, non-invasive imaging modality, which is considered the gold standard for the assessment of global and regional function and is able to evaluate myocardial perfusion and viability, tissue characterization, and coronary artery anatomy [3]. Left ventricular hypertrophy (LVH) is present in 15% to 20% of the population. It is more common in Afro-Americans and in patients with hypertension and obesity [4]. LVH is an independent predictor of future cardiovascular events, including coronary heart disease, heart failure, and stroke, regardless of its etiology [5,6]. The definition of LVH is an increase in left ventricular mass either due to an increase in wall thickness, an increase in cavity size, or both. In clinical practice, LVH is a common condition, which can be caused by diverse physiological and pathological mechanisms such as athlete's heart, hypertension, aortic stenosis, hypertrophic cardiomyopathy, infiltrative heart muscle disease, storage, and metabolic disorders (amyloidosis, Anderson-Fabry disease, etc.). LVH can develop silently over several years without symptoms, and it can be difficult to diagnose. The electrocardiogram (ECG) is a useful, but less sensitive tool for detecting LVH. The utility of the ECG lies in its relative inexpensiveness and wide availability. Its limitations stem from its moderate sensitivity or specificity, depending need for additional examinations. During the post-process evaluation, it could improve the diagnostic accuracy by recognizing a milder, incipient form of LVH, which can be challenging for the less-experienced readers. The early detection of LVH and appropriate therapy will decrease cardiovascular morbidity and mortality [38]. In this paper, steps toward this ambition were made by developing an algorithm that considers more views of the heart and classifies the patient's hearts as normal or exhibiting hypertrophy. 
The algorithm we developed achieved results comparable to the human readers. Its high recall and sufficient precision allow for its use in an on-site setting, potentially causing the operators to change the CMR protocol (e.g., to administer the contrast agent, acquire late enhancement images, etc.) if hypertrophy is suspected. During the CMR examination, usually, the long-axis cine images are acquired first, then the short-axis cine images, then the late enhancement images if needed. We found that if the algorithm is restricted to only use long-axis cine images, it is still sufficient to alert the operator in order to select an appropriate CMR protocol, but might be limited in some selected cases. The rest of the paper is structured the following way: In Section 2, we introduce the dataset we utilized during our research, then we describe how our method works. In Section 3, we report the experimental results on a hold-out dataset and we make a comparison to the human-level performance. Section 4 describes our concluding thoughts. Materials and Methods The goal of this research is to develop an algorithm for hypertrophy classification from CMR scans. The scans contain more views: short-axis, long-axis. Our dataset was collected from the database of the The Heart and Vascular Center of Semmelweis University. Our method is based on the raw image scans with all available views, and the classification result is the direct output; we did not calculate intermediate features such as wall thickness. Dataset After the exclusion of patients with poor image quality, we investigated 428 patients (mean age: 49 ± 18 years, 262 males) with left ventricular hypertrophy in whom CMR examination was clinically indicated and 234 healthy subjects (mean age: 35 ± 15 years; 126 males) without any known cardiovascular diseases as a control group. The patients underwent CMR examination in our tertiary referral center between January 2009 and February 2019. Out of the 428 LVH patients, 346 had HCM (age: 46.9 ± 18.2 144 males), 45 patients had cardiac amyloidosis (age: 63.9 ± 9.7 years, 26 males), 11 patients had Anderson-Fabry disease (age: 48.3 ± 12.9 years, 7 males), 16 patients had endomyocardial fibrosis (age: 46.4 ± 14.3, years 9 males), and 10 patients had aortic stenosis (age: 63.4 ± 17.5 years, 5 males). Appendix C shows example images. CMR examinations were performed on a 1.5 T magnetic resonance (MR) scanner (Achieva, Philips Medical Systems) using a cardiac coil. ECG gated balanced steady-state free precession (bSSFP) cine images were acquired in the three standard long-axis views: 2-chamber, 4-chamber, and LV outflow tract views. The protocol used for cine images in the present study was described in detail in a previous publication [39]. Short-axis (SA) images were also acquired with the full coverage of the left ventricle. Model Architecture The algorithm decides whether the patient has hypertrophy. The input to the algorithm is created from CMR scans of 4 views (axis). We used multi-view data because hypertrophy classification is challenging and different views provide different information. It is possible to see a pattern on a short-axis image that cannot be seen on the long-axis images or the other way around. 
The input images were collected from 4 views: • Short-axis images from the apex to the base at different stages of the cardiac cycle; • Long-axis, two-chamber images at different stages of the cardiac cycle (heart beat); • Long-axis, three-chamber images at different stages of the cardiac cycle; • Long-axis, four-chamber images at different stages of the cardiac cycle. The usage of all the images from the short-axis scan has difficulties. The input would be too big, and the number of images were not the same for all patients. Therefore, for the short-axis view, we took three images in each second phase of the cardiac cycle. At each chosen cardiac cycle, we used one image from the basal, one from the mid, and one from the apical region, resulting in 36 images; see Figures 1 and 2. In the case of the long-axis views, we took each second image from the cardiac cycle, resulting in 12 images; see Figure 3. The model is an ensemble of the extractors of the separate views. Images from each view are fed into a separate network to extract features. The features are concatenated, then the ensemble classifier is applied to obtain the prediction (normal or exhibiting hypertrophy); see Figure 4. The architecture of the extractor for the best-performing model can be seen in Figure 4, left side. We used the same extractor for each view. The extractors were trained separately; therefore, a temporary layer was applied to create a temporary classifier. The architecture of the temporary layer can be seen in Table 1. After the extractors are trained, an ensemble model is created with an ensemble classifier; see Figure 4, right side, under the classifier block. The models were built from residual blocks, with each block containing 3-dimensional convolutions and batch normalizations; see Figure 4, bottom part. The reason for the 3D convolution is the positive effect of considering the time dimension of the input (how the heart moves). For further elaboration on the performance and the choices we made, see the details in Section 3.3. The architecture of the 3D ResNet blocks can be seen in the middle of the image. The ResidualBlock and the ResBlockPooling differ in the strides. For the pooling block, the first convolution and the convolution on the skip branch have stride 2; otherwise, it is 1. The activations are ReLUs, which were applied after the batch normalization layers. In the ResidualBlock version, we applied padding in each convolution, while in pooling, we applied padding in the last convolution of the straight branch. Padding was: (k − 1)/2 for each dimension. The kernel sizes were chosen as odd values in each case. Preprocessing and Data Augmentation Before the images are fed into the model, two main steps are executed: (1) preprocessing and (2) augmentation. The augmentation is the same for each of the views, but preprocessing contains an additional step for the long-axis views. Preprocessing always applies noise reduction by cropping the intensity values between the 1st and 99th percentiles. Then, the images are normalized into a 0-1 interval. For the long-axis views, the images are standardized because their orientation shows high variance. Standardization is achieved by a superposition to a reference frame calculated for each view separately. The reference frame is given as the normal vector of a typical image for a given view. The superposition applies mirroring and a rotation around the center point of the image to be preprocessed. 
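Returning to the residual blocks described earlier in this section, the following PyTorch sketch illustrates their structure (3D convolution, batch normalization, ReLU after BN, and a stride-2 pooling variant with a convolution on the skip branch). It is a simplified reconstruction rather than the authors' code: channel counts are placeholders, and the padding scheme of the pooling variant (padding only on the last convolution of the straight branch in the paper) is applied to every convolution here for brevity.

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """3D residual block: two Conv3d/BatchNorm3d pairs with ReLU after each BN
    and a skip connection. pooling=True gives the ResBlockPooling variant,
    where the first convolution of the main branch and the skip convolution
    use stride 2. Padding is (k - 1) // 2 per dimension (odd kernel sizes)."""

    def __init__(self, in_channels, out_channels, kernel_size=3, pooling=False):
        super().__init__()
        stride = 2 if pooling else 1
        pad = (kernel_size - 1) // 2
        self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size,
                               stride=stride, padding=pad, bias=False)
        self.bn1 = nn.BatchNorm3d(out_channels)
        self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size,
                               stride=1, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm3d(out_channels)
        if pooling or in_channels != out_channels:
            self.skip = nn.Sequential(
                nn.Conv3d(in_channels, out_channels, kernel_size,
                          stride=stride, padding=pad, bias=False),
                nn.BatchNorm3d(out_channels))
        else:
            self.skip = nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))
```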
Appendix A gives further insight into the details of the standardization. In Section 3.3, further details are given about the effect of the standardization on performance. The augmentation consists of a random rotation and additive Gaussian noise.

Training Scheme
We trained the model in two stages. This training process falls into the supervised learning paradigm, because we have the ground-truth pathologies for each scan. The dataset was unbalanced; therefore, we sampled the normal group with higher probability to equalize the occurrences of hypertrophic and normal samples in the training batches; see Section 2.1 for the ratio. The dataset was split into three parts: training (70%), validation (15%), and testing (15%). The test set was created only once, and we kept it until the final test with the best model chosen on the validation set. We repeated the training with each parameter setting three times to assess the stability of the results. In each repetition, the training and validation parts were resampled. First, the feature extractors were trained separately to predict whether the patient had hypertrophy. For this part, we used a temporary layer at the end of each extractor to create a classifier. Then, the temporary layer was removed, and the ensemble model was built. For combining the outputs of the feature extractors, we concatenated the features (the long-axis features were padded in the depth dimension), then fed them into the classifier; see Figure 4. The whole ensemble was then trained, with the feature extractors' weights frozen. The training was applied on different combinations of the possible views. The combinations were based on realistic scenarios, because the earlier we can detect hypertrophy, the faster the operators can react during the scanning procedure. In a clinical setting, the examination mostly follows the same order of views: during the CMR examination, the typical order was the long-axis views, then the short-axis view. It is therefore important to test the algorithm using only the long-axis views, only the short-axis view, and then their combination. The parameters of the best model can be seen in Table A1.

Human Evaluation
The performance of the algorithm was also compared to human experts (hereafter readers). The design of the evaluation simulated a realistic setup for an everyday examination procedure. The readers were asked to read CMR scans of 117 subjects, but they were not told the real purpose of the study. For each subject, a very brief patient history was provided (without any clear reference to the real disease), along with the images of a full MRI scan. This included the short-axis and long-axis images. For the analysis, we included CMR scans from the normal group as well as the following pathologies: acute or chronic myocardial infarction, dilated cardiomyopathy, Takotsubo cardiomyopathy, and acute myocarditis. The list contained the most frequent pathologies encountered during regular assessments. We also included different cardiac pathologies that can cause LVH (HCM, Anderson-Fabry disease, amyloidosis, aortic stenosis, and endomyocardial fibrosis). The reason for including pathologies other than hypertrophy was to avoid bias during the evaluation. Overall, six experts finished the experiment. Two of them were senior colleagues (25 and 10 years of experience), three were at the mid-senior level (4-7 years of experience), and one was a junior (2 years of experience).
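The two-stage scheme described under Training Scheme can be outlined as follows. This is a simplified sketch with assumed module and loader names (a per-view extractor with a temporary head, then an ensemble classifier trained on frozen extractors); it is not the code used in our experiments.

import torch
import torch.nn as nn

def train_two_stage(extractors, ensemble_head, loaders, feature_dim, epochs=30):
    # Stage 1: train each view's extractor with a temporary classification head.
    for view, extractor in extractors.items():
        temp_head = nn.Linear(feature_dim, 2)          # temporary layer, discarded later
        params = list(extractor.parameters()) + list(temp_head.parameters())
        opt = torch.optim.AdamW(params, lr=5e-4)
        for _ in range(epochs):
            for x, y in loaders[view]:
                loss = nn.functional.cross_entropy(temp_head(extractor(x).flatten(1)), y)
                opt.zero_grad(); loss.backward(); opt.step()
    # Stage 2: freeze the extractors and train only the ensemble classifier
    # on the concatenated per-view features.
    for extractor in extractors.values():
        for p in extractor.parameters():
            p.requires_grad = False
    opt = torch.optim.AdamW(ensemble_head.parameters(), lr=5e-4)
    for _ in range(epochs):
        for batch, y in loaders["combined"]:
            feats = torch.cat([extractors[v](batch[v]).flatten(1) for v in extractors], dim=1)
            loss = nn.functional.cross_entropy(ensemble_head(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()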
Results
We show experimentally that the algorithm described in Section 2 can achieve performance comparable to that of human experts.

Results of the Human Evaluation
The human evaluation established the baseline against which the algorithm is compared. Table 2 shows the results. The Overall row gives the accuracy of the diagnosis of each expert over all 117 subjects, including all the pathologies. In the Hyp-Norm row, the pathologies are grouped into two groups, normal and hypertrophy, the latter including all the LVH etiologies considered earlier in this paper. A reader's prediction was considered correct if the predicted pathology fell into the hypertrophy group; the etiology did not have to be accurate. In the HCM row, we measured the accuracy of differentiating between the patients with HCM and the other cardiac disorders that typically present with LVH. In the last three rows, precision, recall, and F1-score were calculated for the Hyp-Norm case; hypertrophy was treated as the positive class in the confusion matrix. Comparing the consistency among the experts for three groups (normal, hypertrophy, and the rest), we found consistency values of 83%, 71%, and 91%, respectively. Consistency is defined as agreement among at least five radiologists. The high recall and the lower consistency for the normal group indicate that radiologists tend to classify healthy patients as having a condition. This is understandable, as a false positive can easily prove to be negative after some further examinations. In contrast, false negatives can lead to delayed and inappropriate patient care.

Performance of the Algorithm
The performance of the best model can be seen in Table 3. The table shows that using only the LA views was already enough to achieve results comparable to the human readers, taking the standard deviations (3-4%) into account. This is important, because the contrast agent can be injected right after the long-axis measurements (if the algorithm indicates it and the experts accept it), and the short-axis cine images can then be acquired, since the late enhancement images can only be acquired at least 10 min after contrast material administration. This approach can save a significant amount of time and can also warn the on-site medical staff that the MRI protocol should be changed in order to avoid further, unnecessary examinations. The box plots in Figures 5 and 6 were calculated by repeating the test evaluation on 20 randomly sampled subsets of the test data; in each sample, we used 70% of the test data. This method is similar to bootstrapping. Both figures show the same relative performance. The algorithm using only the LA views had lower performance, but when short-axis and long-axis views were combined, the human-level and algorithm scores became close to each other, especially in the case of recall. The results showed lower F1-score and recall for the short-axis-only case (see Table 3), which can be a result of the higher complexity of the data; more samples for the SA case could improve the performance. Similarly, the algorithm (SA+LA) had lower performance than the experts, but we expect that a larger dataset would reduce the gap. Figure 5. Comparison of the human (expert) and algorithm (auto) performances. The p-value between auto (LA) and auto (SA+LA) is lower than 0.001, which means that using the short-axis images contributed to a significantly better performance. Between the auto (SA+LA) and expert groups, the p-value was also less than 0.001.
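The Hyp-Norm scoring used in Table 2 can be reproduced in a few lines. The grouping below follows the description above, while the label strings and the helper name are our own illustrative choices.

import numpy as np

# Any LVH etiology counts as the positive (hypertrophy) class, regardless of whether
# the exact etiology was named correctly; every other diagnosis counts as negative.
LVH_GROUP = {"HCM", "amyloidosis", "Anderson-Fabry", "endomyocardial fibrosis", "aortic stenosis"}

def hyp_norm_scores(true_labels, predicted_labels):
    y_true = np.array([t in LVH_GROUP for t in true_labels])
    y_pred = np.array([p in LVH_GROUP for p in predicted_labels])
    tp = np.sum(y_true & y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1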
For calculating the p-values, we used two-sample t-tests. Figure 6. Comparison of the human (expert) and algorithm (auto) performances. High recall is beneficial because the algorithm can identify samples suspicious of hypertrophy with a high probability. The false positives can be handled by the experts who supervise the examination. The p-value is less than 0.001 between the auto (LA) and auto (SA+LA) groups. When comparing the auto (SA+LA) and expert groups, we obtained a p-value of 0.3, indicating that there was no statistically significant difference between them. Therefore, auto (SA+LA) was statistically indistinguishable from the expert group in terms of recall.

Ablation Study
We executed several experiments before we arrived at the final model, data processing, and parameter choices. In this subsection, we briefly summarize our findings. We cover the three main aspects of the algorithm: model selection, data preprocessing, and hyper-parameter tuning. The above order does not represent the order of our experiments; it was chosen in order to explain our experience in a more logical fashion. We did not measure every possible combination of choices; therefore, we describe the tendencies we observed for the different choices.

Model selection. We tried three main architectures. The first architecture was a fully convolutional model with 4-5 convolutional layers, assuming that the ensemble model with multiple views could achieve good results overall without needing strong learners per view. Our results indicated that bigger networks would be required to achieve scores (accuracy, F1-score, etc.) around 90 percent. The second architecture was similar to ResNet with two-dimensional convolutions. The time dimension of the long-axis views was stacked to form a 12-channel image. The structure was similar to the ResNet described in Section 2.2. We observed a significant performance gain (around 3-4 percentage points) as the model size reached 8 residual blocks, i.e., 16 convolutional layers overall. Further increasing the size did not affect performance significantly; one reason for that may be the size of the dataset. During the data-preprocessing-related changes, we came to the conclusion that taking the time dimension into account (basically the movement or dynamic patterns of the heart) had a major effect on the results (over six percentage points in the case of the short-axis views). Therefore, we created a 3D convolution-based ResNet model to properly handle the time dimension: we formed a 3D volume, with time as the depth dimension of the image. This model performed better and more robustly (it was less sensitive to the hyper-parameters). However, the drawback of the 3D ResNet lies in its slow training speed. As the performance on the short-axis view was worse, we tried to increase the model size for this view only, but this did not cause relevant changes. Finally, we used the same architecture for all the views.

Data preprocessing. Data preprocessing and the input representation fed to the network proved to be the most important factors. To speed up the training, we first tried using less input data: only two images from the long-axis views, one from the systolic phase and one from the diastolic phase, and six images from the short-axis view, three at the systolic and three at the diastolic phase. This input formation resulted in fair accuracy values (around 84 percent), but it turned out that taking images from other points of the cardiac cycle contributed to better results.
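The resampling scheme behind Figures 5 and 6 and the two-sample t-tests can be sketched as follows; the scoring function, array names, and seed are placeholders, and the snippet is an illustration rather than the exact evaluation script.

import numpy as np
from scipy import stats

def resampled_scores(score_fn, y_true, y_pred, n_rounds=20, frac=0.7, seed=0):
    """Repeat the evaluation on random subsets containing 70% of the test cases."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        scores.append(score_fn(y_true[idx], y_pred[idx]))
    return np.array(scores)

# Example comparison of two models (e.g., LA-only vs. SA+LA) on the same test set:
# recall_la = resampled_scores(recall, y_true, y_pred_la)
# recall_sala = resampled_scores(recall, y_true, y_pred_sala)
# t_stat, p_value = stats.ttest_ind(recall_la, recall_sala)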
Standardization (see Appendix A) had a very important role in achieving the final results. We identified the long-axis views as noisy as a result of the different orientations of the images; this was not the case for the short-axis view. One way to cope with this is to use random rotations with angles between 0 and 180 degrees for augmentation. We found this approach to be inefficient in helping the learning process, whereas the standardization method led to a significant performance gain. After standardization, we therefore used only a small, eight-degree rotation angle during augmentation. We also used cropping and some noise during augmentation.

Hyper-parameter tuning. Once a model and a data preprocessing method were chosen, there were some hyper-parameters to optimize. These were the batch size, number of epochs, learning rate, optimization algorithm, loss function, regularization method and its parameters, and the cropping size of the image. We chose a batch size of 16, because 8 made the training too noisy and larger batch sizes required too much memory. The number of epochs was chosen between 20 and 50, and we used early stopping to avoid overfitting. We found that the AdamW [40] algorithm with learning rate 5 × 10^-4 achieved better results than Adam, SGD, and RMSProp. We used focal loss [41], because focal loss can distinguish the easy samples from the difficult ones by applying a factor ((1 − p)^γ) that reduces the loss for the well-classified samples. Our intuition was that the samples contained some very difficult cases (due to etiologies such as amyloidosis, which is difficult to diagnose), and therefore, focal loss could help. In our experiments, we found L1 and L2 regularization to be harmful, and dropout with large rates was disadvantageous. This can be explained by the observation that batch normalization has some regularization effect, which can eliminate the need for dropout [42], and our 3D ResNet contains batch normalization layers. The final cropping size of the input image proved to be 150 × 150; smaller (120 × 120) and larger (190 × 190) sizes were worse. With the larger size, the image can contain too much noise, while the smaller crop can miss some details, since the heart is not always at the center of the image.

Discussion and Conclusions
Cardiovascular diseases are the leading causes of death around the world [1,2,43]. LVH is a well-recognized independent risk factor for several cardiovascular complications [5]. The diagnosis of LVH can be challenging. Several methods are used for this in clinical practice, such as electrocardiography, echocardiography, and CMR. CMR is a non-invasive tool for diagnosing myocardial pathologies. CMR-based hypertrophy detection can be more efficient and reliable and may improve the diagnostic workflow so that LVH is recognized at an earlier stage. We developed a deep-learning-based algorithm for identifying left ventricular hypertrophy during a CMR examination (on-site) and for helping the diagnostic process following the examination (off-site). The on-site application can save time if the algorithm indicates the presence of LVH right after the long-axis measurements: the additional necessary images can then be acquired and contrast administration can be applied. With the on-site application, the CMR protocol can be changed during the scanning, in order to avoid the need to call the patient back for an additional CMR examination to provide the correct diagnosis.
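Returning briefly to the loss function mentioned in the hyper-parameter tuning paragraph above, a generic binary focal loss has the following form; this is an illustrative sketch (the function name, γ, and the class weighting are placeholder defaults, not the tuned values used in the experiments above).

import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.5):
    # targets are 0/1 floats; focal loss down-weights well-classified
    # samples by the factor (1 - p_t)^gamma.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = targets * p + (1 - targets) * (1 - p)        # probability of the true class
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()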
Nevertheless, if the algorithm is used during post-processing evaluation, it can warn the reader that LVH is present, so the diagnostic accuracy can be improved. This is important because the identification of the incipient or milder forms of LVH is difficult for less-experienced readers, and early detection of LVH and subsequent therapy are key factors in reducing cardiovascular morbidity and mortality [38,44]. Our algorithm achieved a performance close to the medical experts' (readers') scores. Our comparison was based on the F1-score, precision, and recall. The model we implemented was an ensemble model: each view had a separate extractor, and the features extracted from the acquired images were concatenated. An ensemble classifier then took the concatenated features as input and calculated the probability of having LVH. The dataset was collected from the Heart and Vascular Center of Semmelweis University, and it contains the raw image scans with all available views (long-axis and short-axis cine images) and the corresponding pathologies. Our algorithm had a recall of 90% when the combination of long-axis views was used as the input; with the combination of long-axis and short-axis views, the recall was 96%. The corresponding F1-scores were 89% and 91%, respectively. High recall is beneficial, because fewer LVH cases will be left undiagnosed, while false positives (predicted as LVH, yet normal) can be discarded by the experts supervising the examination. In order to judge the applicability of our method, we established a baseline by measuring the scores of medical experts. The measurement involved six readers with varying levels of experience. It was designed to simulate a realistic clinical scenario where the reader has no clear reference to the true diagnosis but has access to the images of a full CMR scan. To make it more realistic, we included several other diseases in addition to LVH. We included diseases that appear frequently in clinical practice, and the readers were blinded to the purpose of the study. There are three main outcomes of the human experiment: (1) the differences among the scores (F1-score, recall, etc.) of the readers were surprisingly small; (2) recall was the highest score, indicating that the readers had a bias toward diagnosing a cardiac disease; (3) we obtained the baseline values for the scores (F1-score 95%, recall 98%); see Table 2. High recall was also achieved by our algorithm in the case of the combined long-axis and short-axis model. Figures 5 and 6 indicate that our algorithm can already be advantageous in clinical practice, even though there is still room for improvement. We expect that by using a larger dataset, the gap can be bridged and that this method can become a good candidate for the daily clinical routine during CMR examinations. Our method was limited to a single vendor and clinical center. To create a more robust method, the model should be trained on data gathered from different clinical centers and vendors. Another limitation is the classification of etiologies. The current method differentiates between two groups, normal (healthy) subjects and hypertrophy. There are different etiologies of hypertrophy (e.g., HCM, amyloidosis), which can be differentiated by including late enhancement images. We excluded healthy athletes from the dataset, but LVH can be present as a physiological condition in the athlete's heart; therefore, it would be an interesting topic to differentiate between physiologic and pathologic LVH.
To the best of our knowledge, this is the first paper in which a method for the automatic classification of LVH from different CMR images (short-axis and long-axis cine images) was investigated and compared to medical experts. Future work can focus on automatically separating the etiologies within LVH. Sports-related LVH should also be addressed in order to create a more complete methodology.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available upon request from the corresponding author. The data are not publicly available; acquiring the dataset requires the permission of the local institutional board.

Conflicts of Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Standardization
The method for standardizing the long-axis images is based on defining a reference system relative to a fixed axis. This axis is the Z-axis, which points from the feet to the head of the patient, running parallel with the bore of the MRI. Each acquired image has a plane, which can be described in this coordinate system. The plane is characterized by its normal vector and its orientation; the orientation is relative to the Z-axis. The images are stored in DICOM files, which contain the orientation and position matrices. The standardization is achieved by a rotation with a proper angle around the axis that is parallel with the normal vector and crosses the middle of the image; see Figure A1. First, the algorithm calculates the normal vector of the image from the orientation matrix. The orientation matrix contains the directions of the left side and the upper side of the image (e and f). Therefore, the normal vector is: n = e × f. (A1) The normal vectors are almost the same for each view. Then, a new reference frame can be calculated (p and q): q = z × n, p = q × n (A2) where z = (0, 0, 1), and p, q are then normalized. The orientation is defined as the direction of e in the p, q plane: d = [e · p, e · q]. (A3) We can define a reference orientation (d0); then each image can be compared and rotated against the reference orientation. To decrease the size of the required rotation angle, we calculated the average orientation of the images in the dataset per view and defined the reference orientations according to the average values. For the sake of completeness, these values were: LA2 (−0.937, 0.166), LA4 (0.632, 0.032), and LALVOT (−0.0054, −0.635). The rotation angle (ϕ) is the angle that takes the orientation d onto the reference orientation d0. (A4)

Appendix C. Example Images
The following images show examples of different heart conditions: normal, HCM, amyloidosis, and Anderson-Fabry disease. In each row of pictures, the views from left to right are the following: short-axis, long-axis 2-chamber, long-axis 4-chamber, and long-axis 3-chamber view.
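A compact sketch of this geometry is given below. The signed-angle computation via arctan2 is our assumed reading of Equation (A4), which is only described implicitly above, and the argument names are likewise assumptions.

import numpy as np

def standardization_angle(e, f, d0):
    """e, f: in-plane direction vectors from the DICOM orientation matrix;
    d0: per-view reference orientation (e.g., the averaged values listed above)."""
    n = np.cross(e, f)                                   # image normal, Equation (A1)
    z = np.array([0.0, 0.0, 1.0])
    q = np.cross(z, n); q = q / np.linalg.norm(q)        # reference frame, Equation (A2)
    p = np.cross(q, n); p = p / np.linalg.norm(p)
    d = np.array([np.dot(e, p), np.dot(e, q)])           # in-plane orientation, Equation (A3)
    # assumed form of Equation (A4): signed angle rotating d onto the reference d0
    phi = np.arctan2(d0[1], d0[0]) - np.arctan2(d[1], d[0])
    return phi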
Concentration analysis and cocompactness

Loss of compactness that occurs in many significant PDE settings can be expressed in a well-structured form of profile decomposition for sequences. Profile decompositions are formulated in relation to a triplet $(X,Y,D)$, where $X$ and $Y$ are Banach spaces, $X\hookrightarrow Y$, and $D$ is, typically, a set of surjective isometries on both $X$ and $Y$. A profile decomposition is a representation of a bounded sequence in $X$ as a sum of elementary concentrations of the form $g_kw$, $g_k\in D$, $w\in X$, and a remainder that vanishes in $Y$. A necessary requirement for $Y$ is, therefore, that any sequence in $X$ that develops no $D$-concentrations has a subsequence convergent in the norm of $Y$. An imbedding $X\hookrightarrow Y$ with this property is called $D$-cocompact, a property weaker than, but related to, compactness. We survey known cocompact imbeddings and their role in profile decompositions.

Introduction
Convergence of functional sequences is easy to obtain in problems where one can invoke compactness, typically via a compact imbedding X ֒→ Y of two Banach spaces. In many cases, however, one deals with problems in functional spaces that possess some non-compact invariance, such as translational or scaling invariance, which produces non-compact orbits and thus makes any invariant imbedding trivially non-compact. Fortunately, the same set of invariances that destroys compactness can be employed to restore it. This approach, historically called the concentration compactness principle, emerged in the 1980s from the analysis of concentration phenomena by Uhlenbeck, Brezis, Coron, Nirenberg, Aubin and Lions, and remains widely used, in a standardized formulation in terms of sequences of measures, due to Willem and Chabrowski. The early concentration analysis has since been refounded by two systematic theories developed in mutual isolation, a functional-analytic one ([45], with origins in [49]), and a wavelet-based one, in function spaces (see Bahouri, Cohen and Koch [7], whose origins are in Gérard's paper [28]). The purpose of this survey is to present current results of general concentration analysis as well as some areas of its advanced applications, such as elliptic problems of Trudinger-Moser type and "mass-critical" dispersive equations, where cocompactness of Strichartz imbeddings was proved and employed by Terence Tao and his collaborators. The key element of the cocompactness theory is the premise that the loss of compactness of an imbedding of two functional spaces, which in many cases is attributed to the scaling invariance u → t^r u(t·) (which leads to localized non-compact sequences of "blowups" or "bubbles" t_k^r w(t_k ·), t_k → ∞), can be caused by invariance with respect to any other group of operators acting isometrically on the two imbedded spaces X ֒→ Y. Furthermore, proponents of the cocompactness theory insist that there are many concrete applications which involve such operators ("gauges" or "dislocations") that are quite different from the Euclidean blowups. Indeed, recent literature contains concentration analysis of sequences in Sobolev and Strichartz spaces, involving actions of anisotropic or inhomogeneous dilations, of isometries of Riemannian manifolds (or, more generally, of conformal groups on sub-Riemannian manifolds and other metric structures), and of transformations in the Fourier domain.
Improvement of convergence based on the elimination of concentration (understood in the abstract sense as terms of the form g_k w, where {g_k} is a noncompact sequence of gauges) can be illustrated in the sequence spaces by an elementary example based on Proposition 1 of [29]. Example. The imbedding ℓ^p(Z) ֒→ ℓ^q(Z), 1 ≤ p < q ≤ ∞, is not compact, since sequences of the form u(· + k) converge to zero weakly in ℓ^p but have constant ℓ^∞-norm. Let us decide that u(· + j_k) with |j_k| → ∞ is a typical "concentrating behavior" and eliminate it from a given sequence u_k ∈ ℓ^p by assuming that for any sequence j_k ∈ Z, one has u_k(· + j_k) ⇀ 0 in ℓ^p. Then u_k(j_k) → 0 in R for any sequence j_k ∈ Z, which implies u_k → 0 in ℓ^∞. Since ‖u‖_q^q ≤ ‖u‖_∞^{q−p} ‖u‖_p^p, one also has u_k → 0 in ℓ^q for any q > p. We conclude that elimination of possible "concentration" caused by shifts u → u(· + j), j ∈ Z, assures convergence in ℓ^q. This example gives motivation to the following definitions. Definition 1.1. (Gauged weak convergence.) Let X be a Banach space, and let D be a bounded set of bounded linear operators on X containing the identity operator. One says that a sequence u_k ∈ X converges to zero D-weakly if g_k u_k ⇀ 0 for any choice of sequence (g_k) ⊂ D. We denote the gauged weak convergence by u_k D⇀ 0. Definition 1.2. Let X be a Banach space continuously imbedded into a Banach space Y. One says that the imbedding X ֒→ Y is cocompact (relative to the set D) if u_k D⇀ 0 implies u_k → 0 in the norm of Y. Definition 1.3. One says that the norm of Y provides a (local) metrization of the D-weak convergence on X if, for any bounded sequence u_k ∈ X, u_k → 0 in the norm of Y if and only if u_k D⇀ 0. Note that we speak only about local metrization, on the balls of X, and that convergences in different norms, for sequences restricted to such balls, become equivalent. For example, all ℓ^q-convergences with q > p ≥ 1 are equivalent on a ball of ℓ^p, and all L^q-convergences with q ∈ (p, pN/(N−p)) are equivalent on a ball of W^{1,p}(R^N), 1 ≤ p < N (by the limiting Sobolev imbedding and the Hölder inequality). Example. Let X = H^1_{0,rad}(B) be the subspace of all radial functions in the Sobolev space H^1_0(B), where B is the unit ball in R^3, and let Y = L^6(B). The lack of compactness of the imbedding X ֒→ Y is demonstrated by the blowup sequences t_k^{1/2} w(t_k ·) with t_k → ∞ and w ∈ X fixed. If, however, u_k is a bounded sequence in X and all its "deflation sequences" t_k^{−1/2} u_k(t_k^{−1} ·), with t_k → ∞, converge to zero weakly (in D^{1,2}(R^3), with functions extended by zero), the sequence u_k has a subsequence convergent in Y. This we express as D-cocompactness of the imbedding H^1_{0,rad}(B) ֒→ L^6(B). See Proposition 2.3 below with an elementary proof. While the term cocompact imbedding is recent, the property itself has been known for Sobolev spaces for decades, and can be traced to a lemma by Lieb [35]. Cocompactness of imbeddings relative to rescalings (actions of translations and dilations), which for the Sobolev spaces is usually credited to Lions [38], is known today for a range of Besov and Triebel-Lizorkin spaces as X and different Besov, Triebel-Lizorkin, Lorentz and BMO spaces as Y ([7], once we note that the crucial Assumption 1 in that paper, expressed in terms of wavelet bases, implies cocompactness and is in the general case equivalent to it; see the argument in subsection 2.2 below). Cocompactness of Sobolev imbeddings is known also for function spaces on manifolds, with the role of translations and dilations taken over by the conformal group. Cocompactness is also established in Trudinger-Moser imbeddings (with inhomogeneous dilations) and in Strichartz imbeddings in dispersive equations.
Section 2 gives a summary of known cocompact imbeddings. Section 3 studies profile decompositions that arise in the presence of cocompact imbeddings. These are largely functional-analytic results, established in two general cases, for Hilbert spaces and for Banach spaces imbedded into L p -spaces. Beyond that, profile decompositions have been proved for a wide range of imbeddings for spaces that have a wavelet basis of rescalings, presented in [7] (with a remainder that vanishes in a weaker sense than in the Y -norm), as well as for some Trudinger-Moser and Strichartz imbeddings. First profile decompositions in literature were found for specific sequences, typically, Palais-Smale sequences for semilinear elliptic functionals (Struwe [52], Brezis and Coron [16], Lions himself [40] and Benci & Cerami [12]). The first profile decomposition for general bounded sequences in D 1,p (R N ) was proved by Solimini [49], and, furthermore, from Solimini's argument it also became clear that a profile decomposition is essentially a functional-analytic phenomenon, and it is the cocompactness (still not a named property) that requires a substantial hard analysis. Section 4 deals with reduction of cocompactness and profile decomposition to subspaces, which in many cases results in more specific types of concentration or even if disappearance, as, for example, in case of the radial subspaces of Sobolev spaces (Strauss Lemma), as well as list few sample arguments that allow to derive cocompactness by transitivity of imbeddings or interpolation. The range of applications of concentration analysis is wider than the scope of this survey. We do not consider here in any great detail time evolution of concentration in the initial data that arises in semilinear dispersive equations, large-time emergence of concentration in evolution equations, applications to geometric problems, and blow-up arguments for sequences of solutions of PDE that may benefit from further built-in structure of the equations. The following theorem is essentially a lemma of Lieb from [35]. Assume that N > p > 1. The Sobolev imbedding W 1,p (R N ) ֒→ L q , p ≤ q ≤ p * = pN N −p , is cocompact relative to the group D, unless q = p or q = p * . The L q -norm gives a W 1,p -local metrization of the D-weak convergence. The original statement asserted only convergence in measure, but under a W 1,p -bound, so that convergence in measure implies convergence in L q , p ≤ q ≤ p * = pN N −p . Moreover, the original statement (and many other cocompactness statements in literature) is expressed in terms of negation: if a bounded sequence in W 1,p does not converge in measure, then it has a subsequence with a nonzero, under suitable translations, weak limit. Proof. Let Q = (0, 1) N . Assume that and u n (· − y n ) ⇀ 0 in W 1,p (R N ) for any sequence y n ∈ Z N . For every y ∈ Z N we have by the Sobolev inequality Adding up the inequalities over y ∈ Z N while estimating the last factor by the supremum over y, and replacing the supremum by twice the value of the term at the "worst" values of y n , we get , which converges to zero as a consequence of compactness of the subcritical Sobolev imbedding over Q. This is a model proof of cocompactness that uses partitioning of the norm of the target space into "cells of compactness", in this case involving the fundamental domain of the lattice group acting on R N , where one can benefit from compactness, followed by a reassembly of the full norm from the vanishing terms. 
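For completeness, the two displayed estimates in the proof above can be written, in a standard rendering of the argument (ours, not quoted verbatim from [35]), with C denoting the constant of the Sobolev imbedding W^{1,p}(Q) ֒→ L^q(Q):

\[
\|u_n\|_{L^q(y+Q)}^{q}
=\|u_n\|_{L^q(y+Q)}^{p}\,\|u_n\|_{L^q(y+Q)}^{q-p}
\le C^p\,\|u_n\|_{W^{1,p}(y+Q)}^{p}\,\|u_n\|_{L^q(y+Q)}^{q-p},
\qquad y\in\mathbb{Z}^N,
\]
and, after summation over \(y\in\mathbb{Z}^N\),
\[
\|u_n\|_{L^q(\mathbb{R}^N)}^{q}
\le C^p\,\|u_n\|_{W^{1,p}(\mathbb{R}^N)}^{p}\,
\sup_{y\in\mathbb{Z}^N}\|u_n\|_{L^q(y+Q)}^{q-p}
\le 2\,C^p\,\|u_n\|_{W^{1,p}(\mathbb{R}^N)}^{p}\,
\|u_n(\cdot-y_n)\|_{L^q(Q)}^{q-p},
\]
where y_n nearly attains the supremum. The last factor vanishes because u_n(·−y_n) ⇀ 0 in W^{1,p}(R^N) and the imbedding W^{1,p}(Q) ֒→ L^q(Q) is compact for q < p*.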
In the wavelet approach presented in subsection 2.2 the role similar to that of "compactness cell" is played by the mother wavelet. The limiting Sobolev imbedding is cocompact only if one enlarges the group D by the action of dilations. The following statement originates in Lions [38] with three different proofs given later by Solimini [49]; Gérard [28] (for p = 2) and Jaffard [29]; and the author [56]. This result is usually rendered in literature with the scaling factor in (0, ∞) instead of the discrete subset γ Z . Sufficiency of the discrete values can be observed by following the available proofs, or derived a posteriori. We give below a proof the statement reduced to the case of the radial subspace of D 1,p . Proof. Let u k ∈ D 1,p rad (R N ) and assume that with any j k ∈ Z, γ N −p p j k u k (γ j k ·) ⇀ 0. Let p * = pN N −p and apply the limiting Sobolev inequality to Taking the sum over j ∈ Z and estimating the last integrand in the right hand side by the Hardy inequality, we have Replace the supremum in the right hand side by twice the value at a suitable sequence j k . The integrand in this term is a radial function, uniformly bounded in L 1 ∩ L ∞ , which, as a rescaled weakly vanishing sequence, converges to zero pointwise. Consequently, the left hand side converges to zero and cocompactness is proved. To verify the local metrization, assume that u k → 0 in L p * (R N ) and that the sequence is bounded in D 1,p . Then, for any sequence g k ∈ D, the scaling invariance of the L p * -norm implies that g k u k ⇀ 0 in L p * . Then, by density, g k u k ⇀ 0 in D 1,p , and thus u k D ⇀ 0. Cocompactness relative to rescalings: the wavelet approach Imbeddings of Sobolev type are cocompact relative to the rescalings group (2.2) for a large class of spaces. The first results of this type were obtained for imbeddings of Riesz potential spacesḢ s,p (R N ) into L pN N −sp (R N ), s > 0, 1 < p < N , Gérard [28] (for Sobolev spaces with p = 2) and Jaffard [29] (for general p). Here we summarize a major generalization of their analysis in Bahouri, Cohen and Koch [7]. Let X ֒→ Y be two Banach spaces of functions on R N , and assume that there exists an unconditional Schauder wavelet basis of wavelets Γψ, same for X and Y , where and r > 0 and ψ ∈ X ("mother wavelet") are fixed. It's assumed that operators u → u(· − y), y ∈ R N , and u → t r u(t·), t > 0, are isometries in both X and Y . For each M ∈ N and for every function u ∈ X, expanded in the basis Γψ as u = g∈Γ c(g)gψ, define a subset Γ M (u) ⊂ Γ of cardinality M which corresponds to the M largest values of |c(g)|, and set Note that such set Γ M always exists (and is not unique when some |c(g)| are equal, so one fixes it arbitrarily) due to the fact that Γψ is a Schauder basis for X, and thus for any η > 0 only finitely many coefficients c(g) have their absolute value larger than η. The main condition in [7], required for the imbedding X ֒→ Y (Assumption 1), is sup Consider the wavelet expansion coefficients c k (g), g ∈ Γ, of functions u k , and let g k ∈ Γ be such that |c k (g k )| = η k := max g∈Γ |c k (g)| = max g∈ΓM |c k (g)|. Then the coefficient c ′ k (id), corresponding to the basis vector ψ in the wavelet expansion for g −1 k u k , is equal to c(g k ). At the same time c ′ k (id) → 0 since since the coefficients of expansions in a Schauder basis of a reflexive space are continuous linear functionals and Let k → ∞ and note that ǫ was arbitrary. 
Conversely, cocompactness of the imbedding relative to rescalings easily implies (2.4) if the space X and its basis Γψ satisfy the following condition: For spaces brought up below, existence of an unconditional Schauder basis of rescaling wavelets is known, and the norms in their spaces have equivalent definitions in terms of expansion coefficients in the wavelet basis (see [41]). Using these characterizations, [7] verifies condition (2.4), and thus cocompactness of the imbeddings whenever X is reflexive. Results of [7] contain several earlier wavelet-based cocompactness results, in particular, [28,29,23,33]. Some cases follow from the others via a simple transitivity argument: if X ֒→ Y , Y ֒→ Z and one of the imbeddings satisfies 2.4 (or is cocompact), then satisfies 2.4 (resp. is cocompact), or from the equivalence of different Y -convergences on bounded sets of X. The following imbeddings satisfy (2.4) (and are cocompact relative to rescalings whenever the domain is a reflexive space). Note that for m ∈ N, the spaceḞ m p,2 coincides with the Sobolev space D m,p (R N ), andḞ 0 q,2 coincides with L q , so the last imbedding includes the limiting Sobolev imbeddings. Profile decompositions for these imbeddings, which we discuss in Section 3, are also given in [7]. Cocompactness of trace imbeddings The following imbeddings are cocompact: Proof. The proof of (i) is repetitive of the argument in the paragraph 2.3.1 above, employing the cubic lattice neighborhood of the hyperplane, instead of the cubic tessellation of the whole R N . The proof of (ii) for p = 2 is given in Lemma 5.10 in [56], and extends trivially to general p > 1. Results in both cases easily extend to the case of hyperplanes of any dimension d > p. In both cases the corresponding L p -norms give metrization of D-weak convergence. We are not aware of further results in literature on cocompactness of trace imbeddings, but it is plausible that most trace imbeddings for Besov and Triebel-Lizorkin spaces are similarly cocompact, and the wavelet argument of [7] can be applied here as well. Cocompactness of Sobolev imbeddings on metric structures For the subcritical Sobolev imbeddings on manifolds, the actions of isometries play the same role in cocompactness as parallel translations in the Euclidean case. Let M be a complete Riemannian manifold and let I(M ) be its isometry group. The following result deals with cocompactness of "magnetic" Sobolev spaces, corresponding in appropriate cases to the Schrödinger operator with external magnetic field. Magnetic Sobolev space W 1,p α (M ), with a fixed α ∈ T * M , called magnetic potential, is the space of functions M → C with measurable weak derivatives, characterized by the finite norm The usual Sobolev space corresponds to the form α being exact (i.e. zero modulo gauge transformation). By the diamagnetic inequality |du + iαu| ≥ |d|u||, so one always has W 1,p α (M ) ֒→ W 1,p (M ). When α is not exact, assume in addition that M is simply connected. Then for each diffeomorphism η : An elementary computation shows that if η ∈ I(M ), then W 1,p α (M )-norm is preserved by the operators u → e iϕη u•η, called magnetic shifts (see [6]). In physics, the meaning of the relation dα • η = dα is that the magnetic field dα is periodic under isometries of M . (2.5) Proof. The proof of cocompactness and verification of metrization for p = 2 is given in [56], Lemma 9.4. The proof for general p is completely analogous. 
The argument is similar to that for Theorem 2.1, and involves existence of a covering for M of uniformly finite multiplicity by sets {ηV } η∈G ′ , with some set G ′ ⊂ G and some open set V ⊂ M . We give here an argument for reduction of the general (magnetic) case to the case α = 0. , then |u k • η k | converges to zero a.e. and, by the diamagnetic inequality, is bounded in W 1,p (M ). Then, from cocompactness of the "non-magnetic" imbedding Cocompactness of subcritical imbeddings relative to isometries extends to Sobolev spaces on metric structures other than Riemannian manifolds, in particular, to the Sobolev spaces of the Kohn Laplacian on Carnot groups ( [50]) and of blowups of a self-similar fractal of a class involving Sierpinski gasket, as in [55]; see also the paper [14] in the setting of axiomatic Sobolev spaces on general metric structures. Let now M be a simply connected nilpotent stratified Lie group (Carnot group) with a stratification T e M = m j=1 Y j . Let ν = m j=1 j dim Y j ≥ 3. One calls the diffeomorphism T s = exp e τ s exp −1 e of M an anisotropic dilation if τ s is given by τ s | Yj = s j , j = 1, . . . , m. Let v 1 , . . . v dim Y1 ∈ T e M be an orthonormal basis in Y 1 and consider the subelliptic Sobolev spaceḢ 1 (M ), characterized by the norm In particular, for the Heisenberg group H N , identified as the anisotropic dilations are T s (x, y, t) = (sx, sy, s 2 t), and the Sobolev norm is given by For further details on subelliptic Sobolev spaces we refer to ( [25], [26], as well a brief exposition in [56], Chapter 9. Theorem 2.8. LetḢ 1 (M ) be the subelliptic Sobolev space on the Carnot group M as above, let γ > 1 and let Then the imbeddingḢ 1 This result is proved in [56], Lemma 9.14, for p = 2 (extension to p = 2 is elementary). Earlier results can be found in [50] for the subcritical case and the general Carnot group, and in [11] for the case of Heisenberg group). Like in the Euclidean case, many applications to subelliptic PDE on Lie groups could be handled with the help of the Willem-Chabrowski version of concentration compactness (e.g. in papers of Garofalo et al) or with Struwe's "global compactness" ( [52], see Theorem 3.1 in the book [24] of Hebey, Druet and Robert), which is a realization of a possible more general profile decomposition for manifolds, for the case of Palais-Smale sequences with dilations expressed via the exponential map. Cocompactness of the Moser-Trudinger imbedding The counterpart of Sobolev imbedding of D 1,p (R N ) in the borderline case p = N , is the imbedding defined by the Moser-Trudinger inequality (Yudovich,[58], independently rediscovered by Pohozhaev, Peetre and Trudinger, and with the optimal exponent proved by Moser [42]). Moser-Trudinger inequality is stated for bounded domains and it is false for R N . There reason for that is that the gradient norm ∇u N on C 0 (R N ) does not dominate any linear functional ϕ, u with ϕ ∈ D ′ (R N ) \ {0}, i.e. the space D 1,N (R N ) defined as a formal completion of C ∞ 0 (R N ) in the gradient norm, is not continuously imbedded even into the space of distributions. Another way to express this is that there exists a Cauchy sequence representing zero of the completion, which converges to 1 uniformly on every compact set. A counterpart of the Moser-Trudinger inequality for R N , proved by Li and Ruf [34], involves the full Sobolev norm rather than the gradient norm, and it also avoids lower powers of u which have poor integrability at infinity. 
In what follows we will use the Sobolev norm u 1,N equal to the standard Sobolev norm if the domain Ω ⊂ R N is unbounded and to the equivalent norm of W 1,N 0 (Ω), ∇u N , if Ω is bounded. The inequality that expresses both the Moser-Trudinger inequality and the Li-Ruf inequality, is sup For bounded domains one may equivalently use e t instead of exp N (t), since the subtracted polynomial also has a bounded integral. The imbedding expressed by the Moser-Trudinger inequality is In the spaces of radial functions cocompactness of imbeddings is established in the cases when Ω is a disk (without loss of generality we consider here the unit disk B) or R N (if Ω is an annulus or an exterior disk, the imbedding is compact for elementary reasons). The result below is due to [2]: Furthermore, a local metrization of the Dweak convergence is provided by the norm sup 0<r<1 One can easily derive from here cocompactness in W 1,N rad (R N ) by considering a sequence u k − u k (1) on B, bounded in H 1 0 (B), and noting that the restriction of a radial sequence u k ⇀ 0 in W 1,N (R N ) to the complement of B has a uniform bound and converges to zero in L N N ′ . 6) where z expresses coordinates of B as a complex variable. A (quasi-)metrization of the D-weak convergence is acheived by the quasinorm sup 0<r<1 u ⋆ (r) (log 1 r ) 1/2 , where u ⋆ denotes the symmetric decreasing rearrangement of u. Cocompactness of Strichartz imbeddings Consider a Strichartz imbedding that estimates the space-time L q -norm of the solution of the evolutionary Schrödinger equation by the L 2 -norm of the initial data (for details see [17,32]). The following result is due to Terence Tao, [53], and the version of the proof in [32] optimizes the group. From the presentation of [32] we could easily infer that cocompactness remains valid if one restricts dilations to a discrete group. In the theorem belowû denotes the Fourier transform. is cocompact in L 2 with respect to the product group of the following transformations (γ > 1 is a fixed number): Translations in the Fourier domain allows to consider functions with support on a cube in the Fourier domain, for which convergence in L 2 implies convergence in C m (R N ) for any m ∈ N. The main technical point in the proof is the "reassembly" of the inequality from the "cells of compactness" using methods of harmonic analysis. Profile decompositions Given an imbedding of a Banach space X into a Banach space Y and a bounded set D of bounded linear bijections X → X, a profile decomposition is a representation of a bounded sequence {u k } ⊂ X in the form where the terms g ⇀ 0. Convergence of the remainder in X should not be generally expected, as can be illustrated on the following example. Let X = ℓ p (Z) and let u k (j) = 1/k 1/p for j = 1, . . . , k taking zero values for all other j. If D is the set of shifts u → u(· − j), j ∈ Z, then u k has no (nonzero) profiles under any shift sequence, so r k = u k and r k p = 1. At the same time r k q → 0 for any q > p. Many profile decompositions found in literature (typically in papers using the wavelet argument, starting with [28]) are stated with a weaker remainder, namely, in the form In many significant cases, however, such as Palais-Smale sequences for elliptic problems, one can establish a lower bound on some norm of the concentration profiles, which assures that only finitely many profiles are non-zero, in which case the weak remainder r with M equal to the number of nonzero profiles, is the same as the strong remainder. 
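For the reader's convenience, the two forms of decomposition referred to here can be written out explicitly; the following is a standard formulation and our paraphrase of (3.1)-(3.3), not a verbatim quotation, with profiles w^{(n)}, gauges g_k^{(n)} ∈ D, and M ∈ N:

\[
u_k=\sum_{n\in\mathbb{N}} g_k^{(n)} w^{(n)} + r_k,
\qquad r_k \overset{D}{\rightharpoonup} 0
\;(\text{hence } \|r_k\|_Y\to 0 \text{ when } X\hookrightarrow Y \text{ is } D\text{-cocompact}),
\tag{3.1}
\]
\[
u_k=\sum_{n=1}^{M} g_k^{(n)} w^{(n)} + r_k^{(M)},
\qquad \lim_{M\to\infty}\limsup_{k\to\infty}\bigl\|r_k^{(M)}\bigr\|_Y = 0,
\tag{3.2}
\]
\[
\bigl(g_k^{(n)}\bigr)^{-1} g_k^{(m)} u \rightharpoonup 0
\quad\text{for every } u\in X \text{ whenever } n\neq m
\qquad(\text{asymptotic decoupling}),
\tag{3.3}
\]
with the profiles obtained, along the extracted subsequence, as w^{(n)} = w-lim (g_k^{(n)})^{-1} u_k.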
As it was established in [45], profile decompositions hold whenever X is a Hilbert space under a general condition on the set D. A similar result for Banach space is also expected to be true, under some, rather weak but yet unknown general conditions on X. As a temporary fix we give here a profile decomposition in the case when X is continuously imbedded into L q (M ), where M is a measure space. The abstract profile decomposition gives a remainder r k that converges to zero D-weakly. Cocompactness is defined as the property of X, Y, D that Dweak convergence in X implies convergence in the norm of Y . It is verified (although not under that name and often expressed in different terms) and employed also in the proofs of profile decompositions for specific functional spaces, such as profile decompositions in the cited papers of Solimini and Bahouri, Cohen and Koch, (the latter verifies the property named Assumption 1 which, as we shown above, implies cocompactness). It is not clear yet, when there are profile decompositions with the weak, but not with the strong, remainder, but we expect that this may occur only under some special conditions, such as absence of some strong convexity. The abstract profile decompositions were not yet considered for non-reflexive spaces. For the sake of simplicity we here consider the case when the set D consists of isometries on X, as this is the case most often studied in applications. A profile decomposition with quasi-isometric operators in the Hilbert space is given in [56], Theorem 3.1. Let us define the general class of the set of isometric operators that yields general profile decompositions. As usual, pointwise (or strong) operator convergence in X, g k s → g, is convergence g k x → gx in X for any x ∈ X. Definition 3.1. A set D of surjective linear isometries on a Banach space X, closed with respect to the pointwise operator convergence, is called a dislocation (or a gauge) set if any sequence g −1 k with g k ∈ D, that does not converge weakly to zero, has a pointwise convergent subsequence. All operator sets in the cocompactness results of the previous section are known to be dislocation sets (see [56] for Euclidean rescalings, anisotropic rescalings on Carnot groups, actions of isometries on locally compact manifolds, and magnetic shifts, see [2] and [4] for inhomogeneous dilations in the Moser-Trudinger settings, and [32] for shifts in the Fourier variable.) Banach space case There is no generalization of Theorem 3.2 for the general Banach space, but there is one ( [21]) for functional spaces cocompactly imbedded into L p (M, µ) where (M, µ) is a measure space. Profile decompositions, that are not particular cases of Theorem 3.2 or Theorem 3.3 below, are summarized, to a great extent, in [7], which is dedicated to the case of Euclidean rescalings on functions of R N . Theorem 3.3. Let M be a measure space and let D be a set of dislocations on a reflexive Banach space X continuously imbedded into L p (M ) with some p > 1, and assume that the operators in D are isometric in L p and that the imbedding X ֒→ L p is D-cocompact. let u k be a bounded sequence in X. There is a renumbered subsequence of u k and sequences (g and the series in the last expression converges in L p uniformly with respect to k. If X is dense in L p , then L p provides a metrization of the D-weak convergence. 
As a corollary of this, with D 1,p (R N ) ֒→ L pN N −p (R N ) cocompactly relative to rescalings u → 2 rj u(2 j (· − y)), y ∈ R N , j ∈ Z, r = N −p p , one has the profile decomposition of Solimini [49]. The orthogonality condition in Solimini's profile decomposition is (3.3). Cocompactness is proved in [49] for the imbedding into L pN N −p ,q (R N ), p < q ≤ ∞, but on the bounded subsets of D 1,p (R N ) convergence in all these quasinorms is equivalent to that in L pN N −p (R N ). Remark 3.4. In Section 9.9 of [56] Solimini's profile decomposition is generalized to Sobolev spaces D 1,2 of Carnot groups. The proof of cocompactness there can be trivially extended, with suitable parameter changes, to D 1,p with general p, and then Solimini's profile decomposition for Carnot groups with anisotropic rescalings follows for any p ∈ (1, ∞) from from Theorem 3.3. Wavelet bases and profile decompositions Profile decompositions for a cocompact imbedding of functional spaces of R N , relative to rescalings, are established in [7] (following a number of earlier results surveyed there, starting with Gérard's paper [28]) in the weak remainder form (3.2) and with the asymptotic decoupling condition (3.3). Additional information about the profile decomposition is given there in form of stability estimates, namely ℓ p -bounds on the sequence { w (n) X } n∈N in the cases of Besov and Triebel-Lizorkin spaces, which indicates a possibility of a stronger remainder. There is a recent work with an expressed objective to replace the weak remainder (3.2) in the profile decomposition with a strong remainder, Palatucci & Pisante [44], which accomplished this task for the profile decomposition of [28], that is, for imbedding of the Bessel potential spacesḢ s (R N ), s > 0. Profile decomposition of [7] follows from the following assumptions on the function spaces X ֒→ Y of R N . 1. The norms of X and Y are invariant with respect to shifts and to dilations t r u(t·) with some r ∈ R; 2. There exists a function ψ ∈ X such that the set Dψ is an unconditional Schauder basis on both X and Y where D is the set (2.3); 3. The imbedding X ֒→ Y satisfies condition (2.4) (which, as we above, implies cocompactness of the imbedding whenever X is reflexive); 4. There is a C > 0 such that for any sequence u k bounded in X, with expansion u k = g∈D c k (g)gψ, whose coefficients c k (g) converge, for each g ∈ D, to respective finite limits c(g) as k → ∞, the series g∈D c(g)gψ converges in X with g∈D c(g)gψ X ≤ C lim inf u k X . Conditions above (and thus the profile decomposition (3.2)) are verified in [7] for all imbeddings listed in Theorem 2.5 above. Concentration in time-evolution problems Profile decompositions for sequences of initial data (bounded in respective Sobolev norms) of dispersive evolution equations, such as nonlinear Schrödinger or wave equation, give rise to profile decomposition of finite energy solutions with these data. This type of concentration analysis is usually called energy-critical and involves the usual group of rescalings (see [30,31,27,54]), and, for a survey, [32]). While for a linear equation construction a "time-evolved" profile decomposition in the energy space of initial data is straightforward, further technical effort is unvolved in extending such decomposition to solutions of equations with a non-linear term of critical growth. Below we quote a representative result from [8]. 
On the other hand, profile decomposition for sequences of data bounded in the Lebesgue norm, usually called mass-critical, involve a larger group of gauges. A profile decomposition by Terence Tao, based on the cocompactness of Strichartz imbedding in Theorem 2.12, is derived directly from Theorem 3.2). We refer to [32]) for further details. ) denote the solution (whose existence and uniqueness were established by Shatah and Struwe [46,47]) of the equation x , satisfying the initial condition u(x, 0) = ϕ k ⇀ ϕ, ∂ t u(x, 0) = ψ k ⇀ ψ, and let u be an analogous solution with the initial data ϕ,ψ. Then, for a renumbered subsequence, there exist sequences t Moreover, the elementary concentrations in the profile decomposition are asymptotically decoupled in the sense that In addition, the energy functional (the quadratic portion of ||| · |||) is additive with regard to the terms in the decomposition. k → ∞, and w (n) ∈ W 1,N 0,rad (B) n ∈ N, such that for a renumbered subsequence, and the series n∈N s Remark 3.7. An immediate generalization of this result to the radial subspace of W 1,N (R N ) can be stated in form of the decomposition above for u k − u k (1) ∈ H 1 0 (B) with vanishing of u k (r) for r > 1. Such decomposition is presented for N = 2 (with an additional assumption on the sequence and the weak remainder) in [9], which is very similar to [2] and is mentioned here only on the merit of details on vanishing of a sequence u k ⇀ 0 in H 1 0,rad for r > 1. A strongly vanishing remainder is, nonetheless, immediate. Without the assumption of radiality, profile decompositions for the Trudinger-Moser case are known when N = 2. The result for the bounded domain is contained in [4]. We quote it below with a trivially refined (in view of Corollary 3.2, [56]) energy estimate. Note that the concentration profiles are always radial, even when the original sequence u k is not. Indeed, when the concentration profiles are defined as weak limits of j −1/2 k u k (z j k with j k → ∞, it is obvious that they will be symmetric with respect to discrete rotations by an arbitrarily small angle, i. e. radially symmetric. ∈ Ω, and w (n) ∈ W 1,N 0 (B) n ∈ N, such that for a renumbered subsequence, (with the latter convergence equivalent, for bounded sequences in H 1 0 to convergence in exp L 2 ), and the series n∈N j Recently, Bahouri, Majdoub and Masmoudi [10] announced the following result, partly extending Theorem 3.8 to Ω = R 2 . Then, there exists a sequence of asymptotically orthogonal rescalings s (n) k , y (n) , and a sequence of profiles w (n) such that, up to a subsequence extraction, for all ℓ ≥ 1, the following asymptotic relation holds true: In addition to inhomogeneous dilations u → j −1/2 u(z j ), j ∈ Z of the unit disk, the gradient norm ∇u 2 on the unit disk is preserved by Möbius transformations, u(z) → u( z−ζ 1−ζz ), ζ ∈ B. This has a geometric meaning of isometries of the hyperbolic plane represented by the Poincaré disk coordinates, where ∇u 2 2 represents the quadratic form of the Laplace-Beltrami operator. This puts concentration compactness relative to such "translations" into the framework of Sobolev spaces on periodic manifolds, discussed in Section 2 (and based on Lemma 9.4, [56]), with the profile decomposition given by Theorem 3.2. 
Note only that application of Lemma 9.4, [56], gives cocompactness of the imbedding of (1−r 2 ) 2 is the Riemannian measure of the hyperbolic plane in the disk coordinates, and that equivalent metrizations of D-weak convergence include not only L q norms, but any Orlicz norms ψL(B, µ) associated with the even convex functions ψ such that ψ(t)/t 2 → 0 when t → 0 and log ψ(t)/t 2 → 0 when t → ∞. See [3] for details. Cocompactness and profile decompositions by reduction Condition of invariance with respect to a non-compact group is of exceptional kind, and so it is important to consider the implications of cocompactness on spaces without such invariance. In many cases it is advantageous to see a Banach space X 0 as a subspace of a larger space X, that admits a profile decomposition. This profile decomposition, reduced to sequences in X 0 , may take a significantly simplified form, as we see from the examples below. Furthermore, there are situations when restriction of a cocompact imbedding to a subspace results in a compact imbedding. Example. Consider the limit Sobolev imbedding D 1,p (R N ) ֒→ L pN N −p (R N ) and let u k be a sequence bounded in the W 1,p (R N )-norm. For any such sequence the L pbound implies that t N −p p k u k (t·) ⇀ 0 whenever with t k → 0, and consequently, a renamed subsequence of u k can be written as a sum of translations w (n) (·−y (n) k ) plus a sum of dilations (at variable cores y k ) with t k → ∞, which, in turn vanishes in L q , q ∈ (p, pN N −p ), plus a remainder that vanishes in L pN N −p (and is still bounded in L p , and thus also vanishes in L q ). In other words, by considering W 1,p as a subspace of D 1,p , we derived a profile decomposition in W 1,p , with translations as only gauges and with a remainder vanishing in L q , q ∈ (p, pN N −p ). Transitivity and interpolation Cocompactness of imbeddings is often possible to infer by means of the following elementary arguments. A. Let D be a set of isometries on three nested Banach spaces: X ֒→ Y and Y ֒→ Z, and one of the two imbeddings is cocompact. Let g k u k ⇀ 0 in X for any g k ∈ D. If the first imbedding is cocompact, then u k → 0 in Y and thus u k → 0 in Z. If the second imbedding is cocompact, then, by continuity of the first imbedding, g k u k ⇀ 0 in Y and thus u k → 0 in Z. B. Let X ֒→ Y 0 , X ֒→ Y 1 , and assume that the first imbedding is cocompact. If the convergence in Y 0 and Y 1 , for sequences bounded in X, is equivalent, then obviously, the second imbedding is cocompact as well. C. Let X ֒→ Y 0 , X ֒→ Y α , and assume that the first imbedding is cocompact. In general, one would also expect that, given a common set of gauges, interpolation of two imbeddings, X 0 ֒→ Y 0 , and X 1 ֒→ Y 1 , results in a cocompact imbedding, if one of this imbeddings is cocompact. We refer to [20] where a more specific statement is proved, under additional conditions, for functional spaces of R N , which is then applied to verify that subcritical imbeddings of Besov spaces are cocompact with respect to lattice shifts D = {u → u(· − y)} y∈Z N . This result can be also deduced from cocompactness of homogeneous Besov spaces mentioned in Section 2, by means of the reduction method below. Example. If Ω ⊂ R N is a bounded domain, one obtains a profile decomposition for the imbedding W 1,p 0 (Ω) ֒→ L pN N −p (Ω) by reducing Solimini's decomposition as follows. By Friedrichs inequality, the sequence u k has a L p -bound, which and eliminates all profiles subjected to unbounded deflations. 
Furthermore, there are no concentrations with t k → ∞ at cores lying outside of Ω, since the corresponding weak limits will necessarily be equal zero. For the same reason, if t k if bounded (equivalently, with t k = 1), there are no concentration terms with translations by unbounded sequences y k . As a result, every bounded sequence in W 1,p 0 (Ω) has a subsequence consisting of a countable sum of local concentrations t N −p p k w(t k (· − y)), t k → ∞, and a remainder vanishing in L pN N −p . This local concentration can be easily transferred, using the exponential map, to compact Riemannian manifolds, giving rise to profile decompositions, which were introduced, under the name of global compactness by Struwe [52] for Palais-Smale sequences, (see Theorem 3.1 in [24]), although, allowing infinitely many terms and arbitrary profiles, they can be easily re-established for general sequences. Compactness as reduced cocompactness Reduction to subspaces may in some cases eliminate all concentrations, which makes a restriction of the cocompact imbedding to a subspace a compact imbedding. In this case the argument does not involve profile decompositions and uses only cocompactness. We give here two examples: subspaces defined by restriction of support (compactness of Sobolev imbeddings on domains thin at infinity) and subspaces defined by a compact symmetry. The following statement is simpler and more general than its partial counterparts in [56], and we bring it with a proof. Proposition 4.1. Let M be a complete Riemannian N -manifold, periodic relative to some subgroup G of its isometries. Let X be a reflexive Banach space cocompactly imbedded into L q (M ) for some 1 < q < ∞, and assume that u • η q = u q for all η ∈ G. Assume that the imbedding X ֒→ L q (M ) is cocompact relative to the action of G. Let Ω ⊂ M be an open set such that the measure of the set lim inf η k Ω, η k ∈ G, is zero whenever η k x 0 has no convergent subsequence for some x 0 ∈ M . If X(Ω) is the subspace of X consisting of functions that equal zero a.e. on M \ Ω, then the imbedding X(Ω) ֒→ L q (Ω) is compact. Proof. Let u k ∈ X(Ω) be a bounded sequence, and assume without loss of generality, that u k ⇀ 0. By assumption, if η k x 0 has no convergent subsequence, then any weakly convergent subsequence of u k • η k converges weakly to a function, supported, as a L q -function, on a set of measure zero, i.e., by the imbedding, to the zero element of X. Assume now that, on a renumbered subsequence, η k → η. Then w − lim u k • η k = w − lim u k • η = 0 in X. Since this is true for any convergent subsequence, this is true for the original η k . We conclude then that in any case u k • η k ⇀ 0, and thus, by cocompactness of the imbedding, u k → 0 in L q (M ). The following theorem includes as the particular cases, the Strauss lemma [51] as well as its generalization to subspaces with the block-radial symmetry. Theorem 4.2. (Compactness under coersive symmetries, [48]) Let M be a complete Riemannian N -manifold with a transitive group G of isometries. Let X be a Banach space cocompactly imbedded into L q for some q ∈ (1, ∞), relative to the actions of G. Let Ω ⊂ G be a compact subgroup and let X Ω be a subspace of X invariant with respect to actions of Ω. Flask spaces Profile decomposition can be naturally extended to a class of subspaces that are not gauge-invariant. Del Pino and Felmer [22] have discovered them as Sobolev spaces of "flask domains" in R N . On the functional-analytic level flask spaces were defined in [56]. 
Definition 4.3. Let D be a set of surjective isometric operators on a Banach space X. A subspace X 0 ⊂ X is called a D-flask subspace if the set of sequential weak limits w − lim(DX 0 ) of sequences {g k u k , g k ∈ D, u k ∈ X 0 } k∈N remains in DX 0 . This property does not hold, in particular, if X ֒→ L q (R N ), D includes dilations, and X 0 X contains a continuous function, or if D includes all translations and for any R > 0 the subspace X 0 X, contains a function f R positive on some ball of radius R. On the other hand if X(Ω) = W 1,p 0 (Ω) where Ω = (−1, 1) × R N −1 ∪ R N −1 × (−1, 1), an infinite cross, and D is a group of shifts by Z N , then any nonzero translated weak limit of a sequence from W 1,p 0 (Ω) will be supported on a domain that is a translate of Ω, or a translate of (−1, 1) × R N −1 , or a translate of R N −1 × (−1, 1), which in any case is a subset of Ω. We have an immediate statement which is an elementary generalization of Proposition 3.5 of [56]. Theorem 4.4. Assume that D be a group of surjective isometric operators on a Banach space X, such that the profile decomposition (3.1) holds true. If X 0 ⊂ X is a D-flask subspace, then any bounded sequence in X 0 has a profile decomposition (3.1) with profiles w (n) ∈ X 0 . Proof. Since X 0 is a D-flask space, for each n ∈ N there exists g n ∈ D such that w (n) = g n w (n) ∈ X 0 . The profile decomposition (3.1) can be rewritten then withw (n) replaced with w (n) and g The following sufficient condition for a D-flask set is a slightly generalized version of Remark 9.1(d) in [56]. Theorem 4.5. Let M be a complete Riemannian manifold, and let G be a subgroup of its isometries. Let X be a reflexive Banach space continuously imbedded into L q (M ), 1 < q < ∞, let D = {u → u • η} η∈G and assume that u • η X = u X for all η ∈ G. Let X(Ω), Ω ⊂ M, be a subspace of all functions in X that vanish a.e. in M \ Ω. If for any sequence η k ∈ M there exist η ∈ G and a set of zero measure Z ⊂ M , such that lim inf η k Ω ⊂ ηΩ ∪ Z, then X(Ω) is a D-flask set.
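To make the verification of the condition in Theorem 4.5 concrete, the infinite-cross example discussed above can be written out as follows. This is only a sketch formalizing the prose argument; the case analysis is indicative and measure-zero discrepancies are ignored.

```latex
% Flask property of the infinite cross under integer shifts (sketch):
\[
  \Omega=\bigl((-1,1)\times\mathbb{R}^{N-1}\bigr)\cup\bigl(\mathbb{R}^{N-1}\times(-1,1)\bigr),
  \qquad D=\{u\mapsto u(\cdot-y)\}_{y\in\mathbb{Z}^N}.
\]
% For any y_k in Z^N, up to a set of measure zero, liminf_k (Omega + y_k)
% is either null or contained in one of
\[
  \Omega+y,\qquad \bigl((-1,1)\times\mathbb{R}^{N-1}\bigr)+y,\qquad
  \bigl(\mathbb{R}^{N-1}\times(-1,1)\bigr)+y, \qquad y\in\mathbb{Z}^N,
\]
% and each of these sets is contained in the translate \Omega+y=\eta\Omega, \eta\in D,
% since both slabs are subsets of \Omega. By Theorem 4.5, W^{1,p}_0(\Omega) is a D-flask subspace.
```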
Feasibility of cardiopulmonary exercise testing in interstitial lung disease: the PETFIB study Introduction Cardiopulmonary exercise testing (CPET) provides a series of biomarkers, such as peak oxygen uptake, which could assess the development of disease status in interstitial lung disease (ILD). However, despite use in research and clinical settings, the feasibility of CPET in this patient group has yet to be established. Methods Twenty-six patients with ILD (19 male) were recruited to this study. Following screening for contraindications to maximal exercise, participants underwent an incremental CPET to volitional exhaustion. Feasibility of CPET was assessed by the implementation, practicality, acceptability and demand, thus providing clinical-driven and patient-driven information on this testing procedure. Results Of the 26 recruited participants, 24 successfully completed at least one CPET, with 67/78 prospective tests being completed. Contraindications included hypertension, low resting oxygen saturation and recent pulmonary embolism. Of the CPETs undertaken, 63% successfully reached volitional exhaustion, with 31% being terminated early by clinicians due to excessive desaturation. Quantitative and qualitative feedback from participants revealed a positive experience of CPET and desire for it to be included as a future monitoring tool. Conclusion CPET is feasible in patients with ILD. Identification of common clinical contraindications, and understanding of patient perspectives will allow for effective design of future studies utilising CPET as a monitoring procedure. INTRODUCTION Interstitial lung disease (ILD) is the collective term for a series of pulmonary disorders characterised by inflammation, interstitial and alveolar damage, and often irreversible declines in lung function, with idiopathic pulmonary fibrosis (IPF) being the most common subtype of ILD, affecting ~32 500 people and accounting for 1% of all deaths in the UK. 1 Traditionally, measures of pulmonary function, including forced vital capacity (FVC) and the diffusing capacity for carbon monoxide (DL CO ), have been used to monitor disease progression and evaluate the efficacy of treatments. Both variables are predictive of mortality 2 and provide greater predictive power for survival over 6 months than histopathological factors alone. 3 However, peak oxygen uptake (VO 2peak ), the primary outcome from cardiopulmonary exercise testing (CPET), is also associated with mortality 4 and is therefore an important variable to consider alongside traditional resting spirometry. For CPET to be integrated into clinical practice, it must be shown to be a feasible procedure and well tolerated by patients undergoing the test-an important consideration given its exhaustive nature. Therefore, this study (Exploring the potential of Cardio Pulmonary Exercise Testing as a biomarker in patients diagnosed with Fibrosing Lung Disease) sought to assess the feasibility of CPET, notably the implementation, practicality, acceptability and demand, in a cohort of patients with ILD. Open access CPETs in a 6-month period, with 3 months separating each test. The inclusion criteria for this study were as follows: (1) clinical diagnosis of fibrotic lung disease as determined by the Royal Devon and Exeter ILD Team; (2) 40-85 years. of age; (3) FVC >40%; (4) DL CO >25%; (5) willing and able to provide informed consent. 
Exclusion criteria included: (1) unable/unwilling to provide informed consent; (2) left or right ventricular ejection fraction <50%; more than mild valvular heart disease; lack of available chest CT images; (3) significant repolarisation abnormalities or arrhythmias identified by resting 12-lead ECG or untoward ECG changes and/or symptoms of ischemia during previous baseline testing of CPET; (4) significant neurological impairment (anything that prevents patients from cycling); (5) poorly controlled (symptomatic) asthma, or recent exacerbation of asthma (requiring hospitalisation or medical therapy) within the preceding 4 weeks; (6) severe cardiovascular comorbidity or other medical conditions that could contribute to dyspnoea, (7) forced expiratory volume in 1 s/FVC (FEV 1 /FVC) ratio <65%; (8) daytime oxygen therapy; (9) contraindications to exercise testing. All participants provided written and informed consent on recruitment to the study. Physiological measures Stature and body mass were assessed using standard methods, with body mass index (BMI) subsequently calculated. Body fat percentage was assessed using air displacement plethysmography (BodPod; COSMED, Rome, Italy), with subsequent values for fat mass and fatfree mass (FFM) calculated. Measures of FEV 1 , FVC and DL CO were retrospectively extracted from pulmonary function test (PFT) data from each participant's medical records at the date closest to their CPET. Data are presented as absolute values and as a per cent of a predicted value for age, sex, and stature. In addition, composite 'Gender-Age-Physiology' scores, 5 were also calculated for each participant. Cardiopulmonary exercise testing Participants performed a CPET on an electronically braked cycle ergometer (Lode Excalibur; Lode, Groningen, the Netherlands), undertaking an incremental protocol as per existing international guidelines. 6 Participants performed 3 min unloaded cycling (0 W) as a warm-up, before an incremental ramp phase whereby resistance increased by 10 W/min. Participants maintained a cadence between 60 and 80 revolutions per minute (rpm) until volitional exhaustion, defined as a decrease in cadence >10 rpm for five consecutive seconds despite verbal encouragement from research staff. On exhaustion, participants returned to unloaded cycling at 0 W for a further 3 min to cool down. On cessation of unloaded cycling, participants recovered in a seated position off of the cycle ergometer for approximately 10 min. Once recovered, and with permission from the attending doctor, participants were free to leave. Throughout CPET, pulmonary gas exchange was recorded using a metabolic cart (Medgraphics Ultima; Medical Graphics UK Ltd., Gloucester, UK). Data were measured breath-by-breath and analysed in 10 s averages. Normative values 7 were utilised to present VO 2peak and peak work rate (WR peak ) as a percentage of predicted. Participant safety Prior to CPET, all participants were clinically screened for contraindications to maximal exercise (eg, hypertension). Furthermore, all participants wore a 12-lead ECG (Welch Allyn CardioPerfect; Hillrom, Chicago, USA) and pulse oximeter (Choice MMed MD300C2; ChoiceMMed, Dusseldorf, Germany), to monitor cardiac changes and peripheral capillary oxygen saturation (SpO 2 ), respectively. All CPETs were supervised by an exercise physiologist and medical doctor, and the CPET was terminated if either ECG (eg, arrhythmia) or SpO 2 responses warranted early cessation for patient safety. 
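To illustrate the gas-exchange reduction described above (breath-by-breath data averaged into 10 s bins, with the peak average then expressed against a predicted value), a minimal sketch is given below. All numbers, including the predicted VO2peak, are invented for illustration and do not come from the study.

```python
import numpy as np

def vo2peak_from_breath_by_breath(times_s, vo2_l_min, bin_s=10.0):
    """Average breath-by-breath VO2 into fixed time bins and return the highest bin mean."""
    times_s = np.asarray(times_s, dtype=float)
    vo2_l_min = np.asarray(vo2_l_min, dtype=float)
    bins = np.floor(times_s / bin_s).astype(int)
    bin_means = [vo2_l_min[bins == b].mean() for b in np.unique(bins)]
    return max(bin_means)

# Hypothetical example: 12 minutes of breaths at ~3 s intervals with a rising VO2.
t = np.arange(0, 720, 3.0)
vo2 = 0.3 + 0.0015 * t + np.random.default_rng(1).normal(0, 0.03, t.size)
vo2peak = vo2peak_from_breath_by_breath(t, vo2)      # L/min, highest 10 s average
percent_predicted = 100 * vo2peak / 2.1              # 2.1 L/min is a made-up predicted value
print(f"VO2peak = {vo2peak:.2f} L/min ({percent_predicted:.0f}% predicted)")
```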
In the first round of CPETs, the SpO2 limit was conservatively set at <88%, and extended to <80% in the second and third CPETs. This latter cut-off aligns with international guidelines, 6 and hypoxaemia was also shown to be well tolerated in the first CPET; any adverse symptoms (should they have occurred) also provided clinicians with reasons for CPET termination in addition to desaturation.

Feasibility
Feasibility was assessed using existing guidelines, 8 predominantly by focusing on 'implementation' (degree, and success/failure, of execution of CPET), 'practicality' (ability of participants to perform CPET, with a focus on safety), 'acceptability' (perceived appropriateness of CPET) and 'demand' (expressed interest in use of CPET). Each of these components was measured in a different way:
1. Implementation of CPET was assessed by (A) identifying reasons, and their number, as to why participants did not undertake CPET, and (B) identifying reasons, and their number, for clinician-led termination of CPET.
2. Practicality was assessed by characterising the number of excessive ECG and SpO2 changes during CPET.
3. Acceptability and demand were established by identifying participant opinions on satisfaction with, and suitability of, CPET for future use. This was undertaken as part of an evaluation of the wider study (full questionnaire provided in online supplemental file 1) and was completed using two processes.

Statistical analyses
Baseline anthropometric, pulmonary and clinical data were compared between (1) males and females, and (2) participants on antifibrotic medication and those who were not. This was undertaken using independent samples t-tests to infer any homogeneity (or heterogeneity) in the sample. Effect sizes, using existing thresholds, 9 were also utilised to infer trivial (<0.2), small (0.2 to <0.5), medium (0.5 to <0.8) and large (≥0.8) differences between groups. Paired samples t-tests identified changes in SpO2 within each CPET. Pearson's correlations were utilised to establish relationships between parameters of fitness and nadir and change in SpO2, as well as between SpO2 values at the start and end of CPETs. Magnitudes of coefficients were described as small (0.1 to <0.3), medium (0.3 to <0.5) and large (≥0.5). 9 All analyses were undertaken using SPSS V.26 (IBM), and p<0.05 was considered significant.

Patient and public involvement
There was no patient or public involvement (PPI) in the design of this feasibility project; however, a questionnaire (online supplemental file 1) was used to assess experiences of participation within the trial, as mentioned in the 'Feasibility' section above.

Participants
Participant characteristics are listed in table 1. Significant differences were observed between males and females for stature, absolute FEV1 and FVC, body fat percentage and FFM. Participants on antifibrotic medication had both a lower body mass and a lower BMI than those not on antifibrotic medication. However, no differences were observed between groups for pulmonary function when normalised to per cent predicted (table 1). At baseline, mean (±SD) parameters of fitness for the n=24 participants who successfully completed at least one CPET were as follows for VO2peak: 1.32±0.40 L/min; 16 The mean time difference between CPET and PFTs across the course of the study was 33±95 days (0.09±0.26 years), indicating that, on average, PFTs and CPET were separated by 1 month.
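The group comparisons and correlations reported above rely on the effect-size and coefficient thresholds listed in the statistical analyses. A minimal sketch of how such descriptors can be assigned is given below; the data are invented, and Cohen's d with a pooled standard deviation is assumed, since the exact effect-size formulation is not specified beyond the cited thresholds.

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation (assumed formulation)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt(((a.size - 1) * a.var(ddof=1) + (b.size - 1) * b.var(ddof=1))
                        / (a.size + b.size - 2))
    return (a.mean() - b.mean()) / pooled_sd

def describe_effect(d):
    d = abs(d)
    return "trivial" if d < 0.2 else "small" if d < 0.5 else "medium" if d < 0.8 else "large"

def describe_r(r):
    r = abs(r)  # values below 0.1 are left unclassified by the cited thresholds
    return "<small" if r < 0.1 else "small" if r < 0.3 else "medium" if r < 0.5 else "large"

# Invented example: compare a variable between two groups and correlate two measures.
males, females = [1.6, 1.5, 1.8, 1.7], [1.2, 1.3, 1.1, 1.4]
t, p = stats.ttest_ind(males, females)
d = cohens_d(males, females)
r, p_r = stats.pearsonr([92, 90, 88, 95, 85], [84, 80, 79, 88, 76])
print(f"t={t:.2f}, p={p:.3f}, d={d:.2f} ({describe_effect(d)}); r={r:.2f} ({describe_r(r)})")
```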
Feasibility: implementation
Two participants were excluded from performing CPET at their baseline visit by the attending clinician (hypertension, n=1; resting SpO2 <90%, n=1), resulting in n=24 undertaking at least one CPET. At the second visit, two further CPETs were not undertaken, owing to voluntary withdrawal of a participant (citing a lack of energy; n=1) and a participant experiencing a pulmonary embolism within the 4 weeks prior to the study visit (n=1). At the third visit, three CPETs were not undertaken, owing to the aforementioned participant withdrawal (n=1), significant exertional desaturation (SpO2 <80%) (n=1), and the death of one participant between study visits (n=1). These exclusions resulted in 67 of 78 possible CPETs from the 26 recruited participants being completed (figure 1), an 86% completion rate. The majority of CPETs undertaken were satisfactorily completed, with participants successfully reaching volitional exhaustion (n=42, 63%); the remainder were terminated early by clinical staff before exhaustion was reached. Reasons for clinical cessation included excessive desaturation (n=21, 31%), right bundle branch block (n=1, 1%) and a poor ECG trace leading to a precautionary termination (n=1, 1%).

Feasibility: practicality
Desaturation during exercise occurred in 63/67 CPETs, with n=3 CPETs showing no desaturation and n=1 showing an increase in SpO2 during the CPET (from 90% to 91%), as shown in figure 2. No relationship was evident between SpO2 at rest and nadir SpO2 at termination of the CPET (figure 2). Nadir SpO2 and changes (Δ) in SpO2 held small to medium correlations with markers of fitness, as shown by relationships with WRpeak and VO2peak (table 2), when all n=67 CPETs were pooled for analysis. No significant correlations were found when analysed by individual CPET. In addition to the aforementioned right bundle branch block leading to CPET termination, ECG readings during the course of CPETs also revealed atrial fibrillation (n=1); possible atrial fibrillation (n=1); poor R-wave progression (n=1); T-wave inversion (n=1); asymptomatic widening of the QRS complex accompanied by T-wave inversion (n=1); and asymptomatic ventricular bigeminy (n=2), although these were not a cause for immediate CPET cessation. Referrals for 24-hour ECG monitoring were subsequently made by the attending clinical staff. During exercise, all participants were able to maintain the pedalling rate as instructed: the majority of individuals self-selected a cadence of 60-70 rpm, while two participants selected a cadence >70 rpm. Postexercise, one participant reported dizziness, although this ceased after a 5 min period and did not recur in subsequent CPETs. No other postexercise complications were reported.

Feasibility: acceptability and demand
A total of n=19 participants completed the post-trial evaluation. Respondents rated their involvement in the study highly and responded positively to the questions aimed at evaluating CPET. Responses for each question (mean±SD, range) were as follows: Q1 (1.5±0.6, 1-3), Q2 (6.7±0.5, 6-7), Q3 (1.7±1.3, 1-6). Qualitative reflections on the three questions are provided in table 3. Furthermore, in response to the semi-structured interview, participants reflected on their perspectives on the CPET and in relation to other testing modalities.
Broadly, CPET was viewed on positively: I felt quite able and capable of doing it -the results will show but I was able to exert as much as I could and ► "I was fine -it depends what level you're at" ► "I think I could do more than I did from the lung point of view -my legs gave out first" ► "It helps me to know how I feel" ► "The whole purpose of this exercise is for people who have a weakness in their body system" 'Based on my experience in this trial, I think cardiopulmonary exercise testing is feasible for lung disease patients' ► "I think there are certain people who wouldn't be able to manage it, although carers can hold patients back with their views about what the patient can do" ► "Oh yeah, and I think it's an interesting thing to watch -to know what's happening to your heart. Sometimes I've been thinking is it my heart or is it my lungs when I've been feeling really poorly" ► "I think it is essential and it should become compulsory" 'The idea of using exercise testing to develop individualised exercise programmes for patients does not appeal to me' ► "I may not want to adhere to an exercise programme" ► "It gives me confidence. I've leapt at the chance to do the pulmonary rehab here! (The physical therapist) here described it and it sounded exactly what I need to get my confidence back to do stuff -they said it's OK to get out of breath, whereas you think you can't because it's the PF (pulmonary fibrosis). How you exercise safely is a paradox for me. It's feeling that you're not allowed to with PF" ► "Apart from playing bowls and gardening; I'm not likely to start playing football again!" ► "I know it's good for you, but you have to motivate yourself to do it" ► "It could provide an immediate answer without the punishment of going through medicines" Open access as long as I could -there must've been a parameter in which I was performing fairly well or they wouldn't have allowed me to go on so long I think the bike test is really good because it gives you so much information For the most part, enjoyable! It obviously gets harder, but you're allowed to stop so that's alright There were also some negative comments related to the testing procedures: The bike test was OK -I was a little bit disappointed that they had to stop it [because of a right bundle block] [the CPET] Very good -except the seat -that bicycle seat is most uncomfortable Compared with other testing modalities, CPET was preferred to shuttle walks: I prefer this definitely -it's a tougher examination of your ability to move yourself and breathe. It's a more accurate examination of your ability, more detailed. The shuttle walk test didn't push me I prefer the exercise bike test -the level of monitoring is much more detailed than a six min walk test I don't think they compare really because the shuttle walk test was very easy -it didn't feel like a test really The walk test is a nonsense -the bicycle test you are measuring everything, stamina, heart rate, the whole response, oxygen test, you're doing everything. The walk test -you can choose how fast you walk -I could have gone on walking for a long time and it was up to me to choose the pace I did the six minute walk test a lot in the trials -that always went OK. I loved doing the bike! 
CPET was also preferred to spirometry: Compared with spirometry, it's easier -in [another hospital] I did two sorts -the one where you breathe in the mixture of gases and breathe out, one where you hold your breath out very quickly -but I thought the bike was better than that Finally, spirometry was viewed on negatively by some: [the spirometry] depends on who you've got taking it and you know what's going to happen and it's very hard to hold your breath when you've got stuff blowing down the back of your throat; I don't think it's a very good indication of your health. I don't like it at all. The static lung function tests are very daunting and unpleasant to undergo Patient and public involvement While there was no PPI in the initial design of this study, general feedback from involvement in the trial revealed a desire from patients to be involved in future research and therefore a new patient-driven, research steering group was established (Exeter Patients in Collaboration for Pulmonary Fibrosis Research) in conjunction with the Royal Devon and Exeter National Health Service Foundation Trust and the College of Medicine & Health, University of Exeter. This group will be utilised to codesign trials following the outcome of this feasibility study. DISCUSSION This study aimed to assess the feasibility, namely the implementation, practicality, acceptability and demand, of CPET in patients with ILD. The results have shown CPET can be feasibly undertaken in individuals with ILD, and is widely accepted by patients, therefore highlighting its prospective use as an alternative biomarker in this condition. The feasibility of CPET was assessed with regards to clinically-driven, as well as patient-reported outcomes. First, evaluation of the implementation and practicality provides an objective assessment of whether this test could be used in a routine clinical setting, rather than a research-only environment. A successful 86% completion rate was achieved, and of the 11 tests not undertaken, six were due to immediate exclusion of participants at baseline due to contraindications, with a further two CPETs fitting this category from subsequent visits. The remaining three were accounted for by patient death and withdrawal from the study itself. These exclusions align with established absolute and relative contraindications to maximal exercise, 6 although the authors are not aware of any previous studies to characterise the contraindications to exercise in ILD. Co-morbidities, including atrial fibrillation (which was also identified in our present cohort) have been reported in a previous study to use CPET, although this appears to be from a descriptive, rather than exclusionary perspective. 10 This study also stated, unlike further CPET-based studies, 4 11 12 that exercise was stopped by a clinician if necessary, although there is no further elaboration on any reasons if this occurred. Therefore, the present study is unique and advances our understanding in characterising clinical factors responsible for exclusion from, or cessation of, CPET in patients with ILD. Within the current CPETs, the majority of participants desaturated, although only 31% to such an excessive extent that exercise had to be terminated, with the magnitude of desaturation in line with previous studies (eg, 87.7%±5.7% 11 ; 90%±6%. 
13 Given that international recommendations propose exercise is terminated if SpO 2 <80%, 6 it is reassuring that our present results conform with previous studies and international guidelines, and that our patient group appears to be exercising safely within accepted norms, only prematurely stopping CPET Open access in one-third of cases. In our sample, we noted two participants that presented with a normal pattern of desaturation, prior to a rapid drop in SpO 2 . For these two participants, CPET was terminated when SpO 2 reached 80%, although this value continued to drop for ~5 s, resulting in two abnormally low nadir SpO 2 values as seen in figure 2. This unexpected drop in SpO 2 values was rare (ie, 2/67 CPETs), but it is important for clinicians to be aware of this potential risk. Future trials could consider the use of supplemental oxygen during CPET to offset this risk of desaturation (provided any equipment is technically compatible); although such hyperoxic conditions may affect the implementations of tests, and interpretation of results. 14 Previous research has also suggested the degree to which patients desaturate during exercise testing is associated with baseline SpO 2 , 15 although this does not appear to be supported in the present study as baseline and end-exercise SpO 2 were not significantly correlated, even at the final CPET (figure 2), when disease severity may have progressed over the intervening period of time (up to 65 weeks for some participants). Moreover, desaturation and markers of fitness (VO 2peak , WR peak ) held only small to medium correlations (table 2) -even when pooling all CPETs for increased power -indicating a level of homogeneity in the desaturation response to exercise in ILD. Thus, it is possible that disease presentation and severity has little effect on the risk of desaturation during exercise, and that alternative, non-disease related, mechanisms may be responsible and worthy of further investigation. The prognostic value of CPET has been established previously, with VO 2peak 4 11 and V E /VCO 2 12 being predictive of mortality in patients with IPF; and CPET is reported to be reproducible in restrictive lung disease. 16 Therefore, this highlights the need to consider physiological measures, and not solely rely on radiological outcomes when monitoring fibrotic interstitial diseases and their subsequent change over time. 3 17 Furthermore, as exercise intolerance in ILD is multifactorial, 18 use of CPET can be utilised to ascertain causes of intolerance, as well as informing personalised approaches towards pulmonary rehabilitation and exercise regimens in patients. 19 A personalised approach to exercise training was received with a mixed response in our cohort, with the mean score suggesting participants agreed with the principle of personalised regimens, although the qualitative responses provided contrasting views. Previous interviews of individuals with IPF show that patients feel exercise could benefit them physically and mentally, while proposing a preference for group-exercise, 20 aligning with some of the perspectives put forward in the present study. Furthermore, current pulmonary rehabilitation guidelines state that personalisation is warranted to optimise such programmes. 21 Therefore, CPET can be utilised to inform these processes, as the testing process itself has also been shown to be widely accepted in the present study. 
Furthermore, a common theme that emerged was in relation to preference of CPET over alternative exercise tests, as well as spirometry, for which there appeared to be a dislike among this cohort of participants. While PFT are well established processes for detecting and characterising changes in disease status, 22 and remain the gold-standard outcome measure in respiratory medicine, previous research reported that patients present with anxiety in relation to such tests, 23 and find it difficult to translate such test results in relation to future exercise and activity ability. 24 While the authors do not advocate for the removal, nor replacement of spirometry from clinical practice, a case can certainly be made for CPET to exist as an adjunct clinical measure alongside traditional PFT, for the benefit of clinicians and patients alike. There are both strengths and limitations to be discussed with this study. Our study provides real-life data on how CPET is tolerated in this patient group, and thus provides clinicians with valuable insight into how to ingrain this modality of testing into services, and what contraindications and responses to anticipate. Moreover, the combination of quantitative and qualitative assessments not only empower the patient population and their voice within research and routine clinical assessment; but the description of physiological changes and logistical challenges associated with CPET, will prove exceptionally useful to the wider ILD community. In contrast, we acknowledge that a sample size of 26 individuals can be interpreted as a limitation. This group is relatively homogeneous with regards to pulmonary function (as shown by lack of differences between sexes and antifibrotic usage in table 1) and can be considered mild in nature, with a sample that is composed predominantly of males with IPF. Therefore, there is a possibility this sample may not truly reflect wider patients responses to, and acceptability towards, CPET. However, previous exercise-based feasibility studies in ILD have recruited similar (or fewer) patient numbers, 25 26 and thus our study is in line with such similarly designed studies. Moreover, given that IPF is the most common progressive ILD, and is more common in males, 27 28 we conclude our sample is broadly reflective of the wider ILD population. CONCLUSION In conclusion, this study has shown CPET to be feasible within a clinical setting in terms of implementation and practicality, identifying reasons (and their number) for excluding patients from CPET, or stopping an exercise bout prematurely. Furthermore, CPET is acceptable by the intended user group (those with ILD). Therefore, this testing procedure should be considered for future use as an additional biomarker to evaluate prognosis and response to treatments in this patient population. Disclaimer The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. Competing interests MG has received support to attend conferences and professional fees from Roche and Boehringer-Ingelheim. Patient consent for publication Not required. Ethics approval Ethics approval for this study was granted by the Health Research Authority (IRAS #220189) following review by the South West (Frenchay) Research Ethics Committee (17/SW/0059). Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement Data are available on reasonable request. Please contact the corresponding author (CS). 
Neuron-Derived Extracellular Vesicles Modulate Microglia Activation and Function Simple Summary In this study we investigated how neuron-derived extracellular vesicles (NDEVs) mediate neuroimmune regulation in primary cell culture systems. Rat cortical neurons released EVs that improved microglial survival and inhibited the expression of activation markers on microglia. Furthermore, NDEVs reduced the LPS-induced proinflammatory response and promoted an anti-inflammatory response. Thus, neurons critically regulate microglia activity and control inflammation via EV-mediated neuron–glia communication. Abstract Microglia act as the immune cells of the central nervous system (CNS). They play an important role in maintaining brain homeostasis but also in mediating neuroimmune responses to insult. The interactions between neurons and microglia represent a key process for neuroimmune regulation and subsequent effects on CNS integrity. However, the molecular mechanisms of neuron-glia communication in regulating microglia function are not fully understood. One recently described means of this intercellular communication is via nano-sized extracellular vesicles (EVs) that transfer a large diversity of molecules between neurons and microglia, such as proteins, lipids, and nucleic acids. To determine the effects of neuron-derived EVs (NDEVs) on microglia, NDEVs were isolated from the culture supernatant of rat cortical neurons. When NDEVs were added to primary cultured rat microglia, we found significantly improved microglia viability via inhibition of apoptosis. Additionally, application of NDEVs to cultured microglia also inhibited the expression of activation surface markers on microglia. Furthermore, NDEVs reduced the LPS-induced proinflammatory response in microglia according to reduced gene expression of proinflammatory cytokines (TNF-α, IL-6, MCP-1) and iNOS, but increased expression of the anti-inflammatory cytokine, IL-10. These findings support that neurons critically regulate microglia activity and control inflammation via EV-mediated neuron–glia communication. (Supported by R21AA025563 and R01AA025591). Introduction Microglia, one of three types of glial cells found in the CNS, though of myeloid origin [1], act as the brain's primary immune cells. As such, microglia play varying roles in development versus damage, infection, aging, or neurodegenerative diseases [2]. Microglia display a variety of functional states in the healthy brain but especially with neuropathology. For example, under resting conditions, microglia exhibit a ramified morphology allowing for active surveillance of their environment [3,4]. Upon homeostatic disturbances, microglia adopt reactive profiles, which range across a spectrum from classical activation (proinflammatory, M1-like) to alternative activation (anti-inflammatory, M2-like) phenotypes. Proinflammatory activated "M1-like" microglia produce cytokines, chemokines and Cells All procedures were in accordance with the Guide for the Care and Use of Laboratory Animals and were approved by the University of Kentucky Institutional Animal Care and Use Committee prior to the start of experiments. Primary cortical neurons were prepared and cultured from cortex of embryonic day 18-19 rat embryos as described previously with modification [21]. Briefly, cortices were dissected out and incubated with 0.25% trypsin in Hank's Balanced Salt Solution (Thermo Fisher, Waltham, MA, USA) for 15 min at 37 • C, followed by mechanical dissociation. 
Single-cell suspensions were plated onto T75 flasks coated with poly-D-lysine (50 µg/mL, Millipore Sigma, St. Louis, MO, USA) at a density of 5 × 10 5 cells/cm 2 in Neurobasal medium containing B27 supplement (1×, Thermo Fisher) and penicillin-streptomycin (1×, Thermo Fisher). Cells were incubated at 37 • C and 5% CO 2 in a humidified incubator. Microglia cultures were prepared as described previously [22]. Briefly, cortices were obtained from postnatal day 2-3 rat pups, stripped of meninges, dissociated with a pipette, and passed through a 100 µm cell strainer. The cell suspension was seeded into T75 tissue culture flasks (one rat pup brain per flask) and maintained in Dulbecco's Modified Eagle's Medium (Thermo Fisher, Waltham, MA, USA) supplemented with 10% Fetal Bovine Serum (Thermo Fisher) and 1% penicillin/streptomycin and grown as a mixed glia culture for 7-10 days. After mixed glia cultures were completely confluent, flasks were sealed with parafilm and shaken gently at 100 rpm for 1 h at 37 • C to detach microglia. Next, microglia in suspension were removed from mixed culture and pelleted at 400× g for 5 min at 4 • C. The purity of microglia was determined by immunocytochemical staining (Supplementary Figure S1). Results showed that over 99.9% of cells were Iba-1+ (microglia-specific marker), with less than 0.1% of cells Glial Fibrillary Acidic Protein (GFAP+; astrocyte-specific marker), and no NG2+ or MBP+ (oligodendrocyte-specific markers) cells were observed in the culture. Cells were then plated at a density of 2 × 10 5 cells per mL for further treatment. Extracellular Vesicle Purification Neuron-derived extracellular vesicles (NDEVs) were prepared from neuron-conditioned medium by differential ultracentrifugation as described previously [23]. Briefly, neuronconditioned medium was collected from neuronal cultures that were maintained for 5-7 days in vitro and subjected to serial differential centrifugations at 300× g for 10 min and 2000 × g for 20 min at 4 • C to remove dead cells and cell debris. Supernatants were then centrifuged at 10,000× g (Beckman XL 90 ultracentrifuge; 70 TI Rotor; k-factor, 44) for 30 min at 4 • C to pellet large EVs (L-EVs) [23]. The L-EV pellet was washed with phosphate buffered saline (PBS) and subjected to an additional centrifugation at 10,000× g for 30 min at 4 • C. Large EV pellets were then resuspended in PBS and stored at −80 • C in aliquots. Small EVs (S-EVs) remaining in the medium were then pelleted by ultracentrifugation at 100,000× g (Beckman XL 90 ultracentrifuge; 70 TI Rotor; k-factor, 44) for 70 min at 4 • C. The S-EV pellet was washed with PBS and then subjected to another centrifugation at 100,000× g (Beckman XL 90 ultracentrifuge; 70 TI Rotor; k-factor, 44) for 70 min at 4 • C. S-EVs were resuspended in PBS and stored at −80 • C in aliquots or proteins were extracted with RIPA buffer (Thermo Fisher) for further analysis by Western blotting. EV concentration and size distribution were measured using multiple particle tracking with a Nanosight NS300 (Malvern Panalytical, Malvern, UK) equipped with a 488 nm laser. Multiple tracking analysis measures the diffusion time of individual nanoparticles to determine the size and concentration. All samples were measured at least 5 times for a duration of 60 s each with a minimum of 200 valid tracks per recording. Analysis was performed using Nanosight 3.4 software. Instrument calibration was verified using 100 nm polystyrene standard beads. 
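As a rough illustration of how repeat NTA captures can be summarised into a mean diameter, a modal ("peak") diameter and a mean particle concentration, a minimal sketch is given below. The histogram binning and all values are hypothetical, and this is not the Nanosight software's algorithm.

```python
import numpy as np

def summarise_nta(diameters_nm, concentrations_per_ml, bin_width_nm=10):
    """Summarise particle diameters (pooled across captures) and per-capture concentrations."""
    d = np.concatenate([np.asarray(x, float) for x in diameters_nm])
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    # Modal ("peak") diameter from a simple histogram of the pooled sizes.
    edges = np.arange(0, d.max() + bin_width_nm, bin_width_nm)
    counts, edges = np.histogram(d, bins=edges)
    mode_d = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])
    return mean_d, sd_d, mode_d, np.mean(concentrations_per_ml)

# Hypothetical captures: five 60 s recordings, each with >200 tracked particles.
rng = np.random.default_rng(0)
captures = [rng.lognormal(mean=np.log(110), sigma=0.45, size=300) for _ in range(5)]
conc = [6.2e8, 5.9e8, 6.5e8, 6.1e8, 6.0e8]   # particles/mL, invented
mean_d, sd_d, mode_d, mean_conc = summarise_nta(captures, conc)
print(f"size {mean_d:.1f} ± {sd_d:.1f} nm, peak ≈ {mode_d:.0f} nm, {mean_conc:.2e} particles/mL")
```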
EV protein concentration was measured via a bicinchoninic acid (BCA) protein assay kit (Pierce, Rockford, IL) following the manufacturer's instructions: (S-EVs: 0.800 ± 0.322 × 10 9 /mg; L-EVs: 0.291 ± 0.047 × 10 9 /mg). For in vitro microglia treatment experiments, EVs (0-10 µg/mL) were suspended in microglia culture medium and added into microglia culture (1.5 to 2 × 10 5 cells per well) for 24 h (based on pilot studies), followed by incubation with or without Lipopolysaccharide (Millipore Sigma) for 8 h for RNA or 24 h for protein. Microglia were collected 24-32 h following EV treatment for flow cytometric analysis or RNA extraction. Microglia supernatant was collected 48 h following EV treatment for cytokine ELISA. Western Blot Western blots were performed on EV protein or whole neuronal cell lysate extracted using RIPA buffer containing a protease inhibitor cocktail (Thermo Scientific, Rockford, IL). Quantification of the isolated protein was achieved using a BCA protein assay according to the manufacturer's instructions. A total of 10 µg of protein was boiled in 4× Laemmli Sample Buffer (Invitrogen) supplemented with 2% β-mercaptoethanol for 5 min before being loaded for electrophoresis on 10% polyacrylamide gels. The resolved proteins were then transferred onto nitrocellulose membranes (BioRad, Hercules, CA, USA) with Precision Plus Protein Dual Color Standards (Bio-Rad, #1610374) on the side well. Membranes were blocked in 5% skim milk powder in Tris-buffered saline (TBS), and blotted with primary antibodies (Table 1) overnight at 4 • C on a shaker. Membranes were then incubated with goat anti-rabbit or goat anti-mouse Ig G IR800 secondary antibody (1:20,000 dilution; Azure Biosystems, Inc., Dublin, CA, USA) for 1 h at room temperature and then visualized using Sapphire Biomolecular Imager (Azure Biosystems). Blots were quantified by ImageJ software (NIH, Bethesda, MD, USA). MTT Assay The MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay is used to measure cellular metabolic activity as an indicator of cell viability. Microglia were treated with different concentrations (0, 1, 5, 10 µg/mL) of S-EVs or L-EVs for 24 h. MTT was added to make up a final concentration of 0.5 mg/mL in medium and cells were incubated for 1 h at 37 • C in 5% CO 2 . Cells were dissolved in dimethyl sulphoxide (DMSO) and absorbance at 490/570 nm was determined in a plate reader (M5, Molecular Devices, Sunnyvale, CA, USA). LDH Assay Lactate dehydrogenase (LDH) is a cytosolic enzyme released into the cell culture media upon damage to the plasma membrane. Microglia were incubated with S-EVs for 24 h then LDH release into the medium was measured by a Pierce LDH cytotoxicity Assay Kit (Thermo Scientific) following exactly the manufacturer's instructions. Absorbance at 562 nm was measured in an M5 plate reader. Propidium Iodide Flow Cytometric Assay Propidium iodide (PI) flow cytometric assays are well-accepted methods for the evaluation of cell cycle and apoptosis [24]. Microglia were detached from culture wells by trypsin and fixed in 66% ethanol on ice. Cells were then incubated in PI (50 µg/mL) + RNase (10 µg/mL) at 37 • C in the dark for 30 min. Samples were run on an Attune Acoustic Focusing Cytometer (ABI, Carlsbad, CA) and PI fluorescence was collected in FL2 channel. DNA content was quantified in a histogram plot to delineate cells in G1 (2N), DNA synthesis (2N-4N), mitotic (4N), and apoptotic stages (<2N). 
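The read-outs of the MTT and PI assays described above reduce to simple calculations: a blank-corrected absorbance expressed as a fold of the untreated control, and the fraction of events falling below the 2N (G1) DNA peak. A minimal sketch with invented numbers and an assumed, purely illustrative sub-G1 gate is shown below.

```python
import numpy as np

def mtt_fold_change(treated_abs, control_abs, blank_abs=0.0):
    """Blank-corrected MTT absorbance expressed as fold of the untreated control."""
    return (np.mean(treated_abs) - blank_abs) / (np.mean(control_abs) - blank_abs)

def sub_g1_fraction(pi_intensity, g1_peak, tolerance=0.15):
    """Fraction of events with DNA content below the 2N (G1) peak, i.e. apoptotic (<2N)."""
    pi_intensity = np.asarray(pi_intensity, float)
    lower_gate = g1_peak * (1 - tolerance)      # illustrative gate, not the study's gating
    return np.mean(pi_intensity < lower_gate)

# Invented example data.
control = [0.21, 0.23, 0.22]
s_ev_10ug = [0.80, 0.78, 0.83]                  # absorbance at 490/570 nm, hypothetical
print(f"viability fold-change: {mtt_fold_change(s_ev_10ug, control, blank_abs=0.05):.2f}")

rng = np.random.default_rng(2)
pi = np.concatenate([rng.normal(200, 15, 700),   # 2N (G1) events
                     rng.normal(400, 30, 200),   # 4N events
                     rng.normal(120, 20, 100)])  # sub-G1 (apoptotic) events
print(f"apoptotic (<2N) fraction: {100 * sub_g1_fraction(pi, g1_peak=200):.1f}%")
```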
Microglia Staining and Flow Cytometry Microglia were scraped from culture wells and suspended in incubation buffer (50 µL; 1 × PBS + 0.1%BSA) for 30 min on ice. Cells were incubated with anti-CD32 (BD Bioscience, San Jose, CA) to block F c receptors on microglia for all assays except when used to assess CD32 immunoreactivity. Cells were then stained with fluorescent conjugated antibodies on ice for 30 min in order to assess microglia purity (mouse anti-rat CD11b-FITC, #554982, BD Bioscience, San Jose, CA; mouse anti-rat-CD45-APC, #17-0461-82, eBioscience, San Diego, CA, USA) and state of M1 activation (mouse anti-rat: MHC-II-PE #554929 and CD32-PE, #552189, BD Bioscience). For alternative/M2-like activation, cells were incubated in rabbit anti-rat CD206 (#ab64693, Abcam, Cambridge, MA) for 30 min followed by incubation with donkey anti-rabbit-PE secondary antibody (#12-4739-81, BD Bioscience) for 30 min. Cells were washed with PBS, fixed with IC fixation buffer (Invitrogen by ThermoFisher) and analyzed on an Attune Acoustic Focusing Cytometer (ThermoFisher). Prior to each run, the flow cytometer was calibrated with commercially available beads (ThermoFisher). Fluorescence spillover compensation values were then generated from both non-stained cell populations as well as single-color staining controls. Isotype controls were also utilized to exclude any the non-specific binding of the antibodies. For each sample, 1 × 10 4 events were collected. RNA Isolation and Real-Time PCR After EVs and/or LPS treatment, microglia were lysed with TRIZOL Reagent (Life Technologies, Carlsbad, CA, USA) and total RNA was extracted using a mirVana miRNA Isolation Kit (ThermoFisher, Waltham, MA, USA) according to the manufacturer's instructions. Realtime RT-PCR was performed with Assays-on-Demand primers [TNF-α (Rn00562055_m1), IL-6 (Rn01410330_m1), MCP-1 (Rn00580555_m1), iNOS (Rn00561646_m1), IL-10 (Rn01483988_g1), Applied Biosystems Inc.], using a one-step quantitative Real-time RT-PCR system (Ther-moFisher, Waltham, MA, USA). The housekeeping gene, glyceraldehyde-3-phosphate dehydrogenase (GAPDH, Rn01462661_g1) was used as an internal control. Data were analyzed by calculating the differences between the delta cycle values for the EV/LPS treatments and control conditions (double delta cycle analysis) as previously described [25]. Results were expressed as fold difference as compared to no EV treatment control. Enzyme-Linked Immunosorbent Assay (ELISA) ELISA for TNF-α and IL-6 were performed according to the manufacturer's instructions (DuoSet ELISA for TNF-α and IL-6, R&D Systems, Minneapolis, MN, USA). Briefly, 96-well plates were coated with capture antibodies for TNF-a or IL-6 in PBS overnight at room temperature (RT). Plates were blocked with 1% bovine serum albumin (BSA) in PBS for 2 h at RT, following which samples or standards were added and incubated for 2 h at RT or overnight at 4 • C. Adhering antigen was detected by incubation with biotin-conjugated detection antibody for 2 h at RT followed by horseradish peroxidaseconjugated streptavidin for 20 min. Then, 100 µL of Substrate Solution (1:1 mix of H 2 O 2 and Tetramethylbenzidine) were added to each well, followed by 50 µL of Stop Solution (2N H 2 SO 4 ). Optical density was determined using a microplate reader (BioTec, Winoosk, VT, USA) set to 450 nm and wavelength correction set to 540 nm. Statistical Analysis All of the resulting raw data were compiled in Excel and then graphed and analyzed in Prism (v7, GraphPad Software, San Diego, CA, USA). 
Unless stated otherwise, all values are reported as mean ± S.D. with n indicating the number of replicates. Flow cytometry data were compared via Student's t-test (for two groups) or one-way ANOVA with Holm-Sidak's posthoc test (all groups versus control) or Tukey's (3 groups, all pairwise comparisons) posthoc test. Data for real time RT-PCR were compared using two-way ANOVA for EV treatment and LPS as factors with Tukey's posthoc tests. Statistical significance was accepted at a p < 0.05. EVs Derived from Neurons Improve Microglia Survival To determine if NDEVs contribute to neuron-microglia intercellular communication, we obtained EVs through serial steps of ultracentrifugations from rat cortical neuronal cultures as described in Section 2. The size distribution of small or large EVs (S-EVs or L-EVs) was analyzed by nanoparticle tracking analysis (NTA, Figure 1A). NTA revealed that S-EVs secreted from neurons were 158.3 ± 75.9 nm in size, with a peak diameter of about 106 nm ( Figure 1A). The L-EV isolation was more heterogenous, with multiple peak diameters from 124 to 768 nm ( Figure 1A). We then examined the characteristic markers, including Alix, flotillin, TSG101, and HSC70 by Western blotting ( Figure 1B). The characteristic markers Alix, flotillin and TSG 101 were highly expressed in S-EVs, while HSC70 was expressed in both S-EVs and L-EVs. These results confirmed that the particles we extracted were EVs. Table 2. To determine the effect of NDEVs on microglia, primary microglia were treated with of different concentrations of S-EVs (0-10 µg/mL) for 24 h. S-EVs increased microglia cell viability in a dose-dependent manner as indicated by MTT assay [F(6,23) = 283.2; p < 0.0001] by 1.32 ± 0.12 fold for 1 µg/mL (p < 0.05), 3.5 ± 0.22 fold for 5 µg/mL (p < 0.0001), and 3.75 ± 0.30 fold for 10 µg/mL (p < 0.0001; Figure 2A). Importantly, this effect is S-EV specific as L-EVs isolated from neuronal culture do not increase microglial cell viability (Figure 2A). To determine if the S-EV-mediated increase of microglial cell viability is through an increase in cell survival, we determined extracellular LDH in the supernatant of microglia treated with or without neuronal S-EVs. Results showed that S-EVs reduced LDH release from microglia [F(2,7) = 548.7; p < 0.0001] with a 16% reduction for 1 µg/mL, (Holm-Sidak: p < 0.0001) and 25% reduction for 10 µg/mL (p < 0.0001; Figure 2B). To further confirm that the effect of neuronal S-EVs on microglia is through an increase in cell survival but not through inducing cell proliferation, we determined cell cycle status by measuring DNA content using Propidium iodide (PI) staining combined with flow cytometric analysis. PI flow cytometry has also been used widely for the evaluation of cell apoptosis [24]. PI flow cytometry showed neither S-EVs nor L-EVs change cell populations undergoing DNA synthesis (2N-4N), which suggests that they do not promote microglial cell proliferation. In addition, S-EV treatment reduced the apoptotic cell population, <2N, from 24.4 ± 6.0% to 2.9 ± 1.4% [F(2,7) = 32.37; p = 0.0003; Tukey's p = 0.0004], while L-EVs had little effect on the apoptotic cell population (22.2 ± 3.0%, Figure 2C,D). These results suggest that S-EVs protect microglial cells from apoptosis but do not increase microglial proliferation. EVs Derived from Neurons Impact the Activity of Microglia To determine the effect of neuronal EVs on microglia activity, primary microglia cultures were treated with NDEVs purified from rat cortical neuron culture. 
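For the dose-response comparisons reported above (one-way ANOVA followed by comparison of each EV concentration against control with a Holm-Sidak correction), a minimal sketch using standard Python statistics libraries is shown below. The absorbance values are invented and this only approximates the Prism workflow actually used.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Invented MTT read-outs (fold of control) for 0, 1, 5 and 10 ug/mL S-EVs.
groups = {
    "control": [1.00, 0.95, 1.05, 1.01],
    "1 ug/mL": [1.25, 1.40, 1.30, 1.33],
    "5 ug/mL": [3.40, 3.65, 3.55, 3.42],
    "10 ug/mL": [3.60, 3.90, 3.70, 3.80],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_anova:.2g}")

# Each treatment versus control, with a Holm-Sidak correction (approximating the posthoc used).
names = [k for k in groups if k != "control"]
raw_p = [stats.ttest_ind(groups[k], groups["control"]).pvalue for k in names]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="holm-sidak")
for name, p, sig in zip(names, p_adj, reject):
    print(f"{name} vs control: adjusted p = {p:.3g} ({'significant' if sig else 'ns'})")
```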
Twenty-four hours later, microglia were scraped from culture dishes and stained with microglia surface antigens. Fixed cells were then analyzed by flow cytometry. Cells were stained for CD11b (a component of complement receptor 3) to confirm cell purity ( Figure 3A). Consistent with immunocytochemical staining (Supplementary Figure S1), these isolated cells were highly enriched for microglia. CD11b is expressed constitutively by microglia and increases to a greater extent upon microglia activation. Results showed that S-EVs reduced mean fluorescence intensity (MFI) of CD11b by 21.8 ± 8.3% ( Figure 3B). Microglia activation states can be classified as either M1-like or M2-like based on changes in morphology and/or expression of phenotypic, cell surface antigens [26][27][28]. For example, phenotypic markers such as major histocompatibility complex (MHC) II and CD32 have been used to identify M1-like cells [29]. M2-like microglia, on the other hand, express CD206 (macrophage mannose receptor 1) on the cell membrane. Results showed that S-EV treatment reduced M1-like microglia as indicated by a decrease in the expression frequency of MHC-II + cells (9.7 ± 1.2% in controls vs. 2.8 ± 0.4% in S-EV treated cells, p < 0.001; Figure 3F) and CD32 + cells (21.9 ± 3.2% in controls vs. 10.7 ± 6.2% in S-EV treated cells, p = 0.018; Figure 3G). S-EV treatment also decreased the MFI of MHC-II by 35.9 ± 3.2% ( Figure 3C) and CD32 by 20.8 ± 11.7%. L-EVs, however, had little effect on the MFI of CD11b or CD32, or the expression frequency of CD32 (Supplemental Figure S2). We also observed a decrease of M2-like microglia as indicated by decrease of CD206 + cells: 11.3 ± 2.0% in controls vs. 5.0 ± 0.3% in S-EV treated cells (p < 0.001; Figure 3H), as well as a 37.6 ± 7.3% reduction of CD206 MFI ( Figure 3E). Neuronal EVs Suppress LPS-Induced Microglia Activation Lipopolysaccharide (LPS, the major component of the outer membrane of Gramnegative bacteria) activates microglia/macrophages and induces proinflammatory activation, which produces proinflammatory cytokines and inducible nitric oxide synthase (iNOS) [30]. Primary microglia were pre-incubated with S-EVs for 24 h and then treated with LPS (0.1-10 ng/mL) for 8 h. Total RNA were extracted, and the levels of mRNA encoding pro-inflammatory cytokine/chemokines (IL-6, TNF-α, and MCP-1), iNOS, and antiinflammatory cytokine (IL-10) were quantified by real-time RT-PCR. Results showed that LPS induced concentration-dependent increases of TNF-α, IL-6, MCP-1, IL-10, and iNOS expression indicated by main effects of LPS concentrations (Table 3). S-EV pre-treatment inhibited LPS-induced proinflammatory cytokines TNF-α and IL-6 ( Figure 4A,B), chemokine MCP-1 ( Figure 4C), and iNOS expression ( Figure 4D), but promoted anti-inflammatory cytokine, IL-10, expression in microglia as indicated by main effects of EV treatment and a significant interaction of LPS concentration and EV treatment (Table 3; Figure 4E). To determine if L-EVs similarly suppressed LPS-induced microglia activation as S-EVs, primary microglia were pre-incubated with S-EVs or L-EVs for 24 h and then treated with LPS (1 ng/mL) for 8 h. L-EV pre-treatment did not inhibit LPS-induced TNF-α, MCP-1, or iNOS expression, and did not promote IL-10 expression in microglia (Supplementary Figure S3). In addition, primary microglia were pre-incubated with S-EVs for 24 h and then treated with LPS for 24 h. Cytokines (TNF-α and IL-6) released into culture supernatant were determined by ELISA ( Figure 5). 
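The cytokine gene-expression results in the following section were obtained with the double delta cycle (ΔΔCt) analysis described in the Methods, with GAPDH as the internal control. A minimal sketch of that calculation is given below; the Ct values are invented for illustration only.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the double delta Ct method: 2^-(dCt_treated - dCt_control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated      # normalise to GAPDH
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Invented Ct values: a proinflammatory target gene and GAPDH, LPS-treated vs untreated.
fc_lps = fold_change_ddct(ct_target_treated=24.0, ct_ref_treated=18.0,
                          ct_target_control=28.5, ct_ref_control=18.2)
# Invented values for S-EV pre-treatment before LPS.
fc_ev_lps = fold_change_ddct(ct_target_treated=26.0, ct_ref_treated=18.1,
                             ct_target_control=28.5, ct_ref_control=18.2)
print(f"LPS alone: {fc_lps:.1f}-fold; S-EV + LPS: {fc_ev_lps:.1f}-fold over control")
```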
Results showed that S-EV pre-treatment inhibited LPS-induced proinflammatory cytokine TNF-α and IL-6 expression. These data suggest that neuronal EVs modulate innate immunity in the brain, dampening pathogenic M1 microglia, and point to a possible mediator for suppression of neuroinflammation. Table 3. * p < 0.05 versus respective control with LPS. Discussion The interactions between neurons and microglia represent a key process of neuroimmune regulation with potential implications for the regulation of CNS integrity in neurodegenerative and psychiatric disease [31][32][33][34]. Secretion of exosomes from cultured primary neurons has been observed previously [11,20]. In this study we demonstrated the potential role of neuron derived EVs as a means of intercellular signaling in neuronmicroglia communication. We isolated EVs from rat cortical neuronal culture and exposed microglia to these NDEVs. Here, we show that supernatants of these primary cortical culture contained small EVs of a composition and size typical of exosomes. S-EVs promoted microglia survival and inhibited microglia activation marker expression, both effects of which were S-EV specific as large EVs did not have the same effects on microglia. We also found that incubating microglia with S-EVs inhibited LPS-induced pro-inflammatory cytokine expression. These results indicate that S-EVs released by neurons regulate microglia reactivity and control LPS-induced proinflammatory microglia activation. Considering the importance of microglia reactivity in both physiological and pathological conditions, these results suggest a new pathway of microglia regulation. Our understanding of the role of exosomes as an important mechanism for intercellular communication in the CNS is just beginning to emerge [13,17,18,35]. Exosomes facilitate the transfer of information between cells through their release and shuttling of a cargo of various signaling proteins and coding and/or regulatory RNAs, that are then taken up by target cells. Exosomes, therefore, not only play critical roles in physiological processes, such as synaptic function, nerve regeneration, and neuronal development, but are also implicated in the pathogenesis of a variety of neurodegenerative disorders. For example, exosomes secreted from a variety of cell types have been shown to contain prions or beta-amyloid peptides, which suggests their role in the transmission of toxic proteins in neurodegenerative conditions [36,37]. In addition, they may also contribute to the neuroimmune activities through the shuttling of signaling molecules between neurons and glia [17,18,35]. Neuron-glia communication has been shown to play a critical role in the nervous system in both normal physiological as well as pathological conditions. There is increasing evidence to indicate that neurons are not merely victims of (over)activated microglia but rather control microglial function and activity [2,7]. For example, neurons constitutively express "Off" signals which are thought to keep microglia in a quiescent state. This process aids in maintaining tissue homeostasis, but also restricts pro-inflammatory microglia activity to prevent further damage to the brain [7]. Most of these effects are through the expression of signaling molecules on plasma membranes (CD200, CD47, etc.) or the secretion of soluble ligands (CX3CL1) [2]. 
Our work further demonstrated that neurons release EVs that may have significant roles in maintaining a homeostatic phenotype of microglia and regulating their activation beyond these mechanisms. Our results showed that extracellular particles with the characteristics of EVs (size distribution and characteristic marker expression) are involved in neuron-to-microglia communication and may deliver cargo from neurons to microglia, as evidenced by the functional changes of microglia (improved survival, maintenance of quiescence and inhibition of over-activation) after S-EV treatment. Thus, the results of this study demonstrate that constitutively produced NDEVs represent a new means of regulating microglia function. Indeed, NDEVs have been shown to elicit various physiological responses in target microglia. For example, more microglia survived in vitro if they had received small EVs, which suggests that NDEVs may play a protective role and increase microglia tolerance to stress. The roles of NDEVs in the control of microglial activation can be divided into two mechanisms: stabilizing microglia in their quiescent state by inhibiting activation mechanisms (as suggested by reduced activation marker expression) under normal conditions, and/or antagonizing LPS-induced proinflammatory activity. Microglial activation in the normal, healthy brain is constrained by "Off" signals that are constitutively expressed by neurons in the brain microenvironment [2,7]. Without these in vivo inhibitory signals, and with exposure to fetal bovine serum in the culture media, microglia in culture are a mixture of M1- or M2-like and non-activated cell populations, as indicated by expression of the M1 markers (MHC-II and CD32) and the M2 marker (CD206) (Figure 2). Here, we showed that small NDEVs inhibit activation markers expressed in microglia under normal culture conditions, which suggests that EVs may contribute to these "Off" signals in the normal brain microenvironment [2]. Under pathological insults, microglia respond as either neurotoxic or neuroprotective depending on the various signals in the microenvironment [2]. For example, LPS induces a neurotoxic microglia response through the release of proinflammatory cytokines and the induction of oxidative stress [30]. To further evaluate whether NDEVs regulate microglia activation under LPS-induced proinflammatory activation, microglia were exposed to S-EVs and stimulated with LPS. Our results showed that the gene expression pattern is modified in LPS-activated microglia that received S-EVs: LPS-induced pro-inflammatory cytokine/chemokine (IL-6, TNF-α, and MCP-1) and iNOS gene expression are inhibited, consistent with a recent study in spinal cord [12]. While mRNA expression may not always mirror protein expression, we confirmed by ELISA that S-EVs inhibit LPS-induced proinflammatory cytokine (TNF-α and IL-6) secretion (Figure 5). We also demonstrated that NDEVs increased gene expression of IL-10, a potent anti-inflammatory cytokine. IL-10 limits the host inflammatory response to pathogens, thus preventing excessive inflammation. Although it has been shown that IL-10 inhibits LPS-induced proinflammatory cytokine secretion [38], whether IL-10 contributes to NDEV-mediated inhibition of LPS-induced proinflammatory cytokine production will need further investigation.
In addition, we observed that S-EVs change microglia phenotypes from activated M1- or M2-like states to non-activated states under normal culture conditions; however, it remains unknown whether NDEVs inhibit LPS-induced proinflammatory cytokine and iNOS production through a change in microglia phenotype or through the regulation of specific genetic pathways. The regulation of microglia phenotype and cytokine production may occur through different mechanisms. A thorough RNA sequencing analysis of regulated genes will be helpful to extend this work in mechanistic directions. Subsequent studies in cytokine knockout models would then be important for distinguishing specific genetic pathway effects from phenotypic output. Thus, NDEVs dampen microglia immune reactivity induced by LPS and prevent the development of excessive and uncontrolled stimulation of microglia that may lead to secondary neuronal damage. The underlying mechanism behind the NDEV effect on microglia is not fully understood, but their various cargos provide clues to these effects. Neuronal EVs may express inhibitory signaling molecules on their membrane, and thus keep microglia inactivated. The expression of these signaling molecules in neuronal EVs and their roles in maintaining a homeostatic phenotype of microglia need further confirmation. In addition to protein cargo, miRNAs may also contribute to the effects of NDEVs on microglia. A recent study demonstrated that neuronal exosomes shuttle microRNA-124-3p to microglia and mediate the suppression of M1 microglia and A1 astrocyte activation after spinal cord injury [12]. Our preliminary miRNA sequencing analysis reveals that miRNAs known to regulate microglia/macrophage function, such as miR-125b, miR-9a, miR-let-7a, miR-let-7c, miR-30a, and miR-181c, are highly expressed in NDEVs [39] (unpublished observations; Peng et al., in preparation). Whether NDEVs shuttle these miRNAs to microglia and mediate the suppression of LPS-mediated microglia activation will need further investigation. The distinct functions of S-EVs and L-EVs on microglia may be attributed to their differential cargos. Proteomic analyses have indicated that exosomes are enriched with receptors and kinases that mediate signaling in immune regulation, whereas MVs are more implicated in protein translation [13,40]. Although current EV isolation techniques do not distinguish exosomes from MVs, a thorough analysis of the cargos (both protein and nucleic acid) of S-NDEVs and L-NDEVs will help to identify the components that contribute to their differential function on microglia. Future work combining next-generation RNA sequencing, proteomics, and bioinformatic analysis is needed to identify the specific RNAs and proteins present in NDEVs that mediate this effect of NDEVs on microglia function.

Conclusions

In summary, this study investigated a novel regulatory mechanism in neuron-to-microglia communication. These data provide new insight into EV-mediated regulation of microglia function and activation under pro-inflammatory conditions. The specific components in EVs that contributed to these effects are unclear, but neuronal EVs contain numerous signaling molecules, including proteins and RNAs, that play significant roles in neuron-to-microglia communication.
Ultimately, these results contribute to our understanding of the mechanisms of neuronal regulation of microglia activation, a phenomenon that has major implications for our understanding of neurodegenerative and psychiatric disease and for the development of new therapies.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/biology10100948/s1, Figure S1: Microglia purity, Figure S2: Effect of neuronal L-EVs on microglia activity, Figure S3: To determine if L-EVs had similar effects to S-EVs in suppressing LPS-induced microglia activation, primary microglia were pre-incubated with S-EVs or L-EVs for 24 h and then treated with LPS (1 ng/mL) for 8 h, Table S1: Primary antibodies.

Author Contributions: Conceptualization, methodology, validation, investigation, data curation, writing-original draft preparation, H.P.; formal analysis, resources, writing-review, editing and revising, project administration, funding acquisition, H.P. and K.N.; methodology, investigation, C.I.R. and B.T.H. All authors have read and agreed to the published version of the manuscript.
Experiences of childhood weight management among Norwegian fathers of children with overweight or obesity – a qualitative interview study

ABSTRACT

Objective: Paternal participation and experiences in childhood weight management are an understudied area. Given the important role fathers play in childhood obesity prevention and treatment, the aim of this study was to explore Norwegian fathers' experiences of helping to prevent further weight gain in their children with overweight or obesity.

Methods: Data were collected through semi-structured interviews with eight fathers of ten children with overweight or obesity and analysed by qualitative content analysis.

Results: The analysis resulted in one overall theme: Balancing between assuming and avoiding responsibility for weight management with a desire to preserve the child's dignity, comprising two themes: 1) Alternating between concern, helplessness and responsibility, 2) Needing acknowledgement, and flexible and tailored professional support, both of which have several sub-themes.

Conclusion: Fathers need guidance on how to talk to their children to prevent further weight gain, while at the same time emphasizing safeguarding the child's dignity. Healthcare professionals should address parents' own emotional barriers, include fathers to a greater extent as a resource in family-centred counselling, and tailor guidance and support to help with childhood weight management.

Introduction

The prevalence of overweight (ISO-BMI ≥25) or obesity (ISO-BMI ≥30) in children has nearly tripled since 1975. Worldwide, 39 million children under the age of five were afflicted by overweight or obesity in 2020, and over 340 million children and adolescents aged 5-19 were afflicted by overweight or obesity in 2016 (World Health Organization, 2021). In Norway, a total of 15-20% of children and 25% of adolescents are afflicted by overweight or obesity (The Norwegian Institute of Public Health, 2022). In low- and middle-income countries, BMI is still rising, while in many high-income countries rising BMIs have plateaued, albeit at high levels (Abarca-Gómez et al., 2017). Children with obesity tend to remain obese into adulthood (World Health Organization, 2016). According to the World Health Organization (WHO) (World Health Organization, 2021), obesity is preventable; however, psychosocial stressors and comorbidities may make behavioural change difficult (Gurnani et al., 2015). Children with obesity are at a high risk of multiple comorbidities, and childhood obesity is associated with a higher risk of poorer health, hypertension, insulin resistance, disability in adulthood and premature death (Kumar & Kelly, 2017; Sahoo et al., 2015; World Health Organization, 2021). Childhood obesity can affect children's social and emotional wellbeing and self-esteem, and is associated with poor academic performance and lower quality of life (Sahoo et al., 2015; Steinsbekk, 2012). Weight-based bullying among young people is considered a common and serious problem in many countries (Puhl et al., 2016). Weight stigmatization may mediate negative health outcomes, which in turn can be damaging to a child (Pont et al., 2017; Puhl & Latner, 2007; Sanyaolu et al., 2019). Many adolescents with obesity are socially marginalized (Strauss & Pollack, 2003) and experience bullying and fragile social relationships (Øen et al., 2018; Rankin et al., 2016).
Adolescents with obesity report significantly higher bodily dissatisfaction, social isolation, symptoms of depression and negative self-esteem than people of a normal weight (Goldfield et al., 2010). Stigma contributes to behaviours such as social isolation, avoiding healthcare services, binge eating, decreased physical activity, and increased weight gain, which worsens obesity and creates additional barriers to healthy behaviour change (Pont et al., 2017). Some children who are overweight might seek emotional comfort in food, and stress in children is associated with emotional eating and more unhealthy dietary patterns (Michels et al., 2012). Many children today are growing up in an obesogenic environment (World Health Organization, 2016). Increasing urbanization and technology developments have led to greater physical inactivity (World Health Organization, 2016), and energy-dense food, which is high in fat and sugars, is easily accessible. Obesity can be triggered by genetic, psychological, lifestyle, nutritional, hormonal and environmental factors. Environmental, genetic and societal factors have an impact on the development of overweight and obesity (Silventoinen et al., 2010; World Health Organization, 2016; World Health Organization, 2021). This also includes parenthood and the family environment (East et al., 2019; Eg et al., 2017; Halliday et al., 2014; Mazzeschi et al., 2013; Sigman-Grant et al., 2015). Children's eating and activity habits are influenced by their parents (Birch & Davison, 2001; Davison et al., 2018). Parents are responsible for their children, and for preventing overweight and obesity (Holm, 2008; Salemonsen et al., 2022; Wolfson et al., 2015). Research indicates that fathers play a key role in influencing children's eating behaviour. Fathers' dietary intake was a predictor of their children's dietary intake, and fathers' food-related parenting style predicted their children's eating behaviour (Litchford et al., 2020). A previous interview study found conflicting food-related parenting practices among 40% of the fathers involved, related to differences in parental eating habits, feeding philosophies and concern for the child's health. These differences often resulted in tantrums or a refusal to eat (Khandpur et al., 2016). A literature review shows differences in mothers' and fathers' feeding practices: fathers were generally less likely to monitor their children's food intake and to limit access to food compared to mothers (Khandpur et al., 2014). Fathers are an important and valued part of a child's life, and children whose fathers were highly involved in caregiving were less likely to be overweight (Sato et al., 2020). Despite the expanding role of fathers in raising their children, they are underrepresented, compared to mothers, in child obesity research (Davison et al., 2018; Morgan et al., 2017). Qualitative empirical research concerning the role of the father in preventing obesity in his children is marginal, both in Norway and worldwide. Qualitative research methods examining fathers' perceived roles and specific feeding strategies are required, and fathers should be routinely included in research on child feeding (Khandpur et al., 2014). Knowledge about fathers' experiences could be helpful for healthcare professionals (HPs) and public health nurses (PHNs) when they provide help and support to families and try to meet fathers' needs in childhood weight management.
Understanding caregivers' perceptions of, and the challenges they face in, tackling childhood overweight and obesity is critical to providing the healthcare services needed. No studies have been found that describe Norwegian fathers' weight management strategies or feeding practices, how Norwegian fathers experience helping to prevent further weight gain, or what they need to help their children. Therefore, the aim of this study was to explore Norwegian fathers' experiences of helping to prevent further weight gain in their children with overweight or obesity. The following research question was asked: What do Norwegian fathers experience while trying to help and prevent further weight gain in their children?

Design

To increase understanding of fathers' experiences of preventing further weight gain in their children with overweight or obesity, and how healthcare professionals may help, an explorative and interpretative design was chosen (Polit & Beck, 2021). Individual interviews were deliberately chosen to gain insight into the experiences of fathers who know "where the problem lies . . .", and purposive sampling (Malterud et al., 2016) was used to find fathers with experience from their care of children with overweight or obesity.

Setting

Child health clinics and school health services in primary healthcare in Norway offer free advice and support from PHNs and general practitioners (GPs) to children and adolescents (aged 0-20 years) and their parents (Helsenorge, 2022). National guidelines recommend that GPs and PHNs in these child health services in primary care act at both the individual and structural level to prevent the development of overweight and to help prevent and reduce obesity among children and adolescents (The Norwegian Directorate of Health, 2010). PHNs measure children's height and weight at given consultations from birth to the age of 14 and are ready to follow up children and adolescents who have been identified as having overweight or obesity. Parents, children or adolescents are free to contact these free child health services if they need support or advice to manage the child's weight.

Participants and recruitment

PHNs in 24 primary care child health services (local child health clinics or school health services) were asked to recruit fathers from families who had contacted them to obtain help for their children's or adolescents' weight, or from families where the family (either father, mother or both) were followed up by PHNs due to the child's weight. The inclusion criteria were fathers of children who were afflicted by overweight or obesity (ISO-BMI >25), and who were involved in the family's efforts to manage their child's weight excess. The participants in this study were recruited from two school health services and three local child health clinics in three small (rural) and medium-sized (urban) municipalities. Our sample included eight fathers of ten overweight children. The children's ages ranged from 4 to 16 years. All the fathers had at least one overweight child, and all but one had daily custody of their child and lived together with the child's mother (Table I).

Data collection

Data were collected through individual interviews using a semi-structured interview guide with follow-up questions. The form of the interviews was open, allowing for elaboration and follow-up of the fathers' statements.
The interview guide was developed to explore how the fathers experienced taking care of their child or adolescent with overweight or obesity in order to prevent further weight gain, how they managed in everyday life and what they needed in their efforts to prevent further weight gain (Table II). The first author contacted all the fathers who had consented to participate after the PHNs had obtained informed and written consent for this purpose. All the interviews took place at the family's local child health clinic based on the wishes and preferences of the fathers. After a short introduction, the purpose of the study was presented, and confidentiality was emphasized. The first author conducted eight individual interviews in Norwegian at three different local child health clinics. The interviews, which lasted between 80 and 90 minutes, were digitally recorded and transcribed verbatim by the first author. Information power, as discussed by Malterud et al. (Malterud et al., 2016), guided the sample size, taking into account the quality of the data, the narrow aim, access to participants and the nature of the topic. The empirical data provided detailed and rich descriptions from the participants' perspective.

Table II. Thematic guide for individual interviews.
What do fathers experience while trying to help and to prevent further weight gain in their children?
• How do you prefer to talk about your child's overweight and with whom?
• What is your understanding of your child's overweight?
• What do you perceive as difficult in childhood weight management?
• How do you talk to your child or adolescent about healthy diets, physical activity and weight?
• What do you need to help your child to prevent further weight gain?
• Have you been included in the local child health clinics?
• Who do you prefer to help you and your child?

Data analysis

Data were analysed using qualitative content analysis (Graneheim & Lundman, 2004). The interviews were read as open-mindedly as possible to gain an overall understanding of the whole text, as well as an understanding of each interview. Several aspects of the fathers' experiences of their efforts to prevent further weight gain in their children were identified. To handle the data in a systematic way as a coding process, a matrix was developed by the first author. The primary analysis, coding and categorization of the meaning units and preliminary themes were performed by the first author. Similar codes were grouped into categories and later sorted and abstracted into sub-themes. All authors discussed the identified codes, sub-themes and themes and made a tentative interpretation, aiming to remain as close as possible to the meaning of the text. We discussed and agreed on interpreting the sub-themes and themes into one overall theme at a higher level of abstraction and interpretation (Graneheim et al., 2017; Lindgren et al., 2020). This abstraction can be seen as a presentation of the whole text. A number of quotations will be used to demonstrate the essence of the themes.

Trustworthiness and rigor

To ensure the trustworthiness of this study, the criteria of credibility, dependability, confirmability and transferability, as put forward by Lincoln and Guba (Lincoln & Guba, 1985), were used. Credibility was strengthened through consistency in the research process, between the aim of the study, the participants, data collection and analysis. Direct quotations from the participants' perspective demonstrated credibility in the interpretation of the data.
Dependability deals with the stability and reliability of the data based on the potential for replication, and was demonstrated through descriptions of the research process, including context, participants, data collection and analysis. Confirmability was strengthened by discussing the meaning units, sub-themes and themes between the authors on several occasions to find the most appropriate interpretation. Transferability of this study's findings to another setting or group of participants was demonstrated through descriptions of the study context and participants, so that the reader can consider the relevance of the findings to other contexts.

Ethics

This study was approved by the Norwegian Centre for Research Data (NSD), project number 35008, and adheres to the requirements and ethical guidelines in the Helsinki Declaration. Informed consent was obtained from each father prior to the interviews. Participation in the study was voluntary and the participants were informed about their right to withdraw at any time, without this compromising their future healthcare. We were aware of the potential reactions from the fathers when discussing this sensitive topic, as parents of children with excess weight are often blamed and shamed for their children's weight (Gorlick et al., 2021). The first author, who conducted the interviews, was familiar with working with families and children afflicted by overweight or obesity. Before the individual interviews were held, precautions were taken by reflecting on how to take care of the participants if the interview situation became unpleasant or challenging. The interview setting was well-prepared beforehand, and emphasis was placed on creating a respectful, empathic and non-judgemental atmosphere.

Results

The analysis yielded one overall theme: Balancing between assuming and avoiding responsibility for weight management with a desire to preserve the child's dignity, comprising two themes: 1) Alternating between concern, helplessness and responsibility, 2) Needing acknowledgement, and flexible and tailored professional support, both of which have several sub-themes (Table III).

Balancing between assuming and avoiding responsibility for weight management with a desire to preserve the child's dignity

The analysis revealed that the fathers experienced substantial challenges in everyday life in their efforts to prevent further weight gain in their children. The fathers tried to balance between assuming and avoiding responsibility for weight management with a desire to preserve their children's dignity. This overall theme is based on the fathers' concern about their child's psychosocial health, including self-esteem and stigma, their high sensitivity, and their efforts to avoid offending the child. They found it difficult to talk about the overweight and dietary restrictions with their child, fearing that they could inflict shame or guilt, or even contribute to their child's eating disorder. The fathers felt that they were alternating between concern, helplessness and responsibility in an effort to prevent further weight gain, and that emotional barriers prevented them from adequate management. The fathers described a lack of arenas for conversation outside the family and expressed a need to be more included and acknowledged by health professionals. They called for greater flexibility and access to counselling and professional support to manage their responsibility.
These findings are further elaborated below. All quotations are assigned anonymously.

Table III. Overall theme, themes and sub-themes describing Norwegian fathers' experiences of preventing further weight gain while caring for their child with overweight or obesity.
Overall theme: Balancing between assuming and avoiding responsibility for weight management with a desire to preserve the child's dignity
Theme 1: Alternating between concern, helplessness and responsibility. Sub-themes: Recognising their own food preferences and eating habits in their children; Struggling with their own feelings, shortcomings and discomfort; Worrying about the consequences on psychosocial health; A desire to help and protect.
Theme 2: Needing acknowledgement, and flexible and tailored professional support. Sub-themes: Lack of arenas for conversation; A need for inclusion and acknowledgement; Calling for flexibility and access to counselling and support.

Alternating between concern, helplessness and responsibility

The first theme described the fathers' feelings of having shortcomings and of helplessness due to their own emotional barriers, uncertainty and fear, and their concern for their child's psychosocial health. They feared that they could inflict an extra burden onto the child or adolescent by addressing their weight gain or dietary restrictions. This helplessness and concern gave rise to an ambivalent feeling towards their perceived responsibility and how to help their children.

Recognising their own food preferences and eating habits in their children.

Most of the fathers recognized themselves in their children's eating habits, and some of them experienced having the same craving for sweets and soft drinks as their child did. Some of them described a sense of never being properly full. Several of them had experienced falling back on unhealthy habits and a feeling that overeating may be followed by shame and guilt. Some of them recognized themselves in their children's search for something to eat in the cabinet, and some of the fathers admitted that they still do it themselves. One of the fathers said:

3.1.1.2 Struggling with their own feelings, shortcomings and discomfort.

Several of the fathers described that having an overweight child, and having experienced being overweight themselves, often generated negative feelings. Some of them said that they found it unfair that they put on weight more easily, and that they always needed to be careful with what they ate or needed to be "on their guard". Several of the fathers described how bad they felt if they had to limit the amount of food or say no to their children. This gave them a bad conscience and made them feel uncomfortable.

I find it difficult . . . when you know there is something they like very much, and they ask for more, and you always have to say no . . . that is not easy . . . (P7)

Some of the fathers described soreness and pain, both in themselves and in their children. These feelings led to ambivalence and emotional barriers, which prevented them from coping adequately. One of the fathers stated:

. . . She asks us if we think that she is fat, and I tell her that she is great. I think she is likely to accept this, but I don't know. It hurts me a bit when she asks about it, it is not that nice that they have this problem . . . but we often send contradictory signals: "I say you are fine, but you are not allowed to eat" . . . (P8)

Other fathers described irritation, anger and frustration about their children not being able to restrain themselves.
One of them said:

. . . My son cannot restrain himself when it comes to sweets and soda, and he is addicted to sugar. I might feel annoyed when he fails to restrain himself, but I recognize it in myself. I want to say to him: . . . "you have to understand that you can't sit and eat that much! You have to control yourself!" (P3)

Several of the fathers found it important that there should be a pleasant atmosphere around the dinner table. Several of them said that they are permissive just to keep a peaceful atmosphere when they eat. Many of the fathers expressed that mothers were stricter and more consistent, both in relation to the food that was purchased and the food that was offered in the family. One of the fathers said:

. . . Mothers are maybe stricter than fathers . . . but we try to do the same at our home, but maybe it's me giving in . . . Can't they just get some more so we can have a peaceful meal . . . instead of arguing at the table. (P7)

The fathers found it difficult to talk about the weight and dietary restrictions with their child, fearing that they could inflict shame or guilt and even contribute to their child's eating disorder. They were able to talk to their partner; however, when it came to their child, they started to feel uncertain and afraid. The uncertainty was related to figuring out the right way to say it and finding the right words with which to approach the topic with their children. The fear was related to them potentially expressing themselves in the wrong way and focusing too much on the overweight, and the fact that they could inflict an extra burden, and in the worst case psychosocial issues, on their children. One of the fathers said:

. . . there has been this fear, that we have to be careful . . . that we have to find the right words . . . so that he doesn't end up with an eating disorder . . . (P3)

3.1.1.3 Worrying about the consequences on psychosocial health.

All the fathers described numerous concerns about the consequences of overweight and obesity, including physical, social and mental health risks. Regarding physical health, they were concerned that cardiovascular diseases and diabetes could occur later in life. The fathers had experienced consequences for their children's participation in sports and performance in different activities, and described that their children did not manage to run as much or as fast as other children and were left behind. However, at this point, it was primarily the social and psychological consequences for their children's health that worried them most. All the fathers described a deep concern about their children's self-esteem, stigmatization and psychosocial health. One of the fathers said:

. . . I see that she is very sensitive to this. If her sister is getting a little rude, and I know if they start a quarrel, she will call her "fat girl" . . . and I see that it hurts, you know . . . (P8)

Two of the fathers described that they had struggled with overweight themselves since childhood. It was their sincere desire that their own children should avoid the same experiences and not carry the overweight with them into adulthood. One of the fathers said that his son, like himself, never found trousers that fitted, which damaged his body image and self-esteem. According to the fathers, most of the children had already experienced teasing, bullying and stigmatization, and they had observed that this affected their children in different ways and to different degrees.
One of the fathers stated:

A desire to help and protect.

The fathers expressed a concern that the overweight would be a lasting challenge and wanted to help their children so that they could avoid having to struggle with this for the rest of their lives. Several of them found it important to be open and that more people should get to know what others struggle with. Some of the fathers chose to focus on a healthy diet rather than the child's weight, and chose not to talk about their children's weight with the child. The fathers acknowledged their responsibility as parents on an equal footing with mothers, and felt responsible for establishing and maintaining a healthy lifestyle within the family. Several of the fathers found it important to take their time to plan, to have regular mealtimes and to be structured in their everyday lives. They believed this would help them to create healthy habits for their children and family early on, and it could help them to eat less fast food and prevent impulse buying. All the fathers underlined the importance of not having sweets and high-density food such as cookies, chocolate or potato crisps available at home. However, several of the fathers found it challenging to be a good role model, always planning healthy meals and engaging in activities and sports. One of the fathers stated:

We try to make a good plan for the meals in our family, which includes a healthy diet and sticking to the shopping list . . . (P5)

All the fathers highlighted the importance of giving the children confidence and belief in themselves through respect, encouragement and positive involvement in their children's life. Most of them underlined the importance of being good role models. One of the fathers stated:

I have this urge to connect with my kids. It is important to be part of their activities, yes, because it provides confidence. I see that the more confident they are in themselves, the more responsibility they take for their own life. . . yes, it is important that they believe in themselves. (P1)

Needing acknowledgement, and flexible and tailored professional support

The second theme described the fathers' perceived lack of arenas for conversation, their need for acknowledgement of their perceived parental role, and a desire to be included in consultations. In addition, they asked for flexible and tailored professional support in their efforts to prevent further weight gain in their children.

Lack of arenas for conversation.

Most of the fathers found it perfectly natural to talk about their children's diet, activity and overweight with the child's mother. Several of the fathers felt, however, that it was not natural to talk to other fathers about their children's diet or problems. Several of them described that they were most comfortable discussing activities, sports and handcrafts with other fathers. They believed that mothers have more natural arenas and networks for talking about this issue than fathers have.

A need for inclusion and acknowledgement.

The fathers in this study described that they are more engaged in their children's life than their parents were. They described keeping up with their children at home, at school, with homework and in leisure activities, and believed that they have an equally important parental role as mothers do.
Some of the fathers expressed a need for greater acknowledgement of their parenting role in general, and all the fathers wanted to be included and involved in the family's efforts to prevent and treat their children's overweight or obesity. However, they experienced that all information and communication between the child health services and the family went through the child's mother. Some of them described a feeling of being set aside, and one of the fathers felt that there was a certain degree of discrimination, since the kindergarten, school and the child health services always contacted his child's mother first. Most of the fathers wanted to be more included by health professionals and to receive guidance on discussing this problem with their children, without imposing additional burdens on their child:

I believe that our public health nurse at the child health clinic could help us to manage our daughter's weight problem and give us some advice on how to talk to her. "What is the best way to deal with this problem?" . . . however, they could be better at contacting and including me, not only the child's mother . . . (P6)

Calling for flexibility and access to counselling and support.

Several of the fathers had experienced that it could be difficult to attend counselling at the child health clinics due to their work. Most of them expressed an understanding of the difficulties in organizing extended opening hours; however, they hoped for greater flexibility. Some of the fathers requested counselling in the afternoon and believed that it would be easier for them to attend appointments after regular working hours.

"It is quite difficult for me to attend a meeting or a counselling session at the child health clinic during working hours. It would be much easier for me to attend in the afternoon". (P3)

Discussion

In this study we asked Norwegian fathers what they experienced while trying to help and to prevent further weight gain in their children with overweight or obesity. In this section, we will discuss the results and, in the light of these experiences, discuss how HPs may help fathers to prevent further weight gain in their children. The overall theme and the two themes will guide the discussion. The overall theme, Balancing between assuming and avoiding responsibility for weight management with a desire to preserve the child's dignity, reflects what the fathers in this study experienced when they tried to prevent further weight gain in their children. The fathers expressed a deep concern about their child's self-esteem and psychosocial health. They were highly sensitive towards their child's weight challenges and tried not to offend their child. The results suggest that, in essence, the fathers want their children to have dignity so that they can improve their self-image and maintain their integrity. Dignity can be related to self-esteem, as it refers to the worth of human beings and the right to be valued and respected. Dignity is the opinion of others about our worth, and can be understood as our subjective fear of the opinions of others (Schopenhauer & Saunders, 2004). For all parents, their child's worth is important. To preserve the child's dignity, we suggest that the fathers in this study tried to create a balance between assuming and avoiding responsibility. This balancing act can be understood as an effort to manage the emotional burden of experiencing their children's challenges, like stigma and low self-esteem.
In addition, some of the fathers recognized themselves in their children's issues, including their own previous and current feelings of guilt, shame or weight stigma. Parents may feel blamed or shamed for their children's weight (Wolfson et al., 2015). A previous study shows that mothers can be the target of weight stigma and can experience negative emotions like blame or shame due to their perceived role and responsibility regarding their children's weight (Gorlick et al., 2021). Based on the fathers' descriptions of their concern, helplessness, perceived responsibility and emotional distress, we suggest that this blame or guilt applies equally to fathers and may lead to an avoidance of responsibility. Avoidance of responsibility may also be seen as an emotional defence against their distress. Management of, and pride in, an active involvement in their children's issues and wellbeing may help the fathers to assume responsibility. Several of the fathers highlighted the importance of giving their children belief in themselves and increasing the children's self-efficacy, which is something they try to do. Both shame and self-efficacy are constructs closely tied to the foundation of the self (Lewis, 1971). Reducing feelings of shame or increasing self-efficacy is a two-sided process, and shame and self-efficacy are correlated (Baldwin et al., 2006). A new direction is to treat shame "through the backdoor" by improving self-efficacy; treating either aspect could positively impact the other. By helping people heal from shame, self-efficacy could be raised, and by helping to raise self-efficacy, shame could be reduced (Baldwin et al., 2006). To help fathers help their children and prevent further weight gain, HPs need to address self-conscious feelings like shame and internalized stigma, as well as the feeling of responsibility related to dilemmas about, for example, how to talk about food restrictions with their children. Several of the fathers found it difficult to talk about overweight and dietary restrictions with their child, fearing that they could inflict feelings of shame or guilt on them, or even contribute to their child's eating disorder. As our results revealed, the fathers were uncomfortable talking to their children about food restrictions and avoided this situation. Several of them said that they are permissive just to keep a peaceful atmosphere when they eat. This is in line with the previously mentioned literature review that showed differences in mothers' and fathers' feeding practices, where fathers were generally less likely to monitor children's food intake and to limit access to food compared to mothers (Khandpur et al., 2014). The fathers in our study have helped to deepen our understanding of this complex and difficult parental responsibility, the ambivalence they feel, and the strong desire to preserve the child's dignity and to have a peaceful and harmonious everyday life for their family. We know that conflicts may generate more problems, like tantrums or a refusal to eat (Khandpur et al., 2016). Previous studies (Eg et al., 2017; Salemonsen et al., 2022) show that fathers believed that agreement between parents and minimizing conflicts are important for managing lifestyle changes, and that the family climate affected the child's eating habits. Dysfunction in parental alliances and family functions could be a predictor of overweight and obesity in adolescence (Mazzeschi et al., 2013).
These findings are in line with the perceptions and experiences of the fathers on the importance of support, collaboration and alliance between the parents (Salemonsen et al., 2022). The fathers in our study seemed to intuitively understand this association, believing that it would not benefit the child to have high conflict and a warzone-like atmosphere at mealtimes, and they are permissive in order to protect the child's dignity. At the same time, this fear of conflict, and almost an avoidance of responsibility, could be a strong call for help to receive guidance on how to deal with food restrictions and how to approach this delicate and difficult challenge. In addition, this reflects the complexity of childhood weight management and shows how emotional barriers and ambivalence may lead to the alternation between concern, helplessness, and assuming and avoiding responsibility. The second theme, Needing acknowledgement, and flexible and tailored professional support, reflects the fathers' need for help and guidance in their caregiving and efforts to prevent further weight gain in their children. Our findings illustrate that the fathers want to be involved, to help, to contribute to a better life and to protect their children. They wanted to be included and to receive guidance on discussing this problem with their children, without imposing an additional burden on their child. The PHNs and child health services are expected to follow up parents in primary care (The Norwegian Directorate of Health, 2017). Despite fathers' wishes to be included and their perception of their equal parental role and responsibility, this study reveals that the fathers do not have the same access to support at child health clinics. They are not included in the same way as mothers, since they do not receive information and other correspondence from the child health services. Since the fathers described a lack of arenas for conversation outside the family and expressed a need to be more included and acknowledged by HPs, it will be necessary to facilitate greater flexibility and access to counselling and professional support to help them manage their responsibility. A previous study (Lowenstein et al., 2013) explored fathers' perceptions and experiences of communicating with their children's healthcare provider during visits to clinics regarding weight, diet and physical activity. They found that fathers were involved in their children's healthcare and that fathers found providers to be helpful partners, depending on the quality of this relationship. However, they felt "left out" during clinic appointments. Other studies found that fathers want to play a more active role during pregnancy, childbirth and follow-up at the child health clinic; however, they felt excluded by HPs (Høgmo et al., 2021; Solberg & Glavin, 2018). This requires a change in child healthcare clinics and services so that they treat fathers as independent and equal carers, and provisions should be made for fathers to be included to a greater degree. To help involve and include fathers, HPs can provide them with a specific invitation to an appointment (Ahmann, 2006). Work conflicts for fathers, and the greater convenience of mothers accompanying children to healthcare visits, have been described (Ahmann, 2006).
Flexibility in counselling may be a way to meet fathers' needs by, for example, trying to find a time at the beginning of the day or at the end of the clinic's opening hours, and by inviting the fathers to the clinic personally, not just sending a message through mothers. HPs may increase the fathers' self-efficacy by facilitating support, both emotional and practical. This may be an important therapeutic contribution to reduce blame or shame in parents related to their child's weight excess. Reducing the feeling of shame or blame may help parents assume responsibility. Most of the fathers in our study emphasized focusing on a healthy lifestyle, including a healthy diet and physical activity, and not putting so much focus on weight. According to Golden et al. (Golden et al., 2016), guidance about messages around obesity and eating disorders should focus on a healthy lifestyle rather than on weight. Evidence suggests that obesity prevention and treatment, if done correctly, will not lead to someone developing an eating disorder (Golden et al., 2016). Questions from the fathers regarding the "right" or "correct" way to do this are appropriate, as there is a general and established concern among both parents and health professionals regarding the uncertainty of what factors can contribute to eating disorders. One way of helping these families, especially the fathers, is for HPs to address the fathers' own feelings of guilt and shame, as well as their concerns, feelings of helplessness and discomfort. This could be done by boosting the fathers' sense of self-worth through long-term support from competent and sensitive HPs. The support must be based on dialogue and a non-judgemental attitude. Studies exploring beneficial support in weight management show that dialogue, a non-judgemental attitude and a shared responsibility are useful for self-management (Salemonsen et al., 2020). Using non-biased language, empathic and empowering counselling techniques, and addressing stigma and bullying in clinic visits are also recommended (Pont et al., 2017). To facilitate weight management, continued support from HPs should be offered to parents (Nowicka et al., 2022). Inclusion, acknowledgement and support from HPs, tailored to the fathers' and the families' needs, may help the fathers in preventing further weight gain in their children. Tailored professional support may draw on the recommendations from Neumark-Sztainer (Neumark-Sztainer, 2009), which emphasize promoting a positive body image and encouraging more enjoyable meals. Puhl et al. (Puhl et al., 2023) found that most parents wanted guidance on how to navigate weight-related topics, including promoting healthy behaviours and a positive body image, and that there is a need for education to help parents engage in supportive conversations about body weight.

Strengths and limitations

No studies have been found that explore Norwegian fathers' feeding practices, how Norwegian fathers experience helping to prevent further weight gain in their children, or what they need in order to help their children. Our study is the first to explore Norwegian fathers' experiences in weight management for their children, and adds to the field of parental experiences, especially fathers' experiences, of caring for their child in order to prevent further weight gain.
The strength of the study is that it provides knowledge exclusively from the fathers' perspectives and gives a deeper understanding of the emotional challenges that influence weight management. This knowledge about fathers' experiences in childhood weight management could be helpful for HPs and PHNs when they provide help and support to families, in meeting the fathers' needs in overweight and obesity management and in increasing HPs' own awareness of paternal concerns and needs. This insight into the fathers' own emotional barriers may help HPs to provide tailored guidance and support and to better understand the necessity of acknowledging, involving and including fathers. This paper meets the requirements of the COREQ checklist (Tong et al., 2007). However, some limitations should be addressed. Fathers self-selected to participate in our study and may have been more engaged and involved in caring for their children than other fathers with the same challenges. Several of the fathers had participated in group or individual consultations with a PHN, GP or paediatrician over a period of time prior to the interviews, and they exhibited a high degree of self-reflection and knowledge. This may reflect a selection bias and hence limit the transferability of the findings (Lincoln & Guba, 1985). The fathers were not actively involved throughout the research process.

Conclusion and implications for clinical practice

The fathers in this study wanted to care for and help their children with overweight or obesity to prevent further weight gain. However, they found it difficult to talk about obesity and dietary restrictions with their child, fearing that they could inflict shame or guilt on them and even contribute to their child's eating disorder. The fathers expressed deep concerns about their children's self-esteem and psychosocial health. They felt that they were alternating between concern, helplessness and responsibility, and that emotional barriers, ambivalence and concerns prevented them from providing adequate weight management. The fathers tried to find a balance between assuming and avoiding responsibility for weight management with a desire to preserve the child's dignity. In order to help fathers prevent further weight gain in their children, fathers need guidance on how to talk to their children about, for example, food restrictions, while at the same time emphasizing safeguarding their children's dignity and without imposing additional burdens on the child. The fathers described a lack of arenas for conversation outside of the family and expressed a need to be included and acknowledged by HPs and to have access to counselling and professional support. HPs should address parents' own emotional barriers and include fathers to a greater degree as a resource in family-centred counselling to help prevent and treat childhood obesity. To involve the fathers in counselling related to childhood overweight and obesity, HPs may need to provide tailored long-term emotional support, in addition to practical support and useful tools on how parents can communicate with their children on this topic. This will require competent and sensitive HPs who base their support on dialogue, a non-judgemental attitude and best practice.

Abbreviations: GP, general practitioner; HP, healthcare professional; PHN, public health nurse.
Jet-installation noise reduction with flow-permeable materials

This paper investigates the application of flow-permeable materials as a solution for reducing jet-installation noise. Experiments are carried out with a flat plate placed in the near field of a single-stream subsonic jet. The flat plate is modular and the solid surface near the trailing edge can be replaced with different flow-permeable inserts, such as a metal foam and a perforated plate structure. The time-averaged jet flow field is characterized through planar PIV measurements at three different velocities (Ma = 0.3, Ma = 0.5 and Ma = 0.8, where Ma is the acoustic Mach number), whereas the acoustic far field is measured with a microphone arc-array. Acoustic measurements confirm that installation effects cause a significant noise increase, up to 17 dB for the lowest jet velocity, particularly at low and mid frequencies (i.e. St < 0.7, with the Strouhal number based on the jet diameter and velocity), and mostly in the upstream direction of the jet. By replacing the solid trailing edge with the metal foam, noise abatement of up to 9 dB is achieved at the spectral peak, and the installed curves tend to collapse with those of the isolated jet, especially for the highest jet velocity. The results also show that the flow-permeable materials are effective in reducing jet-installation noise in all assessed directions, particularly upstream. This indicates that the dipole sources on the plate are mitigated. In the downstream direction, for the metal foam case, the levels reach those of the isolated jet for θ > 120° and Ma > 0.5, indicating that there is no change in the turbulence-mixing noise component due to the presence of the plate. For the metal foam, there is an increase in amplitude, but no significant change to the spectral peak frequency, whereas for the perforated plate there is a low-frequency noise increase with a change in the spectral peak. It is believed that this difference is caused by the high permeability of the metal foam, which produces a new singularity and thus a new scattering region at the solid-permeable junction. These results show that a surface treatment with flow-permeable materials is a potentially promising mitigation solution for jet-installation noise. However, the mechanisms that provide such reductions are still unclear. Further work is required to investigate the phenomena happening at the junction region and inside the flow-permeable structure, particularly focusing on the change of impedance, the pressure imbalance and the effect of permeability/resistivity of the flow-permeable structures, since it is possible to achieve substantial noise reduction with a perforated structure, even with a low porosity.

Fig. 1. (a) Sketch of the FAST facility layout with a nozzle mounted in the anechoic chamber and the air supply system at the basement below. Adapted from [17]. (b) Nozzle mounted with the flat plate inside the facility.

Facility and models

The experiments are performed in the Free jet AeroacouSTic facility (FAST) at the von Kármán Institute for Fluid Dynamics (VKI). This facility consists of a circular jet rig placed in a semi-anechoic room, as shown in Fig. 1a, with a cut-off frequency of 350 Hz [17]. For the tests performed in this work, the air is supplied by a 7 bar pressure line located beneath the test chamber. The line is also bypassed to a seeding generator for PIV measurements. The seeded flow merges with the pressurized air in a buffer tank to ensure correct mixing.
The jet blows vertically into an extractor equipped with a muffler [17]. In the anechoic chamber, a laser source and cameras are mounted for PIV measurements, whereas a microphone arc-array is present for the acoustic measurements. The picture in Fig. 1b shows the jet nozzle installed with the flat plate inside the facility. A circular convergent nozzle with an exit diameter D_j = 50 mm and a contraction ratio of 36:1 is designed based on the geometry of the SMC000 nozzle, which has been used for several investigations of isolated and installed subsonic turbulent jets [2,18,19]. This nozzle, manufactured in aluminium, is attached to a straight pipe of 300 mm diameter. A cut view of the nozzle is shown in Fig. 2 along with its main dimensions. The origin of the coordinate system used in the analyses is positioned at the center of the nozzle exit plane. For the installed configuration, a stainless steel flat plate is mounted in the vicinity of the nozzle. The plate is realized with a modular structure, which allows different surface lengths to be easily investigated. The length of each part is shown in Fig. 3. The surface has a total dimension of 500 mm × 1140 mm × 10 mm. The large width is chosen to avoid side-edge scattering. The aft piece consists of a sharp trailing edge with a chamfer angle of 40°. This modular design also allows for an easy replacement of the solid structure by the flow-permeable materials. Two pieces at the middle section (shown in blue in Fig. 3) can be replaced by the flow-permeable ones, allowing for the investigation of different porosity lengths (L_p = 1D_j and L_p = 3D_j). Different geometric cases are tested by changing the length L and height h of the plate. As shown in Fig. 4, the length is defined as the distance between the trailing edge and the nozzle exit plane, and the height as the radial position with respect to the jet centerline. A baseline installed case is defined with L = 6D_j and h = 1.5D_j. The leading edge of the plate is mounted upstream of the nozzle exit plane to avoid scattering at that region. A different plate length is also investigated (L = 8D_j) for a fixed h = 1.5D_j, as well as a different radial position (h = 2D_j) for a fixed L = 6D_j. Due to set-up constraints, it is not possible to mount the plate at a radial position h < 1.5D_j. Therefore, the effect of the plate height is addressed by moving it away from the jet. Moreover, with a length shorter than L = 6D_j at that position, it is possible that the relative noise increase due to installation effects would be much lower, particularly for mid and high jet Mach numbers, which could compromise the parametric analysis of the flow-permeable treatment. The tests are performed at three jet flow velocities with different acoustic Mach numbers Ma, defined as the jet velocity U_j divided by the ambient speed of sound c_0. The flow characteristics, such as the nozzle pressure ratio (NPR) and the static temperature ratio TR, are reported in Table 1, as well as the Reynolds number Re based on the nozzle exit diameter. The measurements are conducted at static conditions, i.e. no flow external to the jet, at average ambient conditions of p_amb = 100.6 kPa and T_amb = 294 K.
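The flow quantities of the kind reported in Table 1 can be roughly estimated from the acoustic Mach number alone under a few standard assumptions. The sketch below does this for an unheated, fully expanded jet (total temperature equal to ambient, exit static pressure equal to ambient, viscosity from Sutherland's law); it is only an estimate for orientation, not the authors' computation, and it uses the ambient conditions quoted in the text.

```python
# Sketch: isentropic/ideal-gas estimate of NPR, TR and Re from the acoustic
# Mach number Ma = U_j / c_0, assuming an unheated, fully expanded jet.
import math

GAMMA, R_AIR = 1.4, 287.05
CP = GAMMA * R_AIR / (GAMMA - 1.0)

def sutherland_mu(T):
    """Dynamic viscosity of air [Pa s] from Sutherland's law."""
    return 1.716e-5 * (T / 273.15) ** 1.5 * (273.15 + 110.4) / (T + 110.4)

def jet_conditions(Ma, D_j=0.05, p_amb=100.6e3, T_amb=294.0):
    c0 = math.sqrt(GAMMA * R_AIR * T_amb)        # ambient speed of sound
    U_j = Ma * c0                                # acoustic Mach number definition
    T_j = T_amb - U_j ** 2 / (2.0 * CP)          # static temperature (total T = ambient)
    M_j = U_j / math.sqrt(GAMMA * R_AIR * T_j)   # jet Mach number
    NPR = (1.0 + 0.5 * (GAMMA - 1.0) * M_j ** 2) ** (GAMMA / (GAMMA - 1.0))
    TR = T_j / T_amb
    rho_j = p_amb / (R_AIR * T_j)                # fully expanded: exit pressure = ambient
    Re = rho_j * U_j * D_j / sutherland_mu(T_j)  # Reynolds number on exit diameter
    return NPR, TR, Re

for Ma in (0.3, 0.5, 0.8):
    NPR, TR, Re = jet_conditions(Ma)
    print(f"Ma = {Ma}: NPR = {NPR:.3f}, TR = {TR:.3f}, Re = {Re:.2e}")
```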
The metal foam is manufactured through electrodeposition of pure Ni on a polyurethane foam, which is subsequently coated with a high-alloyed powder [20]. This type of material consists of a homogeneous microstructure with a three-dimensional repetition of a dodecahedron-shaped cell [16]. Rubio-Carpio et al. [15] have investigated the application of this material with different cell diameters d_c for airfoil TBL-TE noise reduction. A structure with nominal d_c = 800 μm is chosen for this work, since its porosity and permeability characteristics are available and significant TBL-TE noise reduction was obtained with it [15]. Two inserts are manufactured for the plate, as shown in Fig. 5a, in order to assess the effect of the porosity length on the noise reduction. The second type of flow-permeable material consists of a 3D-printed perforated insert with straight holes connecting the upper and lower sides of the plate, as shown in Fig. 5b. This insert is manufactured in R5, a liquid photopolymer that offers good surface finishing and strength properties [21]. The holes have a diameter d_h = 800 μm and a spacing of l_h = 2 mm.

The flow-permeable materials are characterized by properties such as the porosity σ and the permeability K. The porosity is defined from the ratio between the volumetric density of the flow-permeable material ρ_p and that of the solid bulk material ρ_s, as shown in Eq. (1):

σ = 1 − ρ_p / ρ_s.    (1)

The permeability is obtained through the Hazen-Dupuit-Darcy equation (Eq. (2)), which prescribes the static pressure loss Δp across a homogeneous sample of thickness t [22]:

Δp / t = (μ / K) v_d + ρ C v_d²,    (2)

where μ is the flow dynamic viscosity, ρ is the flow density, v_d is the Darcian velocity (defined as the ratio between the volumetric flow rate and the cross-section area of the sample [22]), and K and C are the permeability and form coefficients, which account for pressure losses due to viscous and inertial effects, respectively. For the metal foam, the porosity and permeability parameters were obtained by Rubio-Carpio et al. [15]: the former by measuring the density of small samples, the latter from characterization experiments performed with a permeability rig [15]. The results are reported in Table 2. A similar procedure has been carried out for the 3D-printed perforated material. The resistivity R (R = μ/K) is also included in the table for comparison; a short numerical illustration of the fit of Eq. (2) is given further below.

Flow field measurements

A two-dimensional jet velocity field is obtained through PIV measurements on the xy-plane (normal to the nozzle exit). This method allows the measurement of the time-averaged velocity components u and v (in the axial and radial directions, respectively) and of the r.m.s. of their fluctuations, u_rms and v_rms. The PIV measurements are performed only for the isolated jet configuration, since the investigated configurations (length and height) are chosen so as to avoid grazing flow on the surface; it has been shown in a previous investigation that this does not affect the noise generated by turbulence mixing [6].

Seeding particles are produced by a PIVTEC Pivpart45 generator, comprising 45 Laskin nozzles and using Shell Ondina 919 oil, with an average particle size of 1 μm. These particles have a relaxation time of 1 μs [23], which is suitable given the flow acceleration in the nozzle. The illumination is provided by laser pulses generated with a double-cavity Quantel CFR200 Nd:YAG system. This equipment provides a laser wavelength of 532 nm, with a maximum energy of 200 mJ per pulse and a pulse duration of 8 ns.
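Returning briefly to the characterization of the flow-permeable inserts, the fit of Eq. (2) to permeability-rig data can be illustrated with a minimal sketch. The pressure-drop values below are hypothetical placeholders, not the measurements of Rubio-Carpio et al. [15]; only the functional form of the fit follows the equation above.

```python
import numpy as np

# Least-squares fit of the Hazen-Dupuit-Darcy relation of Eq. (2),
#   dp / t = (mu / K) * v_d + rho * C * v_d**2,
# to pressure-drop data from a permeability rig. The data points below are
# hypothetical values for illustration, not measured characteristics.
mu  = 1.81e-5                                      # air dynamic viscosity [Pa s]
rho = 1.2                                          # air density [kg/m^3]
t   = 0.010                                        # sample thickness [m]
v_d = np.array([0.5, 1.0, 2.0, 4.0, 6.0])          # Darcian velocity [m/s] (hypothetical)
dp  = np.array([35.0, 80.0, 190.0, 460.0, 780.0])  # pressure loss [Pa] (hypothetical)

# dp/t = a*v_d + b*v_d**2  with  a = mu/K  and  b = rho*C
A = np.column_stack([v_d, v_d**2])
a, b = np.linalg.lstsq(A, dp / t, rcond=None)[0]

K = mu / a          # permeability [m^2]
C = b / rho         # form coefficient [1/m]
R = mu / K          # resistivity, as listed in Table 2 [Pa s / m^2]
print(f"K = {K:.2e} m^2, C = {C:.0f} 1/m, R = {R:.0f} Pa s/m^2")
```

With measured data in place of the placeholders, the same two-parameter fit yields the K and C values of the kind reported in Table 2.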
Two LaVision Imager SX4M cameras (resolution: 2360 × 1776 pixels; frame rate: 31 Hz; pixel size: 5.5 × 5.5 μm; minimum time interval: 250 ns; digital output: 12 bit), positioned 0.5 m from the jet axis, are used for image recording. The cameras are equipped with two Nikkor f/1.8 lenses of 50 mm focal length. This configuration allows measurements over two fields of view (FOV), in order to capture a larger portion of the jet development, as shown in Fig. 6a. The FOVs of each camera are shown in Fig. 6b, with an overlap of 1.25 D_j between them. The final FOV has a dimension of 12 D_j × 4 D_j (0.6 m × 0.2 m) and is indicated by the black lines. The resolution in the final FOV is approximately 6 pixel/mm. With this set-up, 1000 pairs of particle images are acquired at a sampling rate of 15 Hz. The illumination and image acquisition are triggered synchronously by the LaVision DaVis 8.4 software, which is also used for the post-processing of the images. The separation time between paired images is tuned with respect to the jet velocity in order to obtain a maximum displacement of 25 pixels at the jet core. This value is chosen to ensure a displacement of at least 3 pixels in regions of lower velocity. A multi-pass cross-correlation algorithm [24] with window deformation [25] is applied. The final interrogation window size is 24 × 24 pixels with an overlap factor of 75%, which provides a final spatial resolution of 4 mm and a vector spacing of 1 mm. Spurious vectors, on the order of 1% of the total amount, are discarded by applying a universal outlier detector and are replaced by interpolation based on adjacent data [26]. The main parameters of the PIV set-up are reported in Table 3.

The uncertainty of the PIV measurements is estimated following the method proposed by Wieneke [27]. This method provides the uncertainty of a PIV displacement field by projecting the particles from one frame to the other with the obtained vectors and checking the resultant disparity [27]. The calculations result in a maximum uncertainty of 0.03 U_j for the mean velocity and 0.04 U_j for the velocity fluctuations u_rms inside the potential core region. At the lipline (y = 0.5 D_j), due to the strong flow unsteadiness, maximum uncertainty values of 0.06 U_j and 0.08 U_j, respectively, are obtained.

Acoustic measurements

The acoustic measurements are carried out with 12 Bruel & Kjaer 4938 1/4" microphones (frequency range: 4 Hz to 70 kHz; pressure-field response: ±2 dB; maximum output: 172 dB ref. 2 × 10^-5 Pa). The microphones are connected to Bruel & Kjaer 2670 1/4" microphone preamplifiers, and a Bruel & Kjaer NEXUS Type 2690-A conditioner is used to amplify the recorded signals. The microphones are mounted on an arc-array dimensioned for measurements at a 1 m radius (20 D_j, centered at the origin of the coordinate system). The polar angle follows the convention of θ = 0° in the upstream direction of the jet axis; therefore, the microphone at θ = 90° is aligned with the nozzle exit. The microphones are mounted from θ = 40° to θ = 150°, spaced 10° apart, as shown in Fig. 7. For the installed configuration, the arc-array is mounted on the reflected side of the plate (with the jet between the plate and the array), in order to also assess the effect of the flow-permeable materials on the reflection of the jet acoustic waves. The measurements are performed with a sampling frequency of 51.2 kHz for 20 s.
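The recorded time series are reduced to narrow-band spectra as described in the next paragraph. A minimal sketch of that processing chain, assuming the stated parameters (2048-sample blocks, Hanning window, 50% overlap, 51.2 kHz sampling) and a synthetic pressure signal in place of the measured one, is given below; the spherical-spreading rescaling to a 100 D_j observer is an assumption about how the distance scaling is applied.

```python
import numpy as np
from scipy.signal import welch

# Narrow-band SPL spectrum from a microphone recording, using the processing
# parameters given in the text: 2048-sample blocks, Hanning window, 50% overlap,
# f_s = 51.2 kHz (hence a 25 Hz frequency resolution). The signal below is
# synthetic noise so that the sketch is self-contained; it is not measured data.
fs, T = 51200.0, 20.0                    # sampling frequency [Hz] and duration [s]
D_j, p_ref = 0.050, 2.0e-5               # nozzle diameter [m], reference pressure [Pa]
U_j = 0.3 * 343.0                        # approximate jet velocity for M_a = 0.3 [m/s]

rng = np.random.default_rng(0)
p = 0.05 * rng.standard_normal(int(fs * T))      # placeholder pressure signal [Pa]

f, Pxx = welch(p, fs=fs, window="hann", nperseg=2048, noverlap=1024)  # PSD [Pa^2/Hz]
df = f[1] - f[0]                                 # = 25 Hz
SPL = 10.0 * np.log10(Pxx * df / p_ref**2)       # per-bin SPL [dB]
St = f * D_j / U_j                               # Strouhal number axis

# rescale from the 20 D_j array radius to a 100 D_j observer (spherical spreading assumed)
SPL_100D = SPL - 20.0 * np.log10(100.0 / 20.0)
print(f"df = {df:.0f} Hz, usable St range ~ {350.0 * D_j / U_j:.2f} to {St[-1]:.1f}")
```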
For the post-processing, the acoustic data are split into blocks of 2048 samples for each Fourier transform and windowed with a Hanning weighting function with 50% overlap. These parameters result in a frequency resolution of 25 Hz. The spectra shown in the following sections have also been scaled to an observer at a distance of 100 D_j from the origin, similarly to the JIN benchmark studies at NASA Glenn [2].

Jet flow field

In this section, the flow field of the isolated jet is discussed. The PIV measurements are performed for the three investigated acoustic Mach numbers, and the results are displayed in terms of the time-averaged axial velocity u and the r.m.s. of the velocity fluctuations, u_rms. The jet development for M_a = 0.5 is shown in the contour plot in Fig. 8. The region corresponding to the potential core and the downstream velocity decay can be identified, as well as the spreading of the jet and its symmetry with respect to the centerline. The velocity profiles are extracted at the jet centerline and plotted in Fig. 9; the quantities are non-dimensionalized by the respective nominal jet velocity U_j. The potential core length X_c, defined as the distance between the nozzle exit and the point where u = 0.98 U_j, is reported in Table 4 for all jet velocities. These values are compared with predictions from an empirical correlation involving the jet and ambient densities, ρ_j and ρ_∞ respectively, and a good agreement is obtained between the experimental and predicted results. The centerline velocity decay downstream of the potential core is also shown to follow the trend defined by Witze [28], where α is a constant equal to 1.43 [19].

The increase in potential core length with the jet velocity is related to the change in the size of the structures in the mixing layer with the jet Reynolds number [28]. For M_a = 0.8, the structures are likely to be smaller and, thus, the merging of the shear layer at the centerline occurs further downstream. This is also confirmed by the r.m.s. of the velocity fluctuations, plotted in Fig. 9b, which are lower for higher jet velocities.

Velocity profiles in the radial direction are also obtained at two axial stations, corresponding to the trailing-edge positions of the investigated installed jet configurations (x = 6 D_j and x = 8 D_j). The profiles are plotted in Fig. 10, along with a line at y = 1.5 D_j, which is the radial position where the plate is closest to the jet, for M_a = 0.3; similar results have been obtained for the other jet velocities. The axial velocity is zero at y = 1.5 D_j for x = 6 D_j and, therefore, a plate with a trailing edge at this position is located outside of the plume. Conversely, for x = 8 D_j the local axial velocity at y = 1.5 D_j is non-zero and equal to 0.05 U_j. However, due to the relatively low velocity at this point, it is not likely that the surface significantly changes the characteristics of the turbulent structures in the mixing layer, i.e. no changes in the noise due to turbulence mixing are expected even for the longest surface. These results also allow the calculation of the jet spreading angle δ: values of δ = 9° (M_a = 0.3), δ = 8.9° (M_a = 0.5) and δ = 8.6° (M_a = 0.8) are obtained. These results are consistent with those from the NASA Glenn tests [19], and they confirm that the jet is fully turbulent.
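As an illustration of the quantities discussed in this section, the sketch below extracts the potential core length with the u = 0.98 U_j criterion from a synthetic centreline profile and evaluates a Witze-type decay with α = 1.43. Both the profile and the exponential form of the decay are assumptions made only for illustration; the exact expression should be taken from Ref. [28].

```python
import numpy as np

# Potential core length X_c from a centreline velocity profile, using the
# u = 0.98*U_j criterion stated in the text, plus an assumed Witze-type decay
# downstream of the core with alpha = 1.43. The synthetic profile stands in
# for the PIV data and the decay form is an assumption, not the exact model.
alpha = 1.43

def core_length(x, u_over_Uj, threshold=0.98):
    """First axial station (in D_j) where the centreline velocity drops below the threshold."""
    below = np.flatnonzero(u_over_Uj < threshold)
    return x[below[0]] if below.size else np.nan

def witze_decay(x, X_c):
    """Assumed exponential decay of u_c/U_j for x > X_c (sketch only)."""
    u = np.ones_like(x, dtype=float)
    past = x > X_c
    u[past] = 1.0 - np.exp(alpha / (1.0 - x[past] / X_c))
    return u

# synthetic centreline profile: uniform core followed by a 1/x-type decay
x = np.linspace(0.0, 12.0, 121)                       # axial coordinate in D_j
u = np.where(x < 6.0, 1.0, 6.0 / np.maximum(x, 1e-9))

X_c = core_length(x, u)
print(f"X_c = {X_c:.1f} D_j")
print(f"Witze-type estimate at x = 10 D_j: u_c/U_j = {witze_decay(np.array([10.0]), X_c)[0]:.2f}")
```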
Far-field acoustic results

In this section, the results of the acoustic measurements for the installed jet with flow-permeable materials are reported and compared with the isolated and installed (solid trailing edge) jets, initially for the baseline plate configuration (L = 6 D_j and h = 1.5 D_j). Two types of flow-permeable materials are investigated: a metal foam and a perforated plate with straight holes; both inserts have a length L_p = 3 D_j. The results are displayed in Fig. 11.

Firstly, comparing the spectra for the isolated and installed (solid plate) jets, it is shown that installation effects are responsible for a strong noise increase at low and mid frequencies; for M_a = 0.3 and θ = 40°, there is a 17 dB increase in SPL with respect to the isolated case at the installed spectral peak (St = 0.37). This strong noise amplification occurs up to St = 0.7 for this condition, and at higher frequencies there is a constant shift of approximately 3 dB from the isolated curve, which characterizes the reflection of acoustic waves on the surface. In the sideline direction (θ = 90°), the SPL increases for St < 0.3, whereas for 0.3 < St < 0.6 there is a reduction with respect to the upstream direction. Therefore, for θ = 90° the spectral peak shifts to a lower frequency, possibly below the range where the measured data are reliable; this implies that the effect of the flow-permeable materials at the spectral peak might not be significant for a full-scale application, where the peak is likely located below the hearing range. Nonetheless, at a frequency of St = 0.25 there is also a 17 dB increase with respect to the isolated case. In the downstream direction of the jet (θ = 150°), there is a maximum amplification of 7 dB at St = 0.25, due to the dipolar directivity of the noise generated by the plate as well as to increased noise from turbulence mixing by the jet. For higher jet velocities, similar trends are obtained, but the relative amplification with respect to the isolated noise levels is lower due to the increased significance of turbulence-mixing noise.

For the plates with flow-permeable treatments, the spectra show considerable noise reduction with respect to the solid configuration. Comparing the two different treatments, the metal foam provides more benefits than the perforated inserts for all tested cases. Since the former has a higher permeability, it is likely that the differences in noise levels between the two cases can be attributed to a better pressure balance between the upper and lower sides of the plate for the metal foam case, thus reducing the surface pressure fluctuations near the trailing edge and, consequently, the noise due to scattering. The differences between the two flow-permeable configurations are more noticeable at low frequencies (St < 0.4). This occurs because, for θ = 40°, the noise reduction with the perforated trailing edge is approximately constant for St < 0.5, whereas for the metal foam there is a change in the spectral shape in that direction, with a new distinct peak at St = 0.45. This is an indication that there is an additional noise source other than the trailing edge. Similar trends are obtained for higher jet velocities. For M_a = 0.5, a similar absolute noise abatement is obtained at the spectral peak as in the previous case (10 dB reduction with the metal foam and 6 dB with the perforated insert). For this velocity, the noise increase due to installation effects is relatively lower when compared to the M_a = 0.3 jet.
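The peak abatement figures quoted above can be expressed as a reduction spectrum, ΔSPL = SPL_solid − SPL_treated, evaluated at the installed spectral peak. The sketch below uses placeholder spectra shaped only to make the computation runnable; they are not the measured curves of Fig. 11.

```python
import numpy as np

# Reduction spectrum Delta-SPL = SPL_solid - SPL_treated and the abatement at
# the installed spectral peak. The two spectra below are placeholders only.
St = np.linspace(0.05, 2.0, 400)
SPL_solid = 100.0 - 12.0 * St + 8.0 * np.exp(-((St - 0.37) / 0.15) ** 2)  # hypothetical installed spectrum
SPL_foam  = SPL_solid - 9.0 * np.exp(-((St - 0.37) / 0.25) ** 2)          # hypothetical treated spectrum

dSPL = SPL_solid - SPL_foam
i_peak = np.argmax(SPL_solid)                      # index of the installed spectral peak
print(f"installed peak at St = {St[i_peak]:.2f}, "
      f"reduction at the peak = {dSPL[i_peak]:.1f} dB, "
      f"maximum reduction = {dSPL.max():.1f} dB")
```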
With the same absolute noise reduction provided by the flow-permeable materials, the spectra therefore approach the levels of the isolated configuration more closely. This effect becomes more visible for the M_a = 0.8 jet, where the curves of both treated surfaces practically collapse with the isolated one for θ > 90°, indicating that the trailing-edge source has been completely mitigated in these cases.

The Overall Sound Pressure Level (OASPL) for each case is calculated at all polar angles by integrating the SPL spectra in the range 350 Hz < f < 20 kHz, and the results are shown in the polar plots in Fig. 12 for the three jet velocities. The directivity plots show that the largest differences between the isolated and installed (solid plate) cases are found in the upstream direction, which is consistent with noise from scattering at the plate trailing edge [4]. In the downstream direction, this difference is smaller and the installed curves tend to collapse with those of the isolated jet, especially for the highest jet velocity. The results also show that the flow-permeable materials are effective in reducing jet-installation noise in all assessed directions, particularly upstream, which indicates that the dipole sources on the plate are mitigated. In the downstream direction, for the metal foam case, the levels reach those of the isolated jet for θ > 120° and M_a > 0.5, indicating that there is no change in the turbulence-mixing noise component due to the presence of the plate.

The differences between the OASPL for flow-permeable and solid surfaces are reported in Table 5 for a polar angle θ = 40°; the overall increase due to installation effects with respect to the isolated jet is also included for reference. It can be seen that the metal foam provides a higher noise reduction than the perforated structure, particularly for M_a = 0.3: of the 11.5 dB overall increase due to installation effects, 7.7 dB can be recovered by applying the metal foam at the plate trailing edge. For higher jet velocities, the installation noise is practically eliminated with this porous material. Despite having a lower permeability, the perforated trailing edge still provides a significant noise reduction, of approximately 4 dB for M_a = 0.3 and M_a = 0.5.

The dependence of the OASPL on the jet velocity for an angle θ = 40° is also calculated and plotted in Fig. 13 for each case. Reference curves are added for OASPL ∝ U_j^8, which is consistent with turbulence-mixing noise [29], and OASPL ∝ U_j^5, which is consistent with scattering at the surface trailing edge [3]. By applying the permeable treatment, the exponent of the noise levels with the jet velocity increases from n = 5.8 to n = 6.4 for the perforated plate and to n = 7.2 for the metal foam; the isolated jet has n = 7.9. These results are in qualitative agreement with those from Geyer and Sarradj [13] and confirm that, when flow-permeable treatments are applied to the surface, the scattering becomes less dominant with respect to other sources such as turbulence mixing.

The effect of the configuration geometry on the noise reduction that can be achieved using flow-permeable materials is investigated in the following. Firstly, the effect of the plate radial position is addressed by moving the plate in this direction; the corresponding spectra are shown in Fig. 14.
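Before turning to the geometry effects, the OASPL integration over 350 Hz to 20 kHz and the velocity-exponent fit used for Fig. 13 can be sketched as follows; the spectrum and the OASPL values are illustrative placeholders, not the measured levels.

```python
import numpy as np

# OASPL obtained by integrating a narrow-band SPL spectrum over 350 Hz - 20 kHz,
# as for Fig. 12, and a least-squares estimate of the velocity exponent n used
# for Fig. 13 (OASPL ~ 10*n*log10(U_j) + const). All numbers are placeholders.
def oaspl(f, SPL, f_lo=350.0, f_hi=20.0e3):
    band = (f >= f_lo) & (f <= f_hi)
    return 10.0 * np.log10(np.sum(10.0 ** (SPL[band] / 10.0)))

def velocity_exponent(U_j, levels):
    """Slope n of OASPL vs 10*log10(U_j): n ~ 8 for mixing noise, ~ 5 for edge scattering."""
    n, _ = np.polyfit(10.0 * np.log10(U_j), levels, 1)
    return n

f = np.arange(0.0, 25601.0, 25.0)                 # 25 Hz bins up to Nyquist
SPL = 80.0 - 20.0 * np.log10(1.0 + f / 500.0)     # placeholder narrow-band spectrum [dB]
print(f"OASPL (350 Hz - 20 kHz band) = {oaspl(f, SPL):.1f} dB")

U_j = np.array([0.3, 0.5, 0.8]) * 343.0           # the three tested velocities [m/s]
levels_installed = np.array([95.0, 107.0, 119.0])  # placeholder OASPL values [dB]
print(f"velocity exponent n = {velocity_exponent(U_j, levels_installed):.1f}")
```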
Since lower absolute levels are obtained in the spectra for the treated plate farther from the jet, it is interesting to plot the results in terms of the noise reduction with respect to each solid case. The curves in Fig. 15 are given in terms of this reduction in SPL for the respective plate height and for each permeable configuration, for M_a = 0.3. Higher jet velocities are not shown, since the turbulence-mixing noise becomes significant and it is not possible to properly assess the effect of the permeable materials. It can be seen that the curves are similar, with minor local deviations, indicating that the absolute noise reductions provided by the permeable materials are independent of the plate radial position, i.e. independent of the amplitude of the impinging pressure waves. It is likely that this property is also the reason why the SPL of the installed jets with flow-permeable trailing edges approaches the isolated-jet levels more closely for higher jet velocities.

The effect of the plate length is investigated for a surface with L = 8 D_j and h = 1.5 D_j, as shown in Fig. 16 for θ = 40°. The results show that, for this geometry, there is a significant noise increase at low frequencies (St < 0.35 for M_a = 0.3). Moreover, the benefits provided by the flow-permeable materials are lower than in the previous cases (6 dB decrease at St = 0.35 for M_a = 0.3 with both types of insert). At mid frequencies (0.35 < St < 0.7 for M_a = 0.3), the metal foam and perforated inserts provide similar noise reduction for this configuration. The main differences between the two occur in the range of the noise increase caused by the increment in plate length. This is likely the result of the different permeability of the surfaces at the trailing edge, where large-scale pressure waves impinge on the plate: the metal foam provides a better pressure balance between the upper and lower sides of the plate, thus better reducing the surface pressure fluctuations at low frequencies. On the other hand, it is likely that the noise at 0.35 < St < 0.7 is generated by surface fluctuations upstream of the flow-permeable region, which is the same for both cases. Similar trends occur for the other jet velocities.

This effect can be verified by analysing the influence of the flow-permeable insert length on the noise reduction, for a fixed plate length L = 6 D_j and height h = 1.5 D_j. Measurements are taken for inserts with length L_p = 1 D_j and compared to the ones previously shown (L_p = 3 D_j). Spectra are plotted in Fig. 17 for a polar angle θ = 40° and the three M_a. The results show that, for the metal foam, the smaller insert still provides a significant noise abatement, particularly for M_a = 0.3 (6 dB reduction at the peak). For M_a = 0.5, similar absolute noise reductions are obtained and, for M_a = 0.8, the curves are more similar since turbulence-mixing noise is significant. Longer flow-permeable sections therefore provide higher benefits, since a shorter solid section of the plate is subjected to strong surface pressure fluctuations. For the perforated structure, the small insert (L_p = 1 D_j) provides less noise reduction, of approximately 4 dB at St = 0.37 for M_a = 0.3. The difference in amplitude between the curves for the two insert lengths is also more significant at low frequencies, indicating that the additional solid length, for the cases with a shorter insert, generates noise in this frequency range.
This is a similar behaviour to that of increasing the overall plate length, as shown in Fig. 16. Nonetheless, it can be concluded that even small sections of permeable treatment are sufficient to achieve noise reduction. This is important, since these types of structures usually lead to performance degradation (loss of lift and increase in drag) [12,16].

It is shown that the solid extension of the plate affects the final spectral shape and amplitude, also shifting the frequency of the SPL peak. Therefore, it is also important to analyse the effect of changing the length of the porous insert while keeping the size of the solid section of the plate constant. For that purpose, the spectra of two cases are compared: L = 6 D_j with L_p = 1 D_j, and L = 8 D_j with L_p = 3 D_j. Both cases thus have a solid section of 5 D_j between the nozzle exit and the flow-permeable section. Results are shown in Fig. 18 for the two types of permeable material and the three M_a. The results are similar to those shown in Fig. 17. The case with the overall longer plate generates more noise at lower frequencies (St < 0.3 for M_a = 0.3) for both the metal foam and the perforated inserts; at St = 0.27, there is a 5 dB difference between the metal foam curves and 4.4 dB between the perforated ones. This is likely attributable to the difference in total plate length, with the noise being generated by the impingement of high-amplitude, low-frequency pressure waves on the flow-permeable region of the plate. On the other hand, the noise at mid frequencies does not change significantly between the two cases. It is therefore probable that the dominant source in this range is the same for both of them, and it is likely that this source is located at the solid-permeable junction in the plate.

It is speculated that the junction between the solid and flow-permeable surfaces has become the dominant source location for the metal foam case. The effect of the junction has been described in the literature as an additional geometric singularity, and thus as a new scattering region, as shown by Kisil and Ayton [30]. Scattering at the junction is then responsible for a noise increase at mid and high frequencies, also changing the directivity pattern of the overall configuration [30]. Moreover, beamforming results from Rubio-Carpio et al. [16] showed that, for frequencies where TBL-TE noise reduction is achieved with flow-permeable materials, the dominant source is placed at the solid-permeable junction [16]. Therefore, it is possible that there is an additional contribution from that region, particularly for the cases with the metal foam, due to its high permeability. The junction effect would thus be the cause of the different spectral shape, as well as of the SPL peak at a higher frequency, relative to the fully solid and perforated cases. The results previously shown for the metal foam case are in agreement with this hypothesis: for the reduced insert length, the junction is placed at x = 5 D_j (as opposed to x = 3 D_j in the baseline case), and the spectral peak shifts towards a lower frequency (Fig. 17). On the other hand, when the junction is placed at the same position and the porous extent is changed, there is simply an increase in amplitude, but the spectral peak frequency remains unchanged (Fig. 18).
This effect is likely not obtained with the perforated configuration, since the low permeability does not result in a strong impedance jump at the junction and, consequently, in scattering at that region. Further work is necessary to confirm these hypotheses.

Conclusions

An experimental study on the effect of flow-permeable materials on the noise produced by an installed jet is performed. The configuration comprises a single-stream subsonic jet and a nearby flat plate placed in the jet near field. Two types of flow-permeable structures are investigated: a metal foam and a perforated insert with straight holes normal to the plate surface. The metal foam has a higher porosity and permeability than the perforated structure, and its channels are also interconnected. Planar PIV measurements are carried out to characterize the jet velocity field. Based on the potential core length and spreading angle, it is concluded that the jet behaves as a turbulent jet at all tested velocities. Moreover, it is confirmed that there is no direct grazing of the jet on the plate, except for the longest surface tested. However, in this case the surface lies in a region of very low velocity compared to the potential core and is likely not affecting the noise generated by turbulence mixing.

Acoustic measurements show that installation effects are responsible for a strong low-frequency noise increase with respect to the isolated levels. This amplification is more significant at low jet velocity, where the dipole sources on the surface are more acoustically efficient than the quadrupole sources from turbulent mixing. The spectral shape and amplitude are shown to depend on the geometry of the configuration: longer surfaces produce more noise at low frequencies, whereas moving the plate towards the jet in the radial direction results in a noise increase, especially at mid frequencies. Significant noise reduction is achieved when the solid plate trailing edge is replaced by flow-permeable inserts, particularly in the low/mid frequency range, where scattering is the dominant mechanism. Comparing the two types of structures, the metal foam is more effective in reducing JIN, likely due to its higher permeability, which can mitigate the pressure imbalance between the upper and lower sides of the plate and thus reduce the noise generated by surface pressure fluctuations. For low jet velocities, a noise decrease of up to 10 dB is obtained at the spectral peak with the metal foam, but the installation noise is still visible. When the jet velocity is increased, the attenuation provided by the flow-permeable treatment brings the noise levels closer to the isolated case, and the trailing-edge source is no longer dominant with respect to the jet quadrupoles. It is worth mentioning that the highest noise levels for the investigated installed configurations occur at low frequencies (St < 0.3 for M_a = 0.3), particularly in the sideline direction (θ = 90°). For a full-scale aircraft, these frequencies may not be of particular significance. However, the flow-permeable trailing edges assessed in this work also provide noise reductions at mid and high frequencies, including reflection effects on the surface, which would be significant in a full-scale configuration.

The effect of the surface treatment is also assessed for different configuration geometries. By moving the plate away from the jet, flow-permeable materials provide similar absolute noise reduction as in the baseline case.
Conversely, by increasing the plate length, lower noise abatement is obtained with the flow-permeable treatments, particularly at low frequencies (St < 0.35 for M_a = 0.3), with the metal foam still providing higher benefits. On the other hand, the noise at mid frequencies (0.35 < St < 0.7 for M_a = 0.3) is similar for the two types of insert, indicating that it is generated by the impingement of pressure waves on the solid region of the plate, upstream of the flow-permeable treatment. For a fixed plate length, a shorter flow-permeable insert is shown to provide noise reductions with respect to the solid case, but to a lesser degree than the longer insert. The main differences occur at low frequencies, which indicates that the increased noise is due to the additional solid length compared to the case with the longer insert. The frequency of highest SPL also shifts towards lower frequencies. On the other hand, when the plate length is changed but the solid-permeable junction is kept at the same axial position, the flow-permeable materials behave differently. For the metal foam, there is an increase in amplitude but no significant change to the spectral peak frequency, whereas for the perforated insert there is a low-frequency noise increase with a change in the spectral peak. It is believed that this difference is caused by the high permeability of the metal foam, which produces a new singularity and thus a new scattering region at the solid-permeable junction.

These results show that a surface treatment with flow-permeable materials is a potentially promising mitigation solution for jet-installation noise. However, the mechanisms that provide such reductions are still unclear. Further work is required to investigate the phenomena occurring at the junction region and inside the flow-permeable structure, particularly focusing on the change of impedance, the pressure imbalance, and the effect of the permeability/resistivity of the flow-permeable structures, since it is possible to achieve substantial noise reduction with a perforated structure even with a low porosity.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
“The SoLoMo customer journey: a review and research agenda”

The purpose of this paper is to develop a better understanding of the impact of (So) social media, (Lo) local marketing, and (Mo) mobile applications (SoLoMo) on consumer behavior. The paper is based on a literature review of peer-reviewed articles, published books, trade publications, and doctorate dissertations. This paper examines the SoLoMo consumer, a concept that has not been widely discussed in the digital marketing literature. A thorough literature review of the digital customer journey reveals a paradox. On the one hand, there is a vast range of studies in the literature exploring the impact of social media and mobile devices on marketing and consumer behavior. On the other hand, little has been said about the integration of social media, mobile applications, and local marketing, and how it shapes the profile of the SoLoMo consumer. This paper suggests three areas for further research: (1) the examination of SoLoMo consumer behavior; (2) the exploration of the digital customer journey; and (3) the investigation of selected new technologies that can shape the future of marketing. The study contributes to the understanding of digital consumer behavior in a multichannel marketing environment. It also proposes a research agenda to explore the future of online consumer behavior in the digital multi-touchpoint market landscape.

Introduction

Marketing's role in the evolution of business is essential to identify and address how people decide during their customer journey (Chaffey & Ellis-Chadwick, 2012). As a well-developed scientific field, marketing is continuously adapting its methods and strategies to fully meet new consumer needs. This paper contributes to the existing literature on consumer behavior by focusing on the impact of (So) social media, (Lo) local marketing, and (Mo) mobile applications (SoLoMo) on the digital customer journey. The paper focuses on the SoLoMo customer journey, which has not been widely studied or mentioned in the literature (Chaney, 2015). For this reason, it aims to discuss fundamental concepts and contemporary digital marketing issues on a rather unexplored topic. Shields and Rangarajan (2013) argue that review papers by nature avoid reaching definite conclusions. These authors maintain that such studies serve to map the patterns in the current literature and to identify the different research tools for data collection and selection of participants in both qualitative and quantitative research studies. Therefore, the present exploratory study aims to provide a theoretical framework that will lead to new research studies furthering the understanding of SoLoMo consumer behavior.

Customers engage in a multi-platform journey which looks less like a funnel and more like a flight map (Lecinski, 2012). The new consumer starts the journey from online search engines. Other sources of influence, such as traditional media (TV, radio, print, outdoor) and in-store marketing in brick-and-mortar stores, play a supplementary role. The Ericsson Mobility Report (2017) showed an addition of more than 107 million new mobile subscriptions in Q1; according to the same report, the total now reaches 7.6 billion mobile subscriptions globally.
What is more impressive is the report's forecast that there will be approximately 1.5 billion smart devices with cellular connections by 2022, which equals the current population of China, or the combined population of the USA, Indonesia, Brazil, Pakistan, Nigeria, Bangladesh, and Russia. The Ericsson Mobility Report (2017) also forecasts that connected devices (smartphones, tablets, and phablets) will number 26 billion by 2020. The digital world will be based on mobility, as the number of mobile applications and the volume of mobile data traffic are rapidly increasing. The global digital agency "We are Social" argues in its Digital Statshot Report (Kemp, 2017) that mobile devices will push internet penetration beyond 51% of the world's population during mid to late 2017.

SoLoMo is an emerging marketing concept that can make use of modern digital marketing tools and explore the convergence of social media, local marketing applicability, and mobile connectivity. Modern consumers do not make a clear distinction between the digital and physical marketing landscapes, as long as both lead towards customer satisfaction (Chan & Yazdanifard, 2014). Little has been said about the integration of social media, mobile applications, and local marketing, and how it shapes the modern consumer. SoLoMo is a concept that promotes a new dimension in online consumer behavior. The next section offers a literature review of the SoLoMo customer journey and concludes by discussing the impact of new SoLoMo-friendly technologies on marketing. Mobile marketing, location-based services, e-mail marketing with opt-in mailing lists, affiliate marketing, online public relations, article syndication, advertorials, referrals, and backlinks create a new marketing landscape in which the brand needs to have a loud and consistent voice; otherwise, brands and consumers will never manage to meet. This paper identifies critical concepts in the literature by suggesting three areas for further research: (1) the examination of SoLoMo consumer behavior; (2) the exploration of the digital customer journey; and (3) the investigation of selected new technologies in today's marketing landscape. The following section provides an overview of the SoLoMo consumer.

The SoLoMo consumer behavior

Through a review of the relevant literature, this paper aims to better understand SoLoMo consumers. Particular focus should be given to the new generation of online users who have a natural inclination towards technology. This generation of students and professionals is defined as digital natives, who "spent their entire lives surrounded by and using computers, video games, digital music players, video cams, cell phones, and all the other toys and tools of the digital age" (Prensky, 2001, p. 1). Digital natives are both pioneers and early adopters of technological concepts (Papakonstantinidis, 2014). Digital natives will be the first to try new smartphones, and they will set the trends in technology and marketing. Digital natives were those who started downloading MP3-formatted songs from the web and then quickly moved to social entertainment streaming applications such as Spotify for music and YouTube for videos. Digital natives also have a natural inclination towards online games, which they see as a chance to increase their reputation among other online gamers.
Finally, digital natives prefer meeting their friends and followers online, as they seem to be more open and expressive when using social media than in their face-to-face interactions (Papakonstantinidis, 2014). For this new breed of consumers and online users, SoLoMo is not a marketing strategy; it is a way of living.

What makes the SoLoMo consumer different? The growth of the SoLoMo consumer has changed the dynamics of product marketing beyond recognition (Doyle, 2012). The fundamental means by which brands send and receive information, and the way that customers make purchasing decisions and find out about products or services, are now completely different. The main shift has been the change in the information flow dynamic between the brand and the customer. Before the advent of social marketing, there was a fairly distinct process that most brands went through when marketing a product or service. The first step in the process was to conduct market research about potential customers, their wants and needs, previous purchasing habits, and so forth. This then allowed the brand to segment its marketing strategy depending on the customers it wanted to talk to and its selected method of engagement. The brand would then design marketing collateral to achieve that aim, whether by TV, radio, print advertisement, or another channel. The key is that the brand was in control of the marketing process at all times. SoLoMo has changed that notion completely; the brand is no longer in control of the process, but is merely a participant in a wider array of information sharing and exchange that now determines how people decide to buy something (Tuten & Solomon, 2014).

There is exceptionally clear evidence that people use social engagement to seek recommendations and feedback on brands from people whom they trust: friends or other people in their networks whose opinions they trust or believe to be neutral. Therefore, people will ask their Facebook friends for recommendations on their purchases, or ask whether someone has had a good or bad experience at a particular restaurant, for example. Sometimes this is quite passive; someone might simply read that a friend has had a great night at a new restaurant and decide to check it out themselves. As a wider element of social engagement, websites like TripAdvisor or feedback systems such as those found on eBay have become important parts of the marketing mix. These systems are built around user-generated reviews of products and services, and consumers tend to place a good deal of trust in the information that they get from them. Therefore, no marketer can ignore this kind of social engagement and reflexive feedback in their plans. Fundamentally, the dynamic has changed so that consumers may now know more about products and services than the marketers themselves; they can compare and contrast multiple providers very easily, and with no input from the brands themselves.

Local marketing is the second element of the SoLoMo (social, local, mobile) marketing concept. Consumers are adopting new technologies and social media interfaces that allow them to behave in a different way than they did in the past. Nowadays, consumers have more power than before regarding pre- and post-purchase attitudes (Chaney, 2015). The new consumers do not waste time going through the traditional distribution channels, through which their level of control was extremely low. Now, they are experimenting with new channels through which their voice can be heard.
The key search engine companies have recognized this new era, and they are trying to adapt to it. Papakonstantinidis et al. (2016) argue that a satisfied SoLoMo consumer can be the most efficient, free-of-charge brand ambassador, while a dissatisfied one will write negative reviews online to harm the brand intentionally. The current and expected growth of connected devices in online social networks shapes a new marketing landscape. Nowadays, brands act as humans and consumers act as brands. Both brands and consumers engage in public discussions on the major social networking platforms such as Facebook, YouTube, Twitter, LinkedIn, Snapchat, and Instagram. Bolen (2015) argues that social media is becoming people's primary source of information, communication, and entertainment. As such, marketers have found a fertile environment full of opportunities to approach consumers in a more human way than mass media advertising allows.

SoLoMo consumers are becoming attached to their mobile devices. Mobile digital media time has already overtaken desktop and other forms of internet access (Kemp, 2017). Smartphone penetration has increased for two reasons: first, wireless networks have become faster and ubiquitous; second, mobile devices are nowadays more affordable. Mobile marketing can provide consumers with personalized information based on their location and the time of receipt (Papakonstantinidis et al., 2016). In other words, consumers are more attached to their phones than to their personal computers, providing marketers with new tools and opportunities to target their audiences with higher accuracy than ever before. Further studies of the convergence of social media expansion, location-based services, and mobile usage should explore the new customer journey in marketing. The next section discusses how the SoLoMo consumer affects the digital customer journey.

The digital customer journey

Advertising has existed since the early days of humanity: people need to influence other people, using any means they have. Until recently, brands relied heavily on traditional marketing tools such as television ads, print ads, brochures, posters, and radio ads to communicate with their target markets (Chan & Yazdanifard, 2014). Now, with the rapid development of the internet, brands communicate directly with their consumers, seeking immediate and accurate feedback. The desire of every business or brand is primarily to approach new customers, then progressively to build relationships with them, and finally to convert them into loyal customers and lead them to purchase. To that end, digital marketing has distinct differences from traditional marketing. In digital marketing, acquisition, conversion, and retention may be achieved in different manners (Chaffey & Ellis-Chadwick, 2012), depending on the product or service being promoted, the needs involved, or the target audience. There is no single ideal marketing plan in the digital world; there are always different solutions. We used to say that a marketing plan needs to master the 4Ps: Product, Place, Price, and Promotion. Today, in addition to what traditional marketing dictates, we need to know what the customer thinks and how he or she behaves before, during, and after the purchase. Given the variety of channels that can be used in the new marketing landscape, a consumer can be approached through a well-structured website, social networking sites, blogs, or mobile apps.
For a digital marketing campaign to be successful, it is crucial that online and offline marketing techniques are integrated correctly. Rolling out a digital marketing campaign can be challenging; thus, a selection of acquisition, conversion, and retention tools is essential (Chaffey & Ellis-Chadwick, 2012) to guide modern consumers through their digital and physical customer journey. Acquisition tools are used for starting the customer cycle, focusing on selecting the right target audience and on establishing the relationship between the customer and the product. Conversion tools, on the other hand, aim to persuade customers to act by proceeding to purchase the product (Chaffey & Ellis-Chadwick, 2012). Finally, after the customer-product relationship has started, the brand's primary goal is to keep its existing customers and turn them into loyal, returning customers. Retention tools are used so that the company's products will always be on the radar of the existing customers.

Today's customer journey is significantly affected by social media, local marketing, and mobile applications. As smartphones and mobile telecommunication companies offer constant and high-speed online access, 92% of teenagers answered that they go online daily, and 24% almost constantly (Pew Research, 2016). Much of teenagers' (aged 13-17) access is facilitated through mobile devices (smartphones, tablets), with Facebook (71%) being the most popular social media platform among them. According to Pew Research (2016), other than Facebook, the most popular social media platforms for teenagers are Instagram (52%), Snapchat (41%), Twitter (33%), Google+ (33%), Vine (24%), and Tumblr (14%). It is quite impressive that YouTube is not one of the teenagers' options. Nevertheless, 71% of them log into more than one social networking site.

The terms social networking sites and Web 2.0 are widely discussed in a range of industries, such as advertising, marketing, web development, and human resources. Both terms, however, are elusive, since they are continually adjusting to the new realities that online users shape. The digital customer journey includes a variety of marketing touchpoints where brands and consumers can meet and discuss. Nowadays, this journey involves not only the popular social networking platforms such as Facebook, Twitter, and LinkedIn, but also online journals (weblogs), community forums, collaborative platforms (wikis), and virtual gaming worlds. Nevertheless, as Murugesan (2007) argues, the main characteristic of social networks is that online users can generate content and promote it through the web by sharing links with online Web 2.0 communities. The main characteristic of a social networking site is the sense of community that traditional websites cannot develop. Many examples of social networks prove that today's organizations are gradually using them for their benefit. Social networking sites, blogs, wikis, and forums are low-cost advertising and communication platforms that further engagement with customers and users of the Internet (Miller & Lammas, 2010). The primary objective of social networks is to strengthen the social bonds among friends; as such, social networks became widely known as channels of interpersonal and group communication. Social networking sites have significantly affected the way Internet users communicate with friends, make new friends, get informed, and share links with the public (Safko & Brake, 2009).
To the traditional ways of communicating, such as face-to-face interaction or the phone conversation, new forms of communication have been added. Nowadays, the SoLoMo customer journey involves instant messaging services such as WhatsApp, Skype, and Viber. SoLoMo consumers expect to be able to reach each brand at any time they want (Papakonstantinidis et al., 2016). SoLoMo consumers treat the brand as a friend, who is always available and accessible. Younger generations consider e-mail and chat as primitive ways to communicate; Twitter posts, Facebook status updates, and Skype conversations are now the main elements of their communication process. The SoLoMo consumer is more receptive and open when a brand communicates within context (Chaney, 2015). For example, further research could explore to what extent brands can communicate with consumers by providing them with tips and experience points to help them increase their online gaming status. Digital customers are getting more familiar and comfortable with the Internet, and they expect brands to approach them and interact with them. A website is a portrait of a business; it is the starting point of everything. Customers may explore the website and learn everything they need about a brand. Through a website, a brand may also provide access to its social network accounts. In this way, the online profile of a business, product, or service can grow positively in the digital world and serve its purposes (Doyle, 2012). Social media is becoming the crossroads where people's information, communication, and entertainment intersect. Also, the number of mobile users who access social media to search for local offers is increasing considerably every day. Given the issues discussed in the present paper, the SoLoMo consumer, and how he or she experiences the digital customer journey, needs to be further explored. The next section of the current literature review highlights the significance of investigating some critical new technologies that will shape tomorrow's digital customer journey, where more touchpoints will be added.

New technologies

This section highlights essential technologies that have the potential to shape the future of marketing. The first technology that future research needs to explore is NFC (Near Field Communication) technology. NFC offers vast possibilities for brands to communicate with their consumers. Through wireless technology, NFC enables two devices that are close to each other to exchange bits of information (Dutot, 2015). NFC technology can assist consumers in connecting to their bank account and communicating this information to the retailer to make a payment (Pham & Ho, 2015). NFC technology can also allow marketers to identify users' personal preferences and shopping habits, in order to develop a faster and easier purchase experience during the customer journey. Consumers want to be able to communicate with firms in the most natural possible way. They usually forget the passwords of their different accounts, and sometimes they do not even remember that they have an account with a specific firm (Dutot, 2015). How many times have we tried to register on a specific website and received a message saying "This email is already being used; if you forgot your password, please click here"? Firms need to realize that consumers need secure but straightforward ways of access when they are buying products online. They need instant, intuitive connectivity, zero configuration, and quick essential access, and NFC technology could offer that.
Retailers could make use of NFC technology mainly through four categories of NFC applications (Dutot, 2015). The first one is "Touch & Go": to make use of this application, consumers need to bring the reading device close to the access code. Another category of NFC application is "Touch & Confirm", which includes any mobile payment that the consumer confirms simply by entering a password to accept the transaction. The third category is called "Touch & Connect" and allows data transfer between two mobile devices (Levesque et al., 2015); consumers exchange music, images, and other bits of information. The fourth category is "Touch & Explore", through which consumers explore different services offered by the same retailer without even typing a URL into their browser. In other words, retailers could directly inform local-marketing-oriented consumers by sending them special offers, discounts, and other types of information through their digital touchpoints (e.g., smart posters, smart billboards, and augmented reality layers).

The second technology that requires further exploration is the internet of things (IoT). Wearable technologies offer retailers a fantastic opportunity to understand their SoLoMo consumers. Sensors and display technologies are being embedded into clothing, and they could identify in real time the emotional state of the consumer. These sensors could indicate different types of mood and levels of stress by displaying light or a different color, to inform both the wearer and those connected with him or her. The new technology incorporated into wearable items could help retailers capture data such as heart rate and then try to interpret it to their benefit. For example, if consumers are excited about a brand, their heart rate will increase. If retailers know the exact moment when this takes place, they could reinforce that feeling with an extra offer, for example, or suggest cross-sales through the app. Firms need to make consumers feel comfortable giving their data on the spot (e.g., for emotion detection). The amount of data gathered from wearable items could open an entirely new field for consumer behavior analysis (Park & Skoric, 2015). Further research can explore brands' intention to use the personal data generated from wearable items, such as people's health condition, heart rate, or blood pressure levels. The fact that firms have an enormous amount of data does not mean that they can make full use of it. Modern marketing experts need to perform significant data analysis and work with algorithms (Kinnunen et al., 2016). With this new technology, the amount of data is going to be enormous, and firms need to be able to interpret it. The natural extension of these devices is to move from wearable to implantable. In the next years, we will move from glasses to lenses, and from clothes that show our mood to e-ink tattoos that light up and express our feelings in the current situation. While marketers are leveraging location-based marketing, the internet of things is becoming more popular and is growing in today's marketplace (Bruno, 2015). Many firms are launching new accessories that either work as stand-alone products or need to be connected to the consumer's smartphone. Either way, it is becoming a huge trend.
In general, wearable items are quite expensive breakthrough products that use incredibly sophisticated technology (Kinnunen et al., 2016). Firms need to differentiate, and retailers need to make use of this new trend. Before analyzing the benefits and how retailers could incorporate wearable technology into their marketing strategies, let us identify the new trends in the wearable market. The constantly changing environment, especially in the wearable market, is pushing for changes in the social media platforms. The current platforms are not yet optimized to receive information from wearable items. The main reason is that most social media are designed for interaction on screen-based devices, which is not the case for most of the new wearable items. The new devices have limited screens, and in some cases none at all, which means that social media platforms will eventually have to adapt in order to facilitate meaningful interactions on wearable technology. Wearable items will nevertheless be able to communicate with social network platforms automatically and share information related to their users (Levesque et al., 2015). For example, if a smartwatch detects that a user ran 5 km and burned 500 calories in a specific location, the wearable device can compare this information with that of other users in the community and share the comparative statistics. Another underused feature is voice recognition. In wearable items where the screen is tiny or absent altogether, voice recognition is going to play a significant role. As a result, a new era of social media is going to arise, and new social media platforms will be developed just for voice. If both marketers and retailers could understand the consumer's normal emotional state and see in real time when it moves out of the normal range, they would have a fantastic opportunity to intervene with some sort of experience that the person might be receptive to. In other words, retailers could provide some real-time help or feedback in the moment. Retailers could use wearable technology to provide consumers with convenience marketing, making the life of the consumer as comfortable as possible.

Generalization of the main statements

This paper aims to provide the reader with a systematic literature review of today's online consumer behavior in the digital multi-touchpoint market landscape. It contributes to the understanding of digital consumer behavior and highlights the concepts that need to be explored in future research. The purpose of this paper is to review the literature on the digital customer journey. For this reason, this section aims to generalize the paper's main statements by identifying three areas of the marketing literature that need further exploration. First is the examination of the behavior of the SoLoMo consumer, who demonstrates a unique mindset in consumption. Second is the exploration of the digital customer journey, which is characterized by a plethora of touchpoints in both the digital and the physical environment. Third, further investigation should explore certain new technologies that will shape tomorrow's marketing landscape. The SoLoMo customer is a term that has not been widely discussed in the literature. Customers use social media for communication, information, and entertainment. However, as the literature argues, consumers do not make a clear distinction between the online and the offline world.
Marketing is becoming more and more "phygital", both physical and digital. For this reason, further research needs to explore how consumers decide when they are standing at the intersection between the two sides of marketing. At the same time, local marketing applications that use NFC technologies and mobile wearable devices widen the possibilities of reaching consumers at multiple touchpoints during their customer journey. Today's digital customer journey seems endless and continuous. Consumers are fascinated with new technologies, such as wearable and virtual reality devices, that will allow them to live the brand in an alternative marketing environment. Also, critical studies could explore the brand-consumer relationship in the new customer journey to develop a better understanding of the issues of privacy and confidentiality. The paper is based on a literature review of key peer-reviewed articles, published books, trade publications, and one doctoral dissertation to examine an under-discussed term in digital marketing: the SoLoMo consumer. It aims to highlight alternative concepts in the literature and to point out the most suitable research methods for data collection and selection of subjects. This review paper avoids reaching definite conclusions; rather, it suggests an agenda for further research in consumer behavior. Conclusion Technology is creating a new intelligent ecosystem that affects the way people communicate and do business. However, what is the role of business in this new environment, where technology is moving so fast that many firms are not able to keep up? These firms should aim to understand the SoLoMo customer fully. In other words, academic scholars and professional researchers need to explore the new consumer needs. The new system that is arising is no doubt flooded with smart technology and big data. With the use of the new technologies, products will be able to sell themselves because, in the minds of consumers, they are positioned as helpful and needed. The business environment is changing, and firms need to be prepared to take advantage of the new opportunities created in a new ecosystem that accelerates creativity and innovation and, of course, focuses a great deal on entrepreneurial spirit (Ankeny, 2013). The new idea is not only to create applications to which the user needs to give input every time, but to create applications that will learn from experience (Chaney, 2015). In this way, they will be able to improve with every single interaction, and they will no doubt be able to assist the consumer in making simple and more complex purchase decisions. Further research can explore the applications of cognitive computing, which is characterized by machine learning and self-learning systems. Modern marketers will benefit from the use of data mining and social habit pattern recognition applications. One of the main issues that consumers are facing today is that they have at their disposal an infinite amount of information, but unfortunately they have limited time to access it. Most of the devices that we are using today rely on humans to provide the initial information. As Oh et al. (2014) argue, new types of innovation will shortly help consumers to manage massive amounts of consumer data. Since more and more devices are being connected to the Internet, more and more information will be shared among users. That will inevitably lead to sharing of knowledge and experience.
Firms are already moving toward such extensions, where the sharing of the shopping experience holds a central role and more consumer data are available for brands. The new environment also demands wearable technologies (Bruno, 2015). Until now, most manufacturers have focused on wearable technology related to health and fitness. However, wearable technology can provide data based on consumers' social habits and emotions. The new environment is without question characterized by mobile development. More and more new features are being incorporated into smartphones. This trend will not slow down, since the mobile market has not settled yet. Following this trend, location-based mobile commerce has the potential to affect industries in various but still unexplored ways in the future. At the same time, the new environment could incorporate technologies that change the traditional way of shopping, and that will make firms rethink their online and offline strategies. Of course, in this turbulent environment consumers crave recognition of their individuality, even as they plead for respect for their private lives. This contradictory stance is quite intriguing for both marketing and communication scholars. Consumers are engaging with platforms where they are becoming part of the business model (McKinsey, 2013). This new type of trend in the business environment has mostly to do with rethinking how different types of services are managed and consumed (Ericson et al., 2014). New types of technologies give firms the opportunity to create a market where people with the same goals can be connected. The ever-present technology can challenge marketers in a plethora of ways that are still unexplored. The new digital customer journey may no longer be about the consumers' physical location, but more about their emotional state. Wearable technologies offer retailers an opportunity to truly understand their consumers and go beyond social media and other forms of digital marketing. Through a systematic, but not exhaustive, literature review, this paper explains the implications of integrating social media, mobile applications, and local marketing into marketing practice. The review calls for further investigation of the SoLoMo consumer, the SoLoMo customer journey, and new marketing technologies such as NFC and the Internet of Things. The suggested research agenda could be the basis of future exploration in consumer behavior.
2018-12-15T06:05:08.659Z
2017-12-21T00:00:00.000
{ "year": 2017, "sha1": "a294c26db7efa8975283081ff4642c243a9da4f3", "oa_license": "CCBY", "oa_url": "https://businessperspectives.org/images/pdf/applications/publishing/templates/article/assets/9796/IM_2017_04_Papakonstantinidis.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "21663498b6d2eac1094943fcb58c8720ef59c662", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
95695403
pes2o/s2orc
v3-fos-license
Exploration of oxide-based diluted magnetic semiconductors toward transparent spintronics A review is given for the recent progress of research in the field of oxide-based diluted magnetic semiconductor (DMS), which was triggered by combinatorial discovery of transparent ferromagnet. The possible advantages of oxide semiconductor as a host of DMS are described in comparison with conventional compound semiconductors. Limits and problems for identifying novel ferromagnetic DMS are described in view of recent reports in this field. Several characterization techniques are proposed in order to eliminate unidentified ferromagnetism of oxide-based DMS (UFO). Perspectives and possible devices are also given. Introduction Diluted magnetic semiconductor (DMS) is expected to play an important role in interdisciplinary materials science and future spintronics because charge and spin degrees of freedom are accommodated into single matter and their interplay is expected to explore novel physics and new devices. Among them, Mn-doped II-VI [1] and III-V [2] compound semiconductors have been extensively studied. The former include a variety of compounds consisting of various combinations of II-group cations (Zn, Cd, and Hg) and VI-group anions (S, Se, and Te), some of which have been applied to magneto-optical devices [3]. On the other hand, the latter can be ferromagnetic materials, where one can control the ferromagnetic properties with electric field or light and one can inject spin-polarized carriers into heteroepitaxial semiconductor device [4][5][6]. However, the ferromagnetic Curie temperature T C has been much lower than room temperature, e.g. T C ~110 K in (Ga,Mn)As, hence new DMS having T C beyond room temperature is desired for future devices. Generally, oxide semiconductors have wide bandgap, i.e. transparent for visible light, and can be doped heavily with n-type carrier. This feature serves an important role as transparent conductor that is used for various applications [7]. From the viewpoint of DMS, this feature can be promising for strong ferromagnetic exchange coupling between localized spins due to carrier induced ferromagnetism such as Ruderman-Kittel-Kasuya-Yosida interaction and double exchange interaction when localized spin is introduced in the oxide semiconductor. This situation prompted us to make oxide-based DMS based on a representative oxide semiconductor ZnO [8]. However, our intensive research on combinatorial thin film preparation revealed that none of transition metal (TM) doped ZnO was ferromagnetic down to ~3 K [9,10], whereas band calculation studies suggested the possible ferromagnetism in case of p-type ZnO host [11][12][13]. On the other hand, we have found that Co-doped TiO 2 is ferromagnetic for both anatase and rutile phases above room temperature [14,15]. Our efforts to explore transparent ferromagnet triggered intensive research in this field by many other groups employing traditional, i.e. not combinatorial, thin film growth techniques. Some of them reveal important characteristics of this class of materials, whereas some claim ferromagnetism through rather crude characterization of their specimens. Here, we overview a short history of oxide-based DMS to reexamine various 3 characterization techniques which have been frequently exploited to claim ferromagnetic oxide-based DMS and to propose much better analysis to make the claims clear. DMSs based on various oxide semiconductors There are many non-oxide semiconductors. 
Compared to them, the advantages of oxide semiconductors are: (1) wide bandgap suited for applications with short-wavelength light, (2) transparency and dyeability with pigments, (3) high n-type carrier concentration, (4) capability to be grown at low temperature even on plastic substrates, (5) ecological safety and durability, (6) low cost, etc. In addition, the large electronegativity of oxygen is expected to produce strong p-d exchange coupling between band carriers and localized spins [16]. Such advantages make oxide semiconductors attractive. Actually, many studies on oxide-based DMS have been reported, as summarized in Table 1, where most of the studies employed ZnO and TiO 2 as host semiconductors. b. TiO 2 The anatase and rutile phases of Co-doped TiO 2 were reported to be ferromagnetic as the first trial of these compounds [14,15]. Several subsequent reports agree with the results [28][29][30][31], whereas other studies report that the precipitation of Co metal is the origin of the ferromagnetic signal [32][33][34]. There have been few reports on the definite value of T C because T C is too high to be measured by conventional tools such as magnetometers employing superconducting quantum interference devices (SQUIDs). Doping with TM elements other than Co has scarcely been reported. c. SnO 2 Mn-doped SnO 2 shows a large magnetoresistance, and the magnetization behavior is paramagnetic down to 5 K [35]. There have been very few reports on this compound so far. Experimental evidence for ferromagnetism in oxide-based DMS a. Extrinsic effects There has been controversy over the magnetic properties of oxide-based DMS, as stated above. The most probable reason is insufficient characterization of the samples. Deducing magnetic properties only from magnetization measurements, without careful examination of possible extrinsic effects such as ferromagnetic precipitates and impurity phases, often misleads us into creating unidentified ferromagnetic oxides (UFO). In order to evaluate the magnetism correctly, various characterization techniques have to be employed. Table 2 shows a list of characterization techniques to eliminate UFO. X-ray diffraction measurement is indispensable for detecting impurity phases, but the sensitivity may not be good enough to identify a small amount of precipitation in the sample. Such precipitation can be detected with other techniques. Scanning electron microscopy and reflection high-energy electron diffraction can examine precipitation at the sample surface, and transmission electron microscopy can even identify precipitation within the sample on the nanometer scale. Electron probe microanalysis and secondary ion mass spectrometry examine the uniformity of the dopant distribution along the lateral and perpendicular directions of the sample, respectively. Most of the papers in Table 1 employed only some of these techniques. b. Magnetic properties Ferromagnetic DMS has various properties, some of which are unique for identification of the ferromagnetism in oxide-based DMS; Table 3 lists such properties. Magneto-optical spectroscopy probes the magneto-optical signal as a function of photon energy. In particular, magnetic circular dichroism (MCD) spectroscopy is useful for thin film studies because the effect of the substrate on the spectrum is negligible, in contrast to magnetization measurements [36]. The MCD signal is generally enhanced at the absorption edge of the host semiconductor of the DMS [37][38][39], as a result of carrier-mediated exchange interaction between localized spins.
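As a rough illustration of the kind of screening implied by the characterization discussion above, the sketch below checks whether a measured field sweep looks purely linear and non-hysteretic (paramagnetic-like) or shows remanence and curvature (a possible ferromagnetic, or spurious, contribution). It is a simplified, hypothetical analysis written for this review; the function names and tolerance are assumptions, not an established protocol.

```python
import numpy as np

def remanence(h, m):
    """Moment at H = 0 on one monotonic field sweep (one branch of the loop)."""
    order = np.argsort(h)
    return np.interp(0.0, np.asarray(h)[order], np.asarray(m)[order])

def magnetic_signature(h_up, m_up, h_down, m_down, tol=0.02):
    """h_up/m_up: increasing-field branch; h_down/m_down: decreasing-field branch.
    Returns a crude verdict on whether the loop looks purely paramagnetic."""
    h = np.concatenate([h_up, h_down]).astype(float)
    m = np.concatenate([m_up, m_down]).astype(float)
    slope = np.dot(h, m) / np.dot(h, h)                 # linear fit through the origin
    nonlinearity = np.max(np.abs(m - slope * h)) / np.max(np.abs(m))
    rem = 0.5 * (abs(remanence(h_up, m_up)) + abs(remanence(h_down, m_down)))
    rem_frac = rem / np.max(np.abs(m))
    if nonlinearity < tol and rem_frac < tol:
        return "linear, non-hysteretic: consistent with paramagnetism"
    return "nonlinear and/or remanent: ferromagnetic contribution (or UFO) suspected"
```

The same field-linearity criterion applies to MCD(H) data, which is the diagnostic used in the Co-doped ZnO example discussed next.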
Figure 1(a) shows the absorption spectrum at 300 K and MCD spectrum at 5 K for Co-doped ZnO. The MCD signal appears at the bandgap (3.4 eV) and d-d transition of Co 2+ ion (~2 eV), representing proper substitution of Co ion with Zn site. The MCD signal is proportional to applied magnetic field indicating paramagnetic behavior. One of the groups reported ferromagnetic Co-doped ZnO based on solely magnetization measurement [19]. However, MCD spectrum of their sample was large for wide range of photon energy without change in sign. Such behavior is similar to ferromagnetic metal. This feature concludes that the ferromagnetism in their Co-doped ZnO is most likely caused by ferromagnetic precipitation [40]. Hall effect is not seen [41]. The anomalous Hall effect or the hysteretic magnetoresistance seems to be an evidence of ferromagnetism, whereas the appearance of anomalous Hall effect or hysteretic magnetoresistance has not been reported for Co-doped TiO 2 . The tunneling measurement such as tunneling magnetoresistance in ferromagnetic tunneling junction [42] and differential tunneling current spectroscopy of tunneling junction with superconducting counter-electrode [43] can be used to evaluate spin polarization of the sample that has to be different from the possible ferromagnetic precipitation. It is noted that the above magnetic properties are not necessarily required because those are mainly deduced from the studies of typical compound semiconductors having zinc blend or wurtzite crystal structures, where the physics has been understood deeply. The oxide-based DMSs have various crystal structures, hence they have various energy band structures. Therefore, it might not be strange that the magnetic properties of oxide-based DMS are different from those of conventional DMS. Conclusion and remarks The research field of oxide-based DMS is rapidly growing up and many kinds of
2019-04-05T03:33:02.777Z
2003-05-19T00:00:00.000
{ "year": 2003, "sha1": "9c0e8f221c3da71c67ad6c2b78ff747c68bd2f79", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0305435", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9c0e8f221c3da71c67ad6c2b78ff747c68bd2f79", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
2393919
pes2o/s2orc
v3-fos-license
IMAAAGINE: a webserver for searching hypothetical 3D amino acid side chain arrangements in the Protein Data Bank We describe a server that allows the interrogation of the Protein Data Bank for hypothetical 3D side chain patterns that are not limited to known patterns from existing 3D structures. A minimal side chain description allows a variety of side chain orientations to exist within the pattern, and generic side chain types such as acid, base and hydroxyl-containing can be additionally deployed in the search query. Moreover, only a subset of distances between the side chains need be specified. We illustrate these capabilities in case studies involving arginine stacks, serine-acid group arrangements and multiple catalytic triad-like configurations. The IMAAAGINE server can be accessed at http://mfrlab.org/grafss/imaaagine/. INTRODUCTION A number of tools now exist to search the Protein Data Bank (PDB) (1) for similar patterns to existing protein structures both at the fold level e.g. (2) and at the level of clusters of amino acids (3)(4)(5)(6)(7)(8)(9). These methods use query patterns that are derived from atomic structures solved by crystallography and nuclear magnetic resonance spectroscopy. However, in contrast to these searching methods, in this article, we describe a server that is designed to allow the interrogation of the PDB for hypothetical side chain patterns that may or may not occur in reality. This method is therefore not limited to existing coordinate sets and does not require pre-constructed atomic models. Crucially, a reduced structural representation is built into the program to facilitate more fluid openended searching. The program IMAAAGINE presents an easy-to-use interface that permits the specification of queries consisting of between three and eight residues, with the necessity of defining only a subset of the possible distances between them. The Ullmann subgraph isomorphism algorithm (10) is then used to search the PDB for matching structures. The use of the minimal side chain description provides a simple but effective mechanism for specifying generic queries and thus for enhancing the recall of a search. Generic side chains such as acid, base and so forth can also be deployed in the search query. As illustrated in the case studies section, these capabilities enable the user to use his or her imagination to create novel patterns and then to discover whether such side chain arrangements exist within the database of known structures or whether an 'unusual' side chain arrangement is actually unprecedented. PROGRAMS AND METHODS The underlying concept behind the search methodology for IMAAAGINE is based on that used in SPRITE and ASSAM (8), but with major differences designed to allow much more freedom in the search process. Briefly, in all three programs, the protein structure is represented as a graph with the nodes representing individual amino acid side chains and the edges representing the inter-node geometric relationships (in terms of distances in 3D space). In ASSAM and SPRITE, each node consisted of two pseudo-atoms (representing the Start and End of each side chain, with a Midpoint position also available) that were used to generate a vector, and each such vector corresponded to one of the nodes in a graph (8). Up to five distances between pairs of vectors representing each pair of side chains therefore allowed definition of the relative angular rotations between the side chains. 
We also showed that the ASSAM and SPRITE representation, which was based on side chain positions rather than main chain positions, had distinct advantages in detecting side chain-side chain motifs over main chain-based representations that did not (8). In IMAAAGINE, however, a single Key pseudo-atom is used to represent the functional part of each side chain. The position of this pseudo-atom is designed to emphasize the most important functional part of the side chain, and a diagram of the Key positions is shown in Figure 1. The Key position is intended to focus on the most characteristic part of that particular kind of amino acid side chain. Thus, for the basic residues Lys, Arg and His, we chose pseudo-atoms centred on their regions of positive charge, namely, the ends of the Lys and Arg side chains and at the centre of the imidazole moiety of a His. For acid groups, we chose to place the pseudo-atom at the centre of negative charge on the end of the carboxyl group and for amides at the equivalent position on the amide. For serine, we chose the OG atom where the hydroxyl is situated, for cysteine the SG atom and for threonine the average of OG1 and CG2. These choices enable us also to define mutually compatible generic basic, acidic, amide, charged and hydrophilic groups, which can be equivalenced to one another within the search tolerances. For hydrophobic residues, we chose pseudo-atoms in the centres of their hydrophobic part, once again allowing generic hydrophobic or aromatic residue types. This means that only the distances between single points on each side chain are specified, thereby allowing significant degree of rotational freedom between side chains. Moreover, so long as every side chain has at least one specified distance to at least one other, it is not necessary to define all or even most of the inter-residue distances, as the distances that are not defined become 'wildcard' distances that are able to adopt any value. This ability to allow the majority of inter-residue distances to be undefined permits great freedom to the search procedure as illustrated in the case study examples. ASSAM (8) on the other hand had a more intricate representation, which reflected the angular as well as distance orientation of residues, and all distances had to be defined. To facilitate the use of partially defined distance queries, we used the Ullmann subgraph isomorphism algorithm (10) in IMAAAGINE rather than the Bron and Kerbosch maximal common subgraph algorithm (11) used in the ASSAM webserver (8). As will be illustrated in the 'Case Studies' section, IMAAAGINE allows great flexibility in searching, which is enhanced by the use of a search tolerance on those distances that are specified. In addition, we believe that the interface is sufficiently intuitive and clear to be useable by non-structural biologists with an interest in structural information. IMAAAGINE: INPUT INTERFACE The IMAAAGINE query interface offers the user 11 initial options for initiating the design of a query pattern. These options range from a three-residue pattern to an eight-residue pattern in which some or all possible inter-residue distances can, if wished, be defined ( Figure 2A). The 4-8 residue searches include an extra option that simplifies the design of a query where one residue or residue type is surrounded by other residues or residue types, and the number of definable distances is reduced (Figure 3). 
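To illustrate the minimal representation just described, the following sketch computes a single Key pseudo-atom per side chain and tests one candidate set of residues against a query in which some distances are wildcards. The atom selections and function names are simplifications assumed for illustration; the actual server performs the matching with the Ullmann subgraph isomorphism algorithm over whole structures rather than checking one ordered candidate set as done here.

```python
import numpy as np

# Illustrative Key pseudo-atom definitions: atoms whose centroid approximates
# the functional part of each side chain (assumed, simplified from the text).
KEY_ATOMS = {
    "SER": ["OG"],
    "CYS": ["SG"],
    "THR": ["OG1", "CG2"],
    "ASP": ["OD1", "OD2"],
    "GLU": ["OE1", "OE2"],
    "LYS": ["NZ"],
    "ARG": ["NH1", "NH2", "CZ"],
    "HIS": ["CG", "ND1", "CD2", "CE1", "NE2"],  # imidazole centre
}

def key_position(res_name, atom_coords):
    """Centroid of the Key atoms; atom_coords maps atom name -> xyz coordinates."""
    pts = [np.asarray(atom_coords[a]) for a in KEY_ATOMS[res_name] if a in atom_coords]
    return np.mean(pts, axis=0)

def matches_query(keys, query, tol=1.5):
    """keys: list of Key positions for candidate residues, in query order.
    query: dict {(i, j): distance or None}; None means a wildcard distance."""
    for (i, j), target in query.items():
        if target is None:
            continue                      # undefined distance: anything goes
        d = np.linalg.norm(keys[i] - keys[j])
        if abs(d - target) > tol:
            return False
    return True
```

A query such as {(0, 1): 3.0, (0, 2): 3.0, (1, 2): None} would ask for two hydrogen-bond-length contacts and leave the third distance unconstrained, mirroring the wildcard behaviour described above.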
Once the number of residues for a query has been selected, the user can then proceed to design the arrangement pattern for amino acid side chains for searching against the PDB. The current search database is a manually curated non-redundant version of the PDB consisting of 66 575 structures that have been converted into the Key pseudo-atom representations. The search database reported in this article was generated on 6 February 2013 and will be periodically updated with newly available structures on a monthly basis. The IMAAAGINE interface enables a user to envision a hypothetical arrangement of side chains by specifying either the residue type or a generic residue type for each amino acid side chain in the pattern using dropdown menus ( Figure 2A). In addition to the standard amino acids, a number of generic amino acid types are available, facilitated by the use of the single Key atom description. They are as follows: aromatic (Tyr, Phe or Trp); basic (Lys, Arg or His); basic excluding His; acidic (Glu or Asp); amide (Gln or Asn); small hydroxyl (Ser or Thr); medium hydrophobic (Leu, Ile or Val); and hydrophobic (Leu, Ile, Val, Ala, Pro, Met, Phe, Trp or Tyr). Once the residue or residue types have been specified, the user can then input the distances that define the arrangement. Although the user can input any value into the query field, the interface also allows the possibility of using pre-set distances for interactions such as hydrogen bonds (pre-set at 3 Å ), Van der Waals contact (pre-set at 4.5 Å ) and disulfide bonds (pre-set at 1 Å ) by entering the letters H, V or S, respectively, into the search box. In cases when a search question arises that requires a less tightly defined search, the user can leave the search box empty, and this will result in that particular distance being able to take any value in the retrieved structures. Those distances that are defined have a default search tolerance of 1.5 Å applied to them, which gives flexibility to the search whilst allowing for the positioning of the Key atoms in the side chain. However, users have the option of lowering or increasing this value from the default via the form field provided in the search interface. Lower tolerances can allow a more precise definition of a query, whereas larger ones may be appropriate where larger side chains or longer inter-residue distances are involved. The principle behind IMAAAGINE is to allow the creation and investigation of 'broad brush' queries of the kind 'give me all circular arrangements of six alternating serine and carboxylic acid residues' (see Case Study 1 and Figure 2) or 'show me all instances of an arginine guanidinyl group surrounded by three other guanidinyl groups' (see Case Study 2 and Figure 3). For such queries, the relative positions of the groups are important, but their precise angular orientations are not. It is therefore advantageous-through the use of appropriate tolerances-to allow for this and also for the fact that some residues may be able to undergo rotation or motion in different structures of the same protein. The time taken to execute an IMAAAGINE search varies and depends on the input specified, the size of the query and the server load at the time the query is submitted. A search carried out on a light server load for a six residue test example took 7 min to complete. IMAAAGINE: OUTPUT AND VISUALIZATION INTERFACE Examples of the outputs from IMAAAGINE runs are shown in Figures 2 and 3 and are discussed in more detail in the Case Studies. 
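The output interface described next ranks hits by an RMSD computed over only those distances that were actually defined in the query. A minimal sketch of such a score, reusing the conventions of the previous snippet and again purely illustrative rather than the server's actual code, could look as follows (the PRESET mapping mirrors the H, V and S shortcuts mentioned above).

```python
import numpy as np

# Shorthand distances from the input interface: hydrogen bond, van der Waals
# contact and disulfide bond (values as stated in the text above).
PRESET = {"H": 3.0, "V": 4.5, "S": 1.0}

def distance_rmsd(keys, query):
    """RMSD between the defined query distances and the corresponding Key-Key
    distances of one matched residue set; wildcard (None) distances are skipped."""
    diffs = []
    for (i, j), target in query.items():
        if target is None:
            continue
        target = PRESET.get(target, target)      # allow 'H', 'V', 'S' shortcuts
        diffs.append(np.linalg.norm(keys[i] - keys[j]) - float(target))
    return float(np.sqrt(np.mean(np.square(diffs)))) if diffs else 0.0

def rank_hits(hits, query):
    """hits: iterable of (pdb_id, keys); return them sorted by ascending RMSD,
    mimicking the ordering of the IMAAAGINE results list."""
    return sorted(hits, key=lambda hit: distance_rmsd(hit[1], query))
```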
In essence, there is a summary of the input pattern followed by a list of hits detected in the PDB. A summary of the results is presented to provide information such as the total number of hits that have root mean square deviation (RMSD) values <2 Å and the number of hits where the residues occur only in the same chain. The results presented have been screened to filter out repeat occurrences of the same hits in different sequence order that are present in the raw output as a result of the program's search approach, which is non-sequential. Because the search is subject to a tolerance (by default of 1.5 Å ), hits may match the pattern to a greater or lesser extent, and the goodness of fit is expressed as an RMSD value between the distances defined in the pattern and the equivalent distances found in the matched structure; the output results are hence sorted in ascending order of RMSD so that the earlier hits in the list will be better matches to the query pattern than will the later ones. Each hit is listed by PDB identity code with a link to that entry in the RCSB PDB (1), the name of the protein, the RMSD, the resolution at which the structure was determined, the residues identified as matches, the shortest distance from a hit residue to the nearest nonwater hetero atom and what those hetero atoms are; finally, there is a link that allows the user to view the hits in a Jmol (http://www.jmol.org/) viewer window. The structure resolution information is provided owing to the fact that higher resolution structures are likely to be better defined, and this allows users to restrict their analyses to those structures only. Additionally, users can opt to view a list of hits that are entirely within the same chain of the PDB structure. We have demonstrated how both these features are applied during the analysis of IMAAAGINE search results in the 'Case Studies' section. CASE STUDIES A circular serine-carboxyl triplet As previously mentioned, IMAAAGINE searches can be targeted at partially defined curiosity-driven queries, thus addressing the paucity of computational tools that are able to carry out such searches. To test the capacity of IMAAAGINE to carry out a search for such a hypothetical arrangement, we queried the PDB for a six-residue arrangement that consists of three serine residues alternating with three acidic residues in a circular arrangement where each serine is separated by a 3 Å distance from at least two other acidic residues in the query pattern, and vice versa. All the other distances were left as wildcard values (Figure 2A). Such a query will in effect identify circular arrangements of S-[D/E]-S-[D/E]-S-[D/E] that satisfy the pre-set distances of 3 Å plus or minus the search tolerance. Only distances between potential neighbouring residues are defined, whereas the others have been left blank and may therefore adopt any value. This means that as long as those short range distances are satisfied (within the pre-set tolerance of ±1.5 Å , or by a userdefined tolerance), the actual arrangement retrieved need not be strictly circular but could be elliptical or could be some more complex 3D arrangement of serines and acidic groups. This search returned five hits, of which four were for SDSDSD ( Figure 2B), whereas one was found for a mixture of Ds and Es. These results were then visualized using the Jmol plug-in ( Figure 2C). 
The closest hit, although the contacts between the serine OG and the carboxyl oxygens are all >3 Å, is the only example of an SESESD fitting the query and is found clustered on one chain in a sodium/calcium ion exchanger [PDBID: 3V5S (12)] where the three acidic groups and two serine hydroxyls are all in contact with a Cd 2+ ion. This is one of a cluster of metal-binding sites that appear to be important in the function of this protein (12). The second hit was found to be an SDSDSD arrangement that is repeated in three different chains of a trimeric phage lyase structure [PDBID: 2X3H, (13)]. These SD pairs appear to be interfacing residues between the three identical subunits (Figure 2D). Because this six-residue tertiary motif is formed by the trimer assembly, such a motif would most likely be undetectable at the sequence level. The same is true of another trimeric hit in a metal-binding phage baseplate assembly protein [PDBID: 3AQJ (14)] where the aspartates coordinate a Ca 2+ ion. This example demonstrates that the systematic design of queries, followed by computational screening of IMAAAGINE outputs, has the potential to yield novel amino acid tertiary motifs that are not detectable via currently available sequence database searching methods. Four-arginine cluster Recently, Neves et al. (15) published a survey of unusual structures involving stacked arginine guanidinium groups, thus discovering instances of rings of arginines with four to eight members (usually on symmetry axes in oligomeric proteins), stacks of three guanidinium groups, strings of stacked arginines and also 'planar stacking' of arginines bridged by hydrogen bonds to other ligands. These patterns are all based on underlying motifs of one arginine stacked against one or two others, as originally identified by Scheraga and colleagues (16). We therefore attempted to extend this analysis by designing a search pattern intended to discover whether there were any instances of one arginine guanidinium group surrounded by three others (Figure 3A). Surprisingly, an IMAAAGINE search found a number of such arginine clusters (Figure 3B-E). The lowest RMSD hit was in the 2.0 and 1.9 Å resolution structures of uridine/cytidine monophosphate (UMP/CMP) kinase [PDBID: 2UKD and 3UKD, (17)] where the guanidinyl group of R137 is surrounded by those of R42, R131 and R148 (Figure 3B). R42, R137 and R131 form a stack, and R148 has a planar stack against R137/R131. These residues are in close proximity to the cytidine 5'-monophosphate (C5P) and ADP binding sites, with R42 being 3.7 Å from C5P and R131 being 3.8 Å from ADP. The second hit was in the 2.7 Å structure of Escherichia coli bacterioferritin [PDBID: 3GHQ (18)], where four copies of the same arginine (R155) are arranged around a non-crystallographic 4-fold channel through the bacterioferritin shell (Figure 3C). In this case, the arginines are not stacked, but the NH1 of each one is only 2.1 Å from the NH2 of the next. This rather unusual mode of interaction may warrant a confirmatory re-examination of parts of the model derived from this medium-resolution refinement. Therefore, restricting ourselves to hits in structures at resolutions better than 2.0 Å, an IMAAAGINE search hit in the structure of a thermostable mutant of Bacillus subtilis adenylate kinase at 1.8 Å (PDBID: 2QAJ, Figure 3D) revealed a cluster of arginines where R36, R160 and R127 form a stack and the fourth arginine (R171) is to one side.
This arrangement is similar to that found in the UMP/CMP kinase and is therefore a combination of a three-arginine stack (residues 42, 137, 141 in 2UKD and 36 160 127 in 2QAJ) and a planar stacking of a fourth arginine residue (residue 148 in 2UKD and 171 in 2QAJ) to one in the stack with a linking hydrogen bond from the ligand ( Figure 3B and D). Essentially similar hits were also found in several other adenylate kinase structures from other species. A similar arrangement but with the fourth arginine in a plane with the second in the three-arginine stack was found in guanylate kinase (PDBID: 1LVG). In the 2.1 Å structure of malic enzyme 2 [PDBID: 1QR6, (19)], a different arrangement was found in which four arginines (91:A, 1091:B, 1128:B, 128:A) are clustered together, but none are stacked. Another unstacked arrangement can also be observed in the 2.05 Å structure of Streptococcus pneumoniae LytR-Cps2a-Psr family protein [PDBID: 3TFL, (20), Figure 3E]. This unstacked arginine quadruple is found interacting with the diphosphate group of an octaprenyl pyrophosphate lipid. Three arginines (R364 and R374, which are stacked, and R267) make hydrogen bonds to the terminal phosphate where the fourth (R244) bridges between the two phosphates ( Figure 3E). One protein, two Asp-His-Ser triads? Because search patterns can contain wildcard (i.e. undefined) distances between residues, it is possible using the IMAAAGINE input interface to devise patterns where one subgroup of residues in the pattern has no defined spatial relationship to the others. This can be valuable in carrying out searches to find two different patterns in the same protein or the simultaneous presence of two similar patterns. As an example of this, a search was carried out for structures in the PDB containing two copies of a chymotrypsin-like Asp-His-Ser triad (21). This search motif is shown schematically in Figure 4A. The distances defined are based on the approximate distances in the chymotrypsin catalytic triad (21) allowing for the fact that a tolerance of ±1.5 Å will be applied in the search process. However, no distances are defined between the triad on the left and the one on the right (Figure 4A), and therefore there are no constraints on the relative positions of any pair of triads retrieved. One hit for the double triad query was in the E. coli KatE catalase structure [PDBID: 4ENP, (22)] where the serine and aspartic acid residues of one triad are on one chain (S421:C, D417:C), whereas the histidine is on another chain (H119:B; Figure 4B). This arrangement is repeated for the other triad as well (S421:D, D417:D, H119:A; Figure 4B). This further illustrates the importance of such amino acid arrangements when considered as a tertiary motif that is not an obvious motif candidate when viewed only from a sequence level perspective. However, as might be anticipated, in a great majority of cases, the pairs of triads retrieved were simply from multiple copies of the same molecule in the asymmetric unit of the crystal. The IMAAAGINE results browser has an option to remove these by screening for hits where all the matches occur only in a single chain. Therefore, to exclude these mostly uninteresting instances, only hits where all six residues were found in the same chain of the protein were examined. A number of hits were found in certain beta propeller structures where many of the repeating blades were found to contain a triad-like motif, thereby occasioning multiple hits. 
These proteins included the F-box/WD repeat protein 7 [complexed with S-phase kinase-associated protein 1A, PDBID: 2OVP, (23)], the guanine nucleotide-binding protein subunit beta-like protein (ASC1, RACK1) in the structure of the eukaryotic ribosome [PDBID: 3U5C, (24)] and other WD40 domains, including those of the histone-binding protein RBBP4 [PDBID: 3GFC, (25)] and the WD40 protein Ciao1 (PDBID: 3FM0). In the context of a beta propeller, and in the absence of a candidate for the oxyanion hole structure that is necessary for stabilizing a catalytic intermediate in serine proteases (21), these triads are unlikely to be catalytic in function, and no catalytic function has yet been observed in a WD40 protein; instead, they act as rigid scaffolds in the molecular recognition of other protein or nucleic acid molecules (25). Nevertheless, by analogy with the serine proteases, the presence of the triad-like motif may create a stronger partial negative charge on the serine OG or a stronger positive charge on the imidazole of the histidine, thereby possibly strengthening inter-blade interactions. The highest resolution of the beta propeller structures retrieved in the search is the 1.3 Å structure of WDR5, a component of the mixed lineage leukemia complex [PDBID: 3EMH, (26)], and one of the multiple hits in this protein is shown in Figure 4C. In each case, the serine and the aspartate are on one blade of the propeller, and they are linked by the histidine from a loop connecting the previous blade. The histidine is from a GH dipeptide characteristic of WD40 proteins and is known to participate in a hydrogen bonding network and to strengthen the inter-blade interaction (25). However, although this is a seven-bladed propeller, there are 10 hits, indicating that there is a total of only five distinct triads (Figure 4C). As this suggests, two of the seven inter-blade regions lack this interaction: although an aspartate is present at approximately the correct position, the serines and histidines are absent. These differences, together with insertion elements (25), may play a role in bestowing ligand-binding specificity on the surface of this otherwise highly symmetrical structure. The serine in each triad in turn forms a hydrogen bond to a tryptophan in a different strand of the same blade (Figure 4C). Clearly, the tryptophan could also be added to the search pattern if a more specific query were required, and searches could be conducted for either single or double occurrences of this new motif, illustrating how IMAAAGINE results can suggest further searches that can be readily defined and carried out. Other non-propeller chains contained pairs of triads, often separated by long distances. In the hyperthermophilic carboxylesterase from the Archaeon Archaeoglobus fulgidus [1JJI, (27)], a number of triads were found, two of which involve the sequence-consecutive residues Asp159 and Ser160 participating in two different triads to form a linked cluster of two triads (Figure 4D). One triad, D159, H98, S97, has no known function but can be expected to play a role in stabilizing the structure of this protein from a hyperthermophilic organism, whereas the other triad, D255, H285, S160, is the catalytic triad itself (27). This final example illustrates the fact that patterns submitted to the IMAAAGINE server need not be purely conceptual but can be designed on the basis of known patterns from real proteins (in this case, the Asp-His-Ser triad) that can either be simplified, made more generic, or placed into novel contexts.
Comparisons with other methods It is useful to compare the IMAAAGINE service with other 3D protein-searching web servers. A key point here is that most such programs, e.g. ASSAM/SPRITE (8), RASMOT-3D PRO (3), SPASM (6), SA-Mot (28), allow neither searching for motifs with partially defined distances nor angular variation in the relative dispositions of side chains. However, PDBeMotif [formerly known as MSDmotif (29)] is an exception to this. PDBeMotif is a powerful and general relational database-based program that enables the construction of complex and precise queries relating sequence motifs in 3D structures, in addition to much other detailed structural information. These sequence motifs can include single amino acids, and the user can define a subset of distances between them, so that it is possible to specify searches in PDBeMotif that parallel those described here for IMAAAGINE. However, direct comparisons proved impossible because PDBeMotif only finds substructures if all the amino acids are in the same chain, since its database only stores the pre-computed distance information within each single protein chain (29) in a PDB file. PDBeMotif therefore did not retrieve any of the structures we have described earlier in the text where amino acids come from different subunits, e.g. the serine-carboxyl triplets from three different subunits in 2X3H and 3AQJ, the four-arginine patterns from 3GHQ and 1QR6 or the double triad example from 4ENP. Moreover, the output of PDBeMotif does not distinguish between side chain:side chain, side chain:main chain and main chain:main chain contacts. We also found it would not accept queries where the spatial relationship between two different parts of the pattern is left undefined, as in the double triad example discussed earlier in the text. Therefore, although PDBeMotif is a powerful and general program capable of answering complex queries, IMAAAGINE offers important advantages in the specific application of side chain:side chain searching, and its search facilities therefore constitute a valuable new addition to the existing capabilities for protein side chain motif searching. SUMMARY The IMAAAGINE server allows users to easily propose and then investigate the possible occurrence of novel, hypothetical patterns of amino acid side chains in 3D structures without the need for a known precedent and without requiring the building or creation of a 3D model before the search can be conducted. A simple open-ended searching regime is facilitated by the use of a minimal side chain representation, the use of generic side chain types and the need to specify only a subset of inter-residue distances.
2016-05-12T22:15:10.714Z
2013-05-28T00:00:00.000
{ "year": 2013, "sha1": "07bcf61e4a6b21af5749ca77e6537534f45d5d01", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/nar/article-pdf/41/W1/W432/3857425/gkt431.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9ce4ae37c02b5ccd17f2a8d7f4c84881f295b7aa", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Biology", "Computer Science", "Medicine" ] }
220486897
pes2o/s2orc
v3-fos-license
Class of images of Abel maps on normal surface singularities In this paper we investigate Abel maps on normal surface singularities described in \cite{NNI}. We investigate the affine version of the class of the images of Abel maps on normal surface singularities. More precisely we consider the projective clousure of the image of an Abel map, its dual projective variety and we substract from its degree the multiplicity of the infinite hyperplane on the dual variety. In the case of generic singularities we prove explicit combinatorial formulas of this invariant, in the general case we prove an upper bound. Introduction In this paper we investigate Abel maps on normal surface singularities described in [NNI], which were useful in studying invariants like multiplicity or geometric genus of generic analytic structures of normal surface singularities in [NNII] and [NNM]. In [NNAD] the author and A. Némethi studied the image varieties of Abel maps in the corresponding Picard groups focusing mostly on the dimension of these varieties, and computed these dimensions algorithmically from analytic invariants of the singularity, like cohomology numbers of cycles getting also explicit combinatorial formulae from the resolution graph, when the analytic type is generic. The reason of our interest in these image varieties is that these are irreducible components of Brill-Noether stratas in the corresponding Picard groups with the value of h 1 equal to its codimension (see [NNAD]). In the classical case of smooth curves the computation of dimensions of Brill Noether stratas is also a cruical problem, however the dimension of images of Abel maps is a special case of it and one can easily see, that if d ≥ 0 and C is a smooth curve of genus g, then the dimension of the Abel map Symm d (C) → Pic d (C) is min(d, g). In the case of normal surface singularities these are already intresting invariants which vary if we move the analytic type of the singularity. In this paper we investigate the affine version of the class of the images of Abel maps on normal surface singularities. More precisely we consider the projective clousure of the image of an Abel map, its dual projective variety, and we substract from its degree the multiplicity of the infinite hyperplane on the dual variety (it is 0 if the infinite hyperplane is not on the dual variety), we denote this invariant by τ throughout the paper. In the case of generic singularities we prove the following main theorem (the technical condition Z = C min (Z, l ′ ) is explained later): Theorem. Let T be an arbitrary resolution graph and X a generic singularity corresponding to it. Let's have a Chern class l ′ ∈ −S ′ and an integer effective cycle Z ≥ E, such that Z = C min (Z, l ′ ), notice that this is a combinatorial condition computable from the resolution graph if the singularity J. Nagy is generic, and in particular we know that the map ECa l ′ (Z) → Im(c l ′ (Z)) is birational. With these notations we have the following: 1) The dual projective variety of the projective clousure Im(c l ′ (Z)) has got dimension h 1 (O Z ) − 1. 2) Let's have the line bundle L Z = O Z (K + Z) on the cycle Z, we have H 0 (Z, L Z ) reg = ∅ and it hasn't got base points at intersection points of exceptional divisors. Furthermore let's have a vertex v ∈ |l ′ | * , so a vertex such that (E v , l ′ ) < 0 , then the line bundle L Z hasn't got a base point on the exceptional divisor E v . 
3) For an arbitrary vertex v ∈ V let's denote t v = (−Z K + Z, E v ), with this notation we have got τ (Im(c l ′ (Z))) = v∈|l ′ | * tv (l ′ ,Ev) . For an arbitrary singularity the situation is more complicated because although the existence of the cycle C min (Z, l ′ ) is ensured by [NNAD] we can't even determine combinatorially for which cycles and Chern classes Z = C min (Z, l ′ ) holds, although we prove the inequality part of the previous theorem also in the general case: Theorem. Let T be an arbitrary resolution graph and X a singularity corresponding to it. Let's have a Chern class l ′ ∈ −S ′ and an integer effective cycle Z ≥ E, such that Z = C min (Z, l ′ ), in particular we know that the Abel map ECa l ′ (Z) → Im(c l ′ (Z)) is birational. For an arbitrary vertex v ∈ V let's denote t v = (−Z K + Z, E v ), with this notation we have got τ (Im(c l ′ (Z))) < v∈|l ′ | * tv (l ′ ,Ev) . In section 2) we summarise the necessary background on normal surface singularities. In section 3) we recall the necessary definitions and results about effective Cartier divisors and Abel maps from [NNI]. In section 4) we recall our working definition about generic normal surface singularities and the main cohomological results from [NNII]. In section 5) we recall from [NR] the results about relatively generic analytic structures on normal surface singularities. In section 6) we explain the invariant τ we investigate in this article and it's relation to the class of the projective clousure and the multiplicity of the infinite hyperplane in the dual projective variety. In section 7) we recall the necessary results from [H] about base points of canonical line bundles and hyperelliptic involutions. In section 8) we recall the structure theorems about images of Abel maps from [NNAD]. In section 9) we prove our main theorems about the τ invariant of the varieties Im(c l ′ (Z)). Preliminaries 2.1. The resolution. Let (X, o) be the germ of a complex analytic normal surface singularity, and let us fix a good resolution φ : X → X of (X, o). We denote the exceptional curve φ −1 (0) by E, and let {E v } v∈V be its irreducible components. Set also E I := v∈I E v for any subset I ⊂ V. For the cycle l = n v E v let its support be |l| = ∪ nv =0 E v . For more details see [?, N99b]. 2.2. Topological invariants. Let Γ be the dual resolution graph associated with φ; it is a connected graph. Then M := ∂ X, as a smooth oriented 3-manifold, can be identified with the link of (X, o), it is also an oriented plumbed 3-manifold associated with Γ. We will assume (for any singularity we will deal with) that the link M is a rational homology sphere, or, equivalently, Γ is a tree with all genus decorations zero. We use the same notation V for the set of vertices. The lattice L := H 2 ( X, Z) is endowed with a negative definite intersection form I = ( , ). It is freely generated by the classes of 2-spheres {E v } v∈V . The dual lattice L ′ := H 2 ( X, Z) is generated by the (anti)dual classes {E * v } v∈V defined by (E * v , E w ) = −δ vw , the opposite of the Kronecker symbol. The intersection form embeds L into L ′ . Then H 1 (M, Z) ≃ L ′ /L, abridged by H. Usually one also identifies L ′ with those rational cycles l ′ ∈ L ⊗ Q for which (l ′ , L) ∈ Z (or, L ′ = Hom Z (L, Z) ≃ H 2 ( X, Z)), where the intersection form extends naturally. All the E v -coordinates of any E * u are strict positive. We define the Lipman cone as S ′ := {l ′ ∈ L ′ : (l ′ , E v ) ≤ 0 for all v}. It is generated over Z ≥0 by {E * v } v . 
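Because the inline formulas above suffer from extraction artifacts, the basic conventions of Section 2.2 are restated here in display form; this is only a restatement of the definitions already given, in the paper's own notation.

```latex
\[
  (E^{*}_{u}, E_{w}) = -\delta_{uw} \quad (u, w \in \mathcal{V}), \qquad
  L = \langle E_{v} : v \in \mathcal{V} \rangle_{\mathbb{Z}} \hookrightarrow
  L' = \langle E^{*}_{v} : v \in \mathcal{V} \rangle_{\mathbb{Z}},
\]
\[
  \mathcal{S}' := \{\, l' \in L' \ :\ (l', E_{v}) \le 0 \ \text{for all } v \in \mathcal{V} \,\},
  \qquad \mathcal{S}' \ \text{generated over } \mathbb{Z}_{\ge 0} \ \text{by } \{E^{*}_{v}\}_{v}.
\]
```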
We also write S := S ′ ∩ L. There is a natural partial ordering of L ′ and L: we write l ′ 1 ≥ l ′ 2 if l ′ 1 − l ′ 2 = v r v E v with all r v ≥ 0. We set L ≥0 = {l ∈ L : l ≥ 0} and L >0 = L ≥0 \ {0}. We will write Z min ∈ L for the minimal (or fundamental, or Artin) cycle, which is the minimal non-zero cycle of S [A62, A66]. 2.3. Some analytic invariants. The group Pic( X) of isomorphism classes of analytic line bundles on X appears in the (exponential) exact sequence where c 1 denotes the first Chern class. Here Pic 0 ( X) = H 1 ( X, O X ) ≃ C pg , where p g is the geometric genus of (X, o). (X, o) is called rational if p g (X, o) = 0. Artin in [A62, A66] characterized rationality topologically via the graphs; such graphs are called 'rational'. By this criterion, Γ is rational if and only if χ(l) ≥ 1 for any effective non-zero cycle l ∈ L >0 . The epimorphism c 1 admits a unique group homomorphism section l ′ → s(l ′ ) ∈ Pic( X), which extends the natural section l → O X (l) valid for integral cycles l ∈ L, and such that c 1 (s(l ′ )) = l ′ [O04]. We call s(l ′ ) the natural line bundles on X. By the very definition, L is natural if and only if some power L ⊗n of it has the form O X (l) for some l ∈ L. Pic(Z). Similarly, if Z ∈ L >0 is a non-zero effective integral cycle such that its support is |Z| = E, and O * Z denotes the sheaf of units of O Z , then Pic(Z) = H 1 (Z, O * Z ) is the group of isomorphism classes of invertible sheaves on Z. It appears in the exact sequence where Pic 0 (Z) = H 1 (Z, O Z ). If Z 2 ≥ Z 1 then there are natural restriction maps, Pic( X) → Pic(Z 2 ) → Pic(Z 1 ). Similar restrictions are defined at Pic 0 level too. These restrictions are homomorphisms of the exact sequences (2.3.1) and (2.3.3). Furthermore, we define a section of (2.3.3) by s Z (l ′ ) := O X (l ′ )| Z , they also satisfy c 1 • s Z = id L ′ . We write O Z (l ′ ) for s Z (l ′ ), and we call them natural line bundles on Z. We also use the notations Pic l ′ ( X) 2.3.4. Restricted natural line bundles. The following warning is appropriate. Note that if X 1 is a connected small convenient neighbourhood of the union of some of the exceptional divisors (hence X 1 also stays as the resolution of the singularity obtained by contraction of that union of exceptional curves), then one can repeat the definition of natural line bundles at the level of X 1 as well (as a splitting of (2.3.1) applied for X 1 ). However, the restriction to X 1 of a natural line bundle of X (even of type O X (l) with l integral cycle supported on E) is usually not natural on X 1 : is the natural cohomological restriction), though their Chern classes coincide. Therefore, in inductive procedure when such restriction is needed, we will deal with the family of restricted natural line bundles. This means the following. If we have two resolution spaces X 1 ⊂ X with resolution graphs T 1 ⊂ T and we have a Chern class l ′ ∈ L ′ , then we denote by Furthermore if L is a line bundle on X 1 , then we denote L(l ′ ) = L ⊗ O X (l ′ ). Similarly if Z is an effective integer cycle on X and L is a line bundle on Z, then we denote L(l ′ ) = L ⊗ O Z (l ′ ). 2.3.5. The analytic semigroups. By definition, the analytic semigroup associated with the resolution X is It is a subsemigroup of S ′ . One also sets S an := S ′ an ∩ L, a subsemigroup of S. In fact, S an consists of the restrictions div E (f ) of the divisors div(f • φ) to E, where f runs over O X,o . 
Therefore, if s 1 , s 2 ∈ S an , then min{s 1 , s 2 } ∈ S an as well (take the generic linear combination of the corresponding functions). In particular, for any l ∈ L, there exists a unique minimal s ∈ S an with s ≥ l. Similarly, for any h ∈ H = L ′ /L set S ′ an,h : {l ′ ∈ S an : [l ′ ] = h}. Then for any s ′ 1 , s ′ 2 ∈ S an,h one has min{s ′ 1 , s ′ 2 } ∈ S an,h , and for any l ′ ∈ L ′ there exists a unique minimal s ′ ∈ S an,[l ′ ] with s ′ ≥ l ′ . For any l ′ ∈ S ′ an there exists an ideal sheaf I(l ′ ) with 0-dimensional support along E such that If l ′ ∈ S ′ an and the divisor of a generic global section of O X (−l ′ ) intersects E v , then (l ′ , E v ) < 0. In particular, if p ∈ E v is a base point then necessarily (l ′ , E v ) < 0. Choose a base point p of O X (−l ′ ), and assume that it is a regular point of E, and that I(l ′ ) p in the local ring O X,p is (x t , y), where x, y are some local coordinates at p with {x = 0} = E (locally), and t ≥ 1. Then we say that p is a t-simple base point. In such cases we write t = t(p). Furthermore, p is called simple if it is t-simple for some t ≥ 1. Let's have a Chern class l ′ ∈ S ′ an and let's have a base point p ∈ E v,reg of a natural line O X (−l ′ ), which is simple, there is another interpretation of the positive integer t, such that p is t-simple. Let's have a generic section in s ∈ H 0 (O X (−l ′ )) and D = |s|, then we know, that D has a cut D ′ , which is transversal at the base point p. Let's blow up the exceptional divisor E v along the cut D ′ sequentially, so let's blow up first at the point p and let the new exceptional divisor be E v1 and let's denote the strict transform of the cut D ′ with the same notation. Then let's blow up E v1 at the intersection point E v1 ∩ D ′ and let the new exceptional divisor be E v2 and so on. Let's denote the given resolution at the i-th step by X i with the blow up map b i : X i → X and let's look at the natural line bundle Let t be the minimal number, such that L t hasn't got a base point along the excpetional divisor E vt . Equivalently t is the maximal integer, such that H 0 ( X t , L t ) = H 0 (O Xt (−b * t (l ′ ))) and In this case p is a t-simple base point ot the natural line bundle O X (−l ′ ). Effective Cartier divisors and Abel maps In this section we review some needed material from [NNI]. We fix a good resolution φ : X → X of a normal surface singularity, whose link is a rational homology sphere. 3.1. Let us fix an effective integral cycle Z ∈ L, Z ≥ E. (The restriction Z ≥ E is imposed by the easement of the presentation, everything can be adopted for Z > 0). Let ECa(Z) be the space of effective Cartier (zero dimensional) divisors supported on Z. Taking the class of a Cartier divisor provides a map c : ECa(Z) → Pic(Z). Let ECa l ′ (Z) be the set of effective Cartier divisors with Chern class l ′ ∈ L ′ , that is, ECa l ′ (Z) := c −1 (Pic l ′ (Z)). We consider the restriction of c, c l ′ : ECa l ′ (Z) → Pic l ′ (Z) too, sometimes still denoted by c. For any Z 2 ≥ Z 1 > 0 one has the natural commutative diagram As usual, we say that L ∈ Pic l ′ (Z) has no fixed components if is non-empty. Note that H 0 (Z, L) is a module over the algebra H 0 (O Z ), hence one has a natural ac- This second action is algebraic and free. Furthermore, Therefore, it is convenient to modify the definition of ECa in the case l ′ = 0: we (re)define ECa 0 (Z) = {∅}, as the one-element set consisting of the 'empty divisor'. 
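The topological formulae reviewed in the next subsection are phrased in terms of the Riemann-Roch function χ : L' → Q mentioned there. For the reader's convenience we display its standard form, with K ∈ L' the canonical class fixed by the adjunction relations (all exceptional curves are rational here); this is standard background for this class of singularities rather than a statement specific to the present paper.

```latex
\[
  \chi(l') \;=\; -\tfrac{1}{2}\,\bigl(l',\, l' + K\bigr), \qquad
  (K, E_{v}) \;=\; -\,(E_{v}, E_{v}) - 2 \quad \text{for every } v \in \mathcal{V}.
\]
```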
We also take c 0 (∅) := O Z , then we have If l ′ ∈ −S ′ then ECa l ′ (Z) is a smooth variety of dimension (l ′ , Z). Moreover, if L ∈ Im(c l ′ (Z)) (the image of c l ′ ) then the fiber c −1 (L) is a smooth, irreducible quasiprojective variety of dimension 3.1.5. Consider again a Chern class l ′ ∈ −S ′ as above. The E * -support I(l ′ ) ⊂ V of l ′ is defined via the identity l ′ = v∈I(l ′ ) a v E * v with all {a v } v∈I nonzero. Its role is the following. Besides the Abel map c l ′ (Z) one can consider its 'multiples' {c nl ′ (Z)} n≥1 as well. It turns out (cf. [NNI,§6]) that n → dim Im(c nl ′ (Z)) is a non-decreasing sequence, and Im(c nl ′ (Z)) is an affine subspace for n ≫ 1, whose dimension e Z (l ′ ) is independent of n ≫ 0, and essentially it depends only on I(l ′ ). We denote the linearisation of this affine subspace by Moreover, by [NNI, Theorem 6.1.9], where Z| V\I(l ′ ) is the restriction of the cycle Z to its {E v } v∈V\I(l ′ ) coordinates. If Z ≫ 0 (i.e. all its E v -coordinated are very large), then (3.1.6) reads as where X(V \ I(l ′ )) is a convenient small neighbourhood of ∪ v∈V\I(l ′ ) E v . Let Ω X (I) be the subspace of H 0 ( X \ E, Ω 2 X )/H 0 ( X, Ω 2 X ) generated by differential forms which have no poles along E I \ ∪ v ∈I E v . Then, cf. [NNI,§8], Similarly let Ω Z (I) be the subspace of H 0 (O X (K + Z))/H 0 (O X (K)) generated by differential forms which have no poles along E I \ ∪ v ∈I E v . Then, cf. [NNI,§8], We have also the following duality from [NNI] supporting the equalities above: Theorem 3.1.10. [NNI] Via Laufer duality one has V X (I) * = Ω X (I) and V Z (I) * = Ω Z (I). Analytic invariants of generic analytic type For a precise working definition of a generic analytic type see [NNII], [NNM], [NR], in a slightly simplified language we can regard the generic analytic structure in the following way as well. Fix a graph Γ. For each E v (v ∈ V) the disc bundle with Euler number E 2 v is taut: it has no analytic moduli. The generic X is obtained by gluing 'generically' these bundles according to the edges of Γ (as an analytic plumbing). 4.1. Review of some results of [NNII]. The list of analytic invariants, associated with a generic analytic type (with respect to a fixed resolution graph), which in [NNII] are described topologically include the following ones: ) (with certain restriction on the Chern class l ′ ),this last one applied for Z ≫ 0 provides h 1 (O X ) and h 1 (O X (l ′ )) too -, the analytic semigroup, and the maximal ideal cycle of X. See above or [CDGZ04, CDGZ08, Li69, N99b, ?, ?, O08, Re97] for the definitions and relationships between them. The topological characterizations use the Riemann-Roch expression χ : L ′ → Q. In the next theorem the bundles O X (−l ′ ) are the 'restricted natural line bundles' associated with some pair X ⊂ X top . In particular, it is valid even if X top = X and the bundles are natural line bundles. The theorem (and basically several statements regarding generic analytic structure and restricted natural line bundles) says that these bundles behave cohomologically as the generic line bundles in Pic −l ′ ( X) (for more comments see [NNII]). Theorem 4.1.1. [NNII, Theorem A] Fix a resolution graph T (tree of P 1 's) and let's have a generic analytic type X corresponding to it. Then the following identities hold: (a) For any effective cycle Z ∈ L >0 , such that the support |Z| is connected, we have (e) For l ∈ L set h(l) = dim(H 0 ( X, O X )/H 0 ( X, O X (−l))). 
Then h(0) = 0 and for l 0 > 0 one has Assume that Γ is a non-rational graph and set M = {Z ∈ L >0 : χ(Z) = min l∈L χ(l)}. Then the unique maximal element of M is the maximal ideal cycle of X. The relative setup. In this section we wish to summarise the results from [NR] about relatively generic analytic structures that we need in this article. We consider an effective integer cycle Z on a resolution X with resolution graph T , and a smaller cycle Z 1 ≤ Z, where we denote |Z 1 | = V 1 and the subgraph corresponding to it by T 1 . We have the restriction map r : Pic(Z) → Pic(Z 1 ) and one has also the (cohomological) restriction operator R 1 : L ′ (T ) → L ′ (T 1 ). For any L ∈ Pic(Z) and any l ′ ∈ L ′ (T ) it satisfies c 1 (r(L)) = R 1 (c 1 (L)). In particular, we have the following commutative diagram as well: By the 'relative case' we mean that instead of the 'total' Abel map c l ′ (Z) we study its restriction above a fixed fiber of r. Theorem 5.0.1. [NR] Fix an arbitrary singularity X, a Chern class l ′ ∈ −S ′ , an integer effective cycle Z ≥ E and a subcycle Z 1 ≤ Z, and let's have a line bundle L ∈ Pic R(l ′ ) (Z 1 ). Assume that Let's recall from [NR] the analogue of the theorems about dominance of Abel maps in the relative setup: Definition 5.0.2. [NR] Fix an arbitrary singularity X, a Chern class l ′ ∈ −S ′ , an integer effective cycle Z ≥ E, a subcycle Z 1 ≤ Z and a line bundle L ∈ Pic R1(l ′ ) (Z 1 ) as above. We say that the pair (l ′ , L) is relative dominant on the cycle Z if the closure of r −1 (L) ∩ Im(c l ′ (Z)) is r −1 (L). Theorem 5.0.3. [NR] One has the following facts: (1) If (l ′ , L) is relative dominant on the cycle Z, then ECa l ′ ,L (Z) is nonempty and h 1 (Z, L) = h 1 (Z 1 , L) for any generic line bundle L ∈ r −1 (L). (2) (l ′ , L) is relative dominant on the cycle Z if and only if for all 0 < l ≤ Z, l ∈ L one has , where we denote (Z − l) 1 = min(Z − l, Z 1 ). Theorem 5.0.4. [NR] Fix an arbitrary singularity X, a Chern class l ′ ∈ −S ′ , an integer effective cycle Z ≥ E, a subcycle Z 1 ≤ Z and a line bundle L ∈ Pic R1(l ′ ) (Z 1 ) as in Theorem 5.0.3. Then for any L ∈ r −1 (L) one has Furthermore, if L is generic in r −1 (L) then in both inequalities we have equalities. In the following we recall the results from [NR] about relatively generic analytic structures: Let's fix a topological type, in other words a resolution graph T with vertex set V, and consider a decomposition of the vertex set into two disjoint subsets V 1 and V 2 . They define two (not necessarily connected) subgraphs T 1 and T 2 . We call the intersection of an exceptional divisor from V 1 with an exceptional divisor from V 2 a contact point. In the following, for the sake of simplicity, we will denote r = r 1 and R = R 1 . Furthermore let's have a fixed analytic type X 1 for the subtree T 1 (if it is disconnected, then an analytic type for each connected component). Also, for each vertex v 2 ∈ V 2 which has got a neighbour v 1 in V 1 we fix a cut D v2 on X 1 , along which we glue the exceptional divisor E v2 . This means that D v2 is a divisor which intersects the exceptional divisor E v1 transversally in one point, and we will glue the exceptional divisor E v2 in such a way that E v2 ∩ X 1 equals D v2 . If for some vertex v 2 ∈ V 2 , which has got a neighbour in V 1 , we don't say explicitly what the fixed cut is, then it should be understood in the way that we glue the exceptional divisor E v2 along a generic cut.
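Before continuing with the gluing construction, a brief hedged notational remark: the symbol ECa^{l',L}(Z) appearing in Theorem 5.0.3(1) (and again in the proofs of Section 9) is used here without its defining formula. In [NR] it appears to denote the relative fiber of the Abel map over the fixed bundle L, that is (a reconstruction, not verbatim from the source):

\mathrm{ECa}^{l',\mathcal{L}}(Z) := \{\, D \in \mathrm{ECa}^{l'}(Z) \ :\ r\big(c^{l'}(Z)(D)\big) = \mathcal{L} \,\} = \big(r \circ c^{l'}(Z)\big)^{-1}(\mathcal{L}).

With this reading, relative dominance of (l', L) says exactly that the generic line bundle of r^{-1}(L) lies in the closure of the image of this restricted Abel map.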
Let's plumb the tubular neihgbourhoods of the exceptional divisors E v2 , v 2 ∈ V 2 with the above conditions generically to the fixed resolution X 1 , we get a singularity X with resolution graph T and we say that X is a relatively generic singularity corresponding to the analytical structure X 1 and the cuts D v2 , for the more precise explanation of genericity look at [NR]. We have the following theorem with this setup from [NR]: Theorem 5.0.5. [NR] Let's have the setup as above, so two resolution graphs and a fixed singularity X 1 for the resolution graph T 1 , and cuts D v2 along we glue E v2 for all vertices v 2 ∈ V 2 , which have got a neighbour in V 1 . Assume that X has a relatively generic analytic stucture on T corresponding to X 1 and the cuts D v2 . Furthermore let's have an effective cycle Z on X and let's have , furthermore let's denote L = L|Z 1 , then we have the following: We have H 0 (Z, L) reg = ∅ if and only if (l ′ , L) is relative dominant on the cycle Z or equivalently: for all 0 < l ≤ Z. 2) Let's have the same setup as in part 1), then we have: where L gen is a generic line bundle in r −1 (L) ⊂ Pic l ′ m (Z), or equivalently: . 3) Let's have the natural line bundle is relative dominant on the cycle Z, or equivalently: Remark 5.0.6. In the theorem above in any formula one can replace l ′ with l ′ m , since for every Class of irreducible affine varieties Let's have a complex vector space V and an irreducible affine subvariety X ⊂ V . Let's pick a generic element w ∈ V * , and let's denote the number of smooth points p of X, such that w vanishes on T p (X) by τ (X). Let's denote the projective closure of the affine variety X in the projective closure of V by X, this is a projective subvariety, and let's denote it's dual projective variety by (X) * . One can see easily that τ (X) > 0 if and only if (X) * is a hypersurface in the dual projective space and τ (X) = 0 otherwise let's assume that τ (X) > 0 in the following. The degree of the projective variety (X) * is the class of the projective variety X, let's denote it by cl(X). The number cl(X) is closely related to τ (X) as explained in the following: Let µ be the multiplicity of the infinite hyperplane in the dual variety (X) * (if it doesn't contain the infinite hyperplane, then µ = 0), we claim that cl(X) = µ + τ (X). Indeed the affine hyperplanes, on which w ∈ V * vanishes and the infinite hyperplane form a generic pencil of projective hyperplanes through the infinite hyperplane, and τ (X) is exactly the number of intersection points of this pencil with (X) * , where we don't count the multiplicity of intersection at the infinite hyperplane, but this is exactly µ and this yields the statement. We have the following easy florklore lemma about the behaviour of the invariant τ with respect to direct sums: . Now the points p in X, such that w vanishes on T p X are exactly the points p = (p 1,j1 , · · · , p n,jn ), where 1 ≤ j i ≤ τ (X i ) for all 1 ≤ i ≤ n, this proves the statement. Base points of canonical line bundles and hyperelliptic involutions on generic singularities In the following we recall some nessacary material from [H] about generic analytic types which will be crucial in the proof of our main theorem. In the following we recall a few lemmas about the base points of the line bundle O Z (K + Z), where Z is an integer effective cycle on a generic singularity and H 0 (O Z (K + Z)) reg = ∅: Lemma 7.0.1. 
[H] Assume that T is a resolution graph and X is a generic singularity corresponding to it, and let's have a cycle Z on it, such that |Z| = V, and got a basepoint at the intersection points of exceptional divisors. Lemma 7.0.2. [H] Assume that T is an arbitrary resolution graph and X a generic singularity corresponding to it, and let's have a cycle Z on it, such that |Z| = V, and In the following let's recall the two main theorems from [H] about hyperelliptic involutions on generic normal surface singularities: We consider an integer effective cycle Z on the resolution X and investigate the existence of a complete linear series g 1 2 on it, we have the following two main theorems: Theorem. [H] Let's have an arbitrary resolution graph T and a generic resolution X corresponding to it, and let's have an effective integer cycle Z ≥ E such that H 0 (O Z (K + Z)) reg = ∅ and two vertices u ′ , u ′′ , such that Z u ′ = Z u ′′ = 1 and assume that e Z (u ′ , u ′′ ) ≥ 3. With these conditions for every line bundle Theorem. [H] Let's have an arbitrary resolution graph T and a generic resolution X corresponding to it, and let's have an effective integer cycle Assume furthemore that e Z (u) ≥ 3, with these conditions for every line bundle L ∈ Im(c −2E * u (Z)) one has h 0 (Z, L) = 1. Differential forms and fibration theorem In this subsection we recall some nessecary background from [NNAD] and we use it to reduce the investigation of the invariant τ in the case of images of Abel maps of normal surface singularities to some special cases. Let's have a normal surface singularity and an effective integer cycle Z on it, and a Chern class l ′ ∈ −S ′ , in the following we denote d Z,l ′ = dim(Im(c l ′ (Z)). Let's recall the following theorem about the numbers d Z (l ′ ) from [NNAD]: For l ′ ∈ −S ′ and Z ≥ E one has: Let's recall also the following results from [NNAD]: The following three sets of cycles coincide (for fixed Z ≥ E and l ′ ∈ −S ′ as above): (I) the set of cycles Z 1 with 0 ≤ Z 1 ≤ Z realizing the minimality in Theorem??, that is: is birational onto its image, and (ii) the generic fibres of the restriction of r, r im : (That is, the fibers of r im have maximal possible dimension.) (III) the set of cycles Z 1 with 0 ≤ Z 1 ≤ Z such that for the generic element L im gen ∈ Im(c l ′ (Z)) and arbitrary section s ∈ H 0 (Z 1 , L im gen ) reg with divisor D (i) in the (analogue of the Mittag-Lefler sequence associated with the exact sequence 0 The lemma above has the following geometric interpretation from [NNAD]: Fix a resolution X, a cycle Z ≥ E and a Chern class l ′ ∈ −S ′ as above. (a) There exists an effective cycle Z 1 ≤ Z, such that: is birational onto its image, and (ii) the generic fibres of the restriction of r, r im : In particular, for any such Z 1 , the space Im(c l ′ (Z)) is birationally equivalent with an affine fibration with affine fibers of dimension (c) The set of effective cycles Z 1 with property as in (a) has a unique minimal and a unique maximal element denoted by C min (Z, l ′ ) and C max (Z, l ′ ). Furthermore, C min (Z, l ′ ) coincides with the cohomology cycle of the pair (Z, L im gen ) (the unique minimal element of the set {0 ≤ Z 1 ≤ Z : h 1 (Z, L im gen ) = h 1 (Z 1 , L im gen )) for the generic L im gen ∈ Im(c l ′ (Z)). In this article we want to investigate the invariants τ (Im(c l ′ (Z))) in the cases, when the dual projective variety (Im(c l ′ (Z))) * is a hypersurface. 
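Since the displayed formula of the 'folklore' direct-sum lemma of Section 6 (invoked below as Lemma 6.0.1) is missing above, here is a hedged restatement consistent with the proof sketch given there: if X_i ⊂ V_i are irreducible affine subvarieties and X = X_1 × ⋯ × X_n is viewed inside V_1 ⊕ ⋯ ⊕ V_n, then a generic functional w = (w_1, …, w_n) vanishes on T_p X = T_{p_1} X_1 ⊕ ⋯ ⊕ T_{p_n} X_n precisely when each w_i vanishes on T_{p_i} X_i, so that

\tau(X_1 \times \cdots \times X_n) \;=\; \prod_{1 \le i \le n} \tau(X_i).

This multiplicativity is what is used below for the decomposition of Im(c^{l'}(Z)) over the connected components of |Z|.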
By the results above we can assume that Z = C min (Z, l ′ ), indeed if Z > C min (Z, l ′ ) and Im(c l ′ (Z)) = Im(c l ′ (C min (Z, l ′ ))), then Im(c l ′ (Z)) is an affine fibration over Im(c l ′ (C min (Z, l ′ ))) with nontrivial fibers, and then (Im(c l ′ (Z))) * is not a hypersurface. So we can assume in the following that Z = C min (Z, l ′ ), which means in particular that the Abel Notice furthemore that if we investigate the ivariants τ (Im(c l ′ (Z))), we can assume that |Z| is connected. Indeed assume otherwise that |Z| is not connected and let the connected components of the cycle and Im(c l ′ (Z)) = ⊕ 1≤j≤i Im(c l ′ (Z j )). Now by lemma6.0.1 we get that τ (Im(c l ′ (Z ′ ))) = 1≤j≤i τ (Im(c l ′ (Z j ))). 9. The τ invariant of the varieties Im(c l ′ (Z)) In the following we restrict our attention first only to generic singularities, we prove the following main theorem with the setup explained above: Theorem 9.0.1. Let T be an arbitrary resolution graph and X a generic singularity corresponding to it. Let's have a Chern class l ′ ∈ −S ′ and an integer effective cycle Z ≥ E, such that Z = C min (Z, l ′ ), notice that this is a combinatorial condition computable from the resolution graph if the singularity is generic, and in particular we know that the map ECa l ′ (Z) → Im(c l ′ (Z)) is birational. With these notations we have the following: 1) The dual projective variety of the projective clousure Im(c l ′ (Z)) has got dimension , and so by [NNI] we also have For part 1) we will prove that H 0 (Z, L Z ) reg = ∅ and if we have a generic element ω ∈ H 0 (O Z (K + Z)) reg , then there is a generic divisor D ∈ ECa l ′ (Z) in the sense described above and another divisor Let's see first that part 1) follows from this statement. Indeed we have to prove that τ (Im(c l ′ (Z))) ≥ 1, so if we have a generic element in the dual space ω ∈ H 1 (O Z ) * , then there is a smooth point p ∈ Im(c l ′ (Z)), such that ω vanishes on T p (Im(c l ′ (Z))). However by Seere duality we have We show that ω hasn't got a pole along the divisor D, or in a more precise way that ω vanishes on Im(T D (c l ′ (Z))). Let's write D = ∪ 1≤j≤i D j , where the cuts D j are disoint and transversal to the exceptional divisor E at its smooth points. If D is an enough generic divisor, we have Im(T D (c l ′ (Z))) = ⊕ 1≤j≤i Im(T Dj (c)), so we have to prove that ω vanishes on each Im(T Dj (c)). Assume that the divisor D j is supported on the smooth part of the exceptional divisor E u . We have to prove that if we have a tangent vector v ∈ T Dj ECa −E * u (Z) and we have an arbitrary curve f : f (0)))) = 0. Let's have a local chart (x, y) of the space X near the intersection point E u ∩ D j such that E u = (x = 0) and D j = (y = 0). Let's realise the tangent vector v by an aproppriate deformation of the divisor D j of the form f (t) = [y + t · 0≤k≤Zu−1 a k · x k ], and let's denote f (t) = D t . We can express a representative of the differential form ω in local cordinates as ω = ( 1≤i,−Zu≤j a i,j y i x j )dx∧ dy, so by Laufer integration formula we get: However this is zero, which can be easily seen by the residuum formula. We will prove first that if D ∈ ECa l ′ (Z) is a generic divisor, and we denote the distinct intersection points of D with some exceptional divisor E v by p 1 , p 2 , · · · , p m (where m = (l ′ , E v )), then H 0 (O Z (Z + K − D)) reg = ∅, and the line bundle O Z (Z + K − D) hasn't got a base point at the points p 1 , · · · , p m . Notice that H 0 (O Z (Z + K)) reg = ∅ also follows from this statement. 
, however this is impossible by the assumption Z = C min (Z, l ′ ). By simmetry we only have to prove that the line bundle O Z (Z + K − D) hasn't got a base point at the point p 1 . There are two cases, assume first that Z v > 1: Let's blow up E v at the point p 1 and let's denote the blow up map by π p1 and the new exceptional divisor by E p1 and Z p1 = π * p1 (Z) + (Z v − 1)E p1 and let D ′ be the strict transform of the pullback This is equivalent to that there isn't an integer effective cycle If for some u ∈ V one has Z ′ u < Z u , then we know it, since by the assumptions on Z for every cycle Z ′′ < Z one has h 1 (O Z ′′ (D)) < h 1 (O Z (D)). Now we know that Z p1 ≥ E p1 because of Z v > 1 and it remains to prove that Notice that since the map c l ′ (Z) : ECa l ′ (Z) → Im(c l ′ (Z)) is birational and D ∈ ECa l ′ (Z) is a generic divisor, the line bundle O Z (D) has got base points at the points p 1 , · · · p m , so we have On the other hand it is obvious that In the following we have again two cases: where Z * is the cycle with same coefficents as Z, but on the blowup singularity (notice, that D ′ is also a generic divisor on the blown up singularity). By the definition of the cycle Z we know that Indeed from [NNAD] we know that we have: where the connected components of |Z ′ | are |Z ′ 1 |, · · · , |Z ′ n | and D(Z i , l ′ ) = 1 if the Chern class l ′ is not dominant of the cycle Z 1 , and D(Z i , l ′ ) = 0 otherwise. Also the maximum is attained for a cycle Z ′ , such that D(Z i , l ′ ) = 1 for each 0 ≤ i ≤ n. ), which is impossible by the fact Z = C min (Z, l ′ ). It means that we indeed have , because the restriction of the divisor D ′ + E p1 to the cycle Z * is a generic divisor in Im(c l ′ (Z * )). This means that there is a cycle where the connected components of |Z ′ | are |Z ′ 1 |, · · · , |Z ′ n |. Let's look at the cycle Z ′′ ≤ Z which has got the same coefficients as Z ′ , but on the singularity before blown up, and let's denote the connected components of |Z ′′ | by |Z ′′ 1 |, · · · , |Z ′′ n |. We immediately get the following: On the other hand by the condition and equality happens if and only if On the other hand if Z = Z ′′ and Z v ≥ 2, then we have ) and we are done. Now let's assume in the following that Notice, that Z p1 − 2E p1 is an effective integer cycle on a generic resolution and D ′ is a generic divisor on it. Let's denote the Chern class of D ′ by l ′′ = π p1 (l ′ ) − E p1 , now we know that there exists a cycle Z ′ ≤ Z p1 − 2E p1 with connected components |Z ′ 1 |, · · · , |Z ′ n |, such that we have: If there isn't any component ) and we are done. So assume in the following on the other hand that v ∈ |Z ′ 1 |. Let's have the cycles Z ′′ 2 , · · · Z ′′ n , which have got the same coeficcients as Z ′ 2 , · · · , Z ′ n , but on the singularity before the blown up. Let's have also the cycle Z ′ 1 = A ′ + tE p1 and let's denote by A ′′ the cycle, which has got the same coeficcients as A ′ , but on the singularity before the blown up and let's have which yields by an easy calculation that χ(−l ′′ ) − χ(−l ′′ + Z ′ 1 ) + D(Z ′ 1 , l ′′ ) ≤ 1. Indeed we have: From this we get that On the other hand we have This means that indeed we have: and we are done. Now assume in the following that D(A ′ , l ′ ) = 1, then by the inequality On the other hand we know that and equality happens if and only if Z ′′ = Z. If Z ′′ < Z it yields the statement immediately, so assume that A ′′ = Z ′′ = Z in the following. 
In this case χ(−l ′′ )−χ(−l ′′ +Z ′ )+D(Z ′ , l ′′ ) ≤ χ(−l ′ )−χ(−l ′ +Z)+1, so this yields the statement again. Indeed we have D(Z ′ , l ′′ ) ≤ 1 and on the other hand , which means that: Assume in the following on the other hand that Z v = 1: Let's have a generic divisor D ∈ ECa l ′ (Z), we know that H 0 (O Z (Z + K − D)) reg = ∅, so let's have a divisor D ′ ∈ ECa Z−ZK −l ′ (Z), such that O Z (D + D ′ ) = O Z (K + Z) and D and D ′ are disjoint on the exceptional divisors E u , where Z u ≥ 2. We know that such a divisor D ′ exists by the fact that the line bundle O Z (Z + K − D) hasn't got a base point at the intersection points of D with E u , where Z u ≥ 2. By the local value distribution from 1-dimensional complex analysis one easily gets that if t ∈ (C, 0) is enough small, then we can write |ω We know that if t is enough small and D ∈ ECa l ′ (Z) is a generic divisor, then also D t is a generic divisor and we know that the line bundle O Z (Z + K − D t ) hasn't got base points in |D t |, which yields our statement. We have proved that if D ∈ ECa l ′ (Z) is a generic divisor, and we denote the distinct intersection points of D with some exceptional divisor E v by p 1 , p 2 , · · · , p m (where m = (l ′ , E v )), then For a small enough number t ∈ (C, 0), t = 0 we know that t · s ′ + s is a generic section in H 0 (O Z (Z + K)) reg and since |D| ∩ |D ′ | = ∅ we know that there exist divisors D(t), D ′ (t), such that We also know that for small value of t D(t) is a generic divisor in ECa l ′ (Z), so part 1) is proved. Notice that the fist statement of part 2) is immediate from lemma7.0.1 since we have proved above that H 0 (O Z (Z + K)) reg = ∅, so it follows that the line bundle O Z (Z + K) hasn't got base points at intersection points of exceptional divisors. Notice that similarly the second statement of part 2) follows in the case, when Z v = 1 from lemma7.0.2 since H 0 (O Z (Z + K)) reg = ∅. In the following we prove the second statement of part 2) in the case Z v > 1: In fact we will prove somewhat more in the following lemma, which we state here in its full generality, so we can use it also in the proof of part 3): Lemma 9.0.2. Let's have an arbitrary resolution graph T and a generic singularity X corresponding to it. Let's have furthermore an integer effective cycle Z ≥ E and a cycle Z ′ ≤ Z and an arbitrary vertex v ∈ V, and assume that Z v > 1 and Z ′ v ≥ 1. Let's blow up the divisors E u , u ∈ |Z ′ | in r u generic points q u,1 , q u,2 , · · · , q u,ru and let the new divisors be E u,1 , · · · , E u,ru and let's denote l = u∈V,1≤i≤ru E u,i and Z new = π * (Z) − l and Z ′ new = π * (Z ′ ) − l. Assume that Z ′ = Z and H 0 (O Z (Z + K)) reg = ∅ or if Z ′ = Z, then |Z ′ | = |Z| and there exists a vertex s ∈ |Z| \ |Z ′ |, such that s is a neighbour to the subgraph |Z ′ | and for every vertices w = s, v on the unique string between s and v we have r w = 0. Assume furthermore that This means that for a generic section s ∈ Im(H 0 (L) → H 0 (E v , L|E v )) the section s vanishes in p of order m. In the following it's easy to see that we can assume that Assume on the other hand that Z = Z ′ and h 1 ( for every cycle 0 ≤ Z ′′′ < Z ′′ , then we can restrict everything to the cycle Z ′′ . We get that H 0 (O Z ′′ new (K new + Z new − l)) reg = ∅ and the dimension of the image of the map ) is bigger, than 1, and the line bundle has got the base point p of multiplicity m on the regular part of E v , in particular we get Z ′′ ≥ E v . 
The other conditions of our lemma holds trivially, indeed there was a vertex s ∈ |Z| \ |Z ′ |, such that s is a neighbour to the subgraph |Z ′ | and for every vertices w = s, v on the unique string between s and v we have r w = 0. If we have the uniqe vertex s ′ on the string between s and v, such that s ′ is a neighbour to the subgraph |Z ′′ |, then we also get that for every vertices w = s ′ , v on the unique string between s ′ and v we have r w = 0. This indeed means that the conditions of the lemma holds for the cycles Z ′′ , Z. So this means that we can assume that Z ′′ = Z ′ , which also means that There are two cases, assume first in the following that Z ′ v = Z v , in particular this means, that Z ′ v ≥ 2: Let's blow up E v sequentially in generic points t = Z ′ v − 1 times, starting with q, and let the new > 0, because we have blown up E v sequentially in generic points and there is a differential form in H 1 (O Z ′ ) * , which has got a pole of order Z ′ v along the exceptional divisor E v . Let the new vertex set of the blown up singularity be V 1 , and let's look at the line bundle π * (L) = O Z ′ 1 (K 1 + Z 1 − l), we know that it has got a base point at p with multiplicity m on E v and we have H 0 (Z ′ 1 , π * (L)) reg = ∅. Let's denote the restriction of the cycle Z ′ 1 to the vertex set On the other hand we have Z ′′ 1 ≤ Z 1 , and we know that the dimension of the map 1 hasn't got a base point on the regular part of E v , because we can see easily that the conditions of the induction hypothesis hold. Indeed we have the vertex v ′ t ∈ |Z 1 |, which is a neighbour of |Z ′ 1 | and for every vertices w = v ′ t , v on the unique string between s ′ and v we have r w = 0. We know by Seere duality that has got a base point at p, which will be a contraditcion. Now we have two cases, first assume that Z ′ = Z, it means obviously that Z ′ 1 = Z 1 . In this case we know that Let's denote u∈V|Zu=1,1≤i≤ru E u,i = l 2 and u∈V|Zu>1,1≤i≤ru E u,i = l 1 . Since for every pair (u, i), such that Z u > 1 we know that (l, E ui ) < 0 we know that On the other hand we have to prove that J. Nagy Similarly ). It means that we have to prove the following: ). Let's look at the following exact sequence: We know that the map ). Similary let's look at the following exact sequence: We know that the map ). These two equations immediately prove the claim in the case Z ′ = Z. Assume in the following, that Z ′ = Z, this means by our condition, that |Z| is strictly bigger, than |Z ′ | and there is a vertex s ∈ |Z| \ |Z ′ |, such that s is a neighbour of the subgraph |Z ′ | and for every vertices w = s, v on the unique string between s and v we have r w = 0. Let's have the pairs (u, i), ). It means that we know: ). On the other hand we have to prove that ). It means that we have to prove: ). Let's have the string u 1 = s, u 2 , · · · , u q = v, then by the conditions on s, there is a Laufer sequence From these Laufer sequences we get that both , which proves the statement in the case Z ′ v = Z v . Now let's see the more interesting case in the following so assume that Z ′ v < Z v : Let's denote again t = Z ′ v − 1 and let's blow up the exceptional divisor E v sequentially in generic points and let the new exceptional divisors be where perhaps we have t = 0. We know that every differntial form in H 1 (O Z ′ ) * has got a pole on the exceptional divisor E v ′ t of order at most 1. 
i · E i and let the new vertex set of the blown up singularity be V 1 , with this notations we have that , we know that it has got a base point at p with multiplicity m on E v . Let's denote the vertex set V 1 \ E v ′ t by V s and the restriction of the cycle Z ′ 1 to V s by Z ′ s . We know that if t > 0, then the dimension of the image of the map H 0 (Z ′ s , L) → H 0 (E v , L) is greater than 1 and the induction hypothesis holds for Z 1 and Z ′ s , so this means that the line bundle L|Z ′ s hasn't got a base point on the exceptional divisor E v . On the other hand if t = 0, then in N distinct generic points p 1 , · · · p N , and let the new exceptional divisors be E w1 , · · · , E wN . Let's denote the new singularity by X b and let's denote its subsingularity with vertex set V b \ w 1 , · · · , w N by X u , we have p g ( X u ) = p g ( X 1 ) and , since none of the differential forms in H 1 (O Z ′ ) * has got a pole along the exceptional divisors E w1 , · · · , E wN . Let's denote furthermore the line bundle We know that the dimension of the image of the map H 0 (Z ′ u , L u ) → H 0 (E v , L u ) is bigger then 1, and we should prove that L u hasn't got a base point on the regular part of E v . Indeed, this is enough, since h 1 (O Z ′ u ) = h 1 (O π * (Z1) ) and H 0 (O π * (Z ′ 1 ) (π * (K 1 + Z 1 − l)) reg = ∅, so we get that O π * (Z ′ 1 ) (π * (K 1 + Z 1 − l) also hasn't got a base point on the regular part of E v , which is what we want to prove. Now we know that V s = V u \ v ′ t , and the corresponding subsingularity is X s . We have the line bundle L s = L u |Z ′ s on the cycle Z ′ s , let's denote c 1 (L u ) = l ′ u and c 1 (L s ) = l ′ s . Notice that the coefficient of Notice also that every differential form in H 1 (O Z ′ ) * has got a pole along the exceptional divisor Let's move in the following the intersection points E wi ∩ E v ′ t generically and the analytic type of the singularity as well. Notice that if the original singularity was enough generic and we move the plumbing of the tubular neighborhood of the exceptional divisors E wi , 1 ≤ i ≤ N with X u generically, then we get generic analytic types. Notice that the restriction L u |Z ′ s remains the same line bundle L s if we move the intersection restricts to the zero divisor on X s . On the other hand we know that dim(Im(c ) and the coefficients of E w1 , · · · , E wN are positive in K b + π * (Z 1 ) − 1≤i≤N E wi − l, which means that if we move the contact points p 1 , · · · , p N , then the line bundle L u cover an open set in r −1 It means that for a generic choice of the contact points p 1 , · · · , p N the line bundle L u is a generic line bundle in r −1 s (L s ). We know that H 0 (Z ′ u , L u ) reg = ∅ for generic analytic types, which means that the pair (l ′ u , L s ) is relative dominant on the cycle Z ′ u , which means by Theorem5.0.3 that: . Now we have the following lemma: Lemma 9.0.3. Let's have the setup above and assume that the dimension of the image of the map is more than 1 and let q ∈ E v be a generic point and let's blow up E v in q. Let the new divisor be E q , the new singularity X u,q and Z ′ u,q = π * q (Z ′ u ). 1) Assume that t = 0, which means that v ′ t = v, we claim that the pair (π * q (l ′ u ) − E q , L s ) is relative dominant on the cycle Z ′ u,q . 2) Assume that t > 0, which means that v ∈ V s and let's have the line bundle L s, Proof. By Theorem5.0.3 for part 1) we have to prove that for all cycles 0 < π * q (l d ) where L u is a generic line bundle in r −1 s (L s ) one has: . 
s ∈ H 0 (Z ′ s , L s ) has got (l u , E v ) disjoint arrows on E v , which means indeed that if q is a generic point, then H 0 (Z ′ s,q , π * q (L s )(−E q )) reg = ∅. Let's finish the proof of our main lemma with the help of the lemma before. We have to prove that a generic line bundle L u ∈ r −1 s (L s ) hasn't got a base point on the exceptional divisor E v . Let's look at the space Let's denote the new singularity by X q and the line bundle L q = π * q (L u ) − E q , and the cycle Assume first that t = 0: This Indeed if the map ECa π * q (l ′ u )−Eq,Ls (Z q ) → r −1 (L s ) were a submersion at a point D ′ ∈ U 1 , then On the other hand we know that Im(r • T D ′ (c π * q (l ′ u )−Eq (Z q )) = Im(T D ′ (c l ′ s (Z s ))), which indeed would give that h 1 (O Zq (D ′ )) = h 1 (Z s , L s ). However we know that ECa π * q (l ′ u )−Eq,Ls (Z q ) is irreducible from [NR] and by our previous lemma we know that the map ECa π * is open. Now assume in the following that t > 0, in this case we have v ∈ V s : If we look at a small open neighborhood This means that cannot be a submersion in any of the points D ′ ∈ U 1 , because otherwise we would have similarly as in the previous case that h 1 where the second equality follows from the fact that the line bundle L s hasn't got a base point on the exceptional divisor E v at the generic point q. On the other hand by our previous lemma we know that the map ECa π * q (l ′ u )−Eq,Ls,q (Z q ) → r −1 (L s,q ) is dominant, and this is a contradiction, since U 1 ⊂ ECa π * q (l ′ u )−Eq,Ls,q (Z q ) is open. Now we want to prove part 3), so first let's see that τ (Im(c l ′ (Z))) ≤ v∈|−l ′ | tv (l ′ ,Ev) , where t v = (K + Z, E v ) for an arbitrary vertex v ∈ |Z|. Let's have a generic differential form w ∈ H 0 (O Z (K+Z)) reg , where w has got t v disjoint transvesal arrows D v,i , 1 ≤ i ≤ t v along the regular parts of exceptional divisors E v , such that (l ′ , E v ) > 0, because the line bundle O Z (K + Z) hasn't got any base points on these exceptional divisors. Furthermore if w is enough generic, then we have exactly m = τ (Im(c l ′ (Z))) distinct smooth point of Im(c l ′ (Z)), p 1 , p 2 , · · · , p m such that w vanishes on the tangent spaces T pi (Im(c l ′ (Z))). Also if w is enough generic, then we can also assume that p i are also generic points of Im(c l ′ (Z)) in the sense that dim(c l ′ (Z) −1 (p i )) = 0, the map c l ′ (Z) : ECa l ′ (Z) → Pic l ′ (Z) is a submersion at the unique point D i ∈ c l ′ (Z) −1 (p i ) and the unique divisor D i ∈ c l ′ (Z) −1 (p i ) has got (l ′ , E v ) disjoint arrows at the regular part of the divisors E v , v ∈ |Z|. Since the map c l ′ (Z) : ECa l ′ (Z) → Pic l ′ (Z) is a submersion at the unique point D i ∈ c l ′ (Z) −1 (p i ), one has Im(T D (c l ′ (Z))) = T pi (Im(c l ′ (Z))) and the points p i are regular values of the map c l ′ (Z). This means that the differential form w has got no pole on each D i ∈ c l ′ (Z) −1 (p i ). Notice however that the differential form w has got a pole of order Z v on each exceptional divisor E v , v ∈ |Z| and disjoint transvesal arrows Let's have the following lemma: is the unique divisor, which have got (l ′ , E v ) disjoint arrows at the regular part of the divisors E v , v ∈ |Z| and u ∈ |Z|, then D i has got (l ′ , E u ) arrows D u,a1 , · · · , D u,a (l ′ ,Eu) supported on the exceptional divisor E u , where 1 ≤ a 1 < · · · < a (l ′ ,Eu) ≤ t u . Proof. 
Let the divisor D i ∈ |p i | have an arrow at the exceptional divisor E u , u ∈ |Z| and let's denote it by D ′ , we have to prove that D ′ is one of the arrows D u,i , 1 ≤ i ≤ t u . We know that the differential form w has got a pole of order Z u on the exceptional divisor E u and w hasn't got a pole along D ′ . It means that D ′ ∩ E u = D u,i ∩ E u for some 1 ≤ i ≤ t u , if Z u = 1 then this proves the statement. We claim in the following that D ′ = D u,i : So assume that Z u ≥ 2 and let's blow up E u at the point D ′ ∩ E u and let the new exceptional divisor be E 1 and let the strict transforms of the divisors D ′ , D u,i be D ′ 1 , D u,i,1 . We know that the differential form w has got a pole of order Z u − 2 on E 1 and w hasn't got a pole along the exceptional divisor D ′ , which means that w must vanish on We prove by induction that if j ≤ Z u − 1 and we blow up the exceptional divisor E u sequentially j times along the divisor D ′ and we denote the strict transforms of D ′ , D u,i by D ′ j , D u,i,j , and the new exceptional divisors by E 1 , · · · E j , then D ′ j ∩ E j = D u,i,j ∩ E j . If we apply this for j = Z u − 1, then we get D ′ = D u,i,j and it proves the statement. Now if j ≤ Z u − 1 we know that w has got a pole on E j of order Z u − 2j, however w hasn't got a pole on the divisor D ′ , which means that w has got a pole on the divisor D ′ j of order at most −j. We know, that j ≤ Z u − 1, so we have Z u − 2j > −j and this means, that w should have an arrow at E j ∩ D ′ j and we indeed get E j ∩ D ′ j = E j ∩ D u,i,j and it proves the statement of the lemma. J. Nagy Remark 9.0.5. Notice that there is a much easier statement in the other direction, namely if D = v∈|Z|,1≤i≤(l ′ ,Ev) D v,av,i , where 1 ≤ a v,1 < · · · < a v,(l ′ ,Ev) ≤ t v , then the differential form w hasn't got a pole on the divisor D. We got that if w is enough generic, such that p i are also generic points of Im(c l ′ (Z)) in the sense that the unique divisor D i ∈ |p i | has got (l ′ , E v ) disjoint arrows at the regular part of the divisors E v , v ∈ |Z| and the points p i are regular values of the map c l ′ (Z) ,then if D i ∈ |p i |, then D has got (l ′ , E u ) arrows D u,a1 , · · · , D u,a (l ′ ,Eu) supported on the exceptional divisor E u , where 1 ≤ a 1 , · · · , a (l ′ ,Eu) ≤ t u . This immediately gives the first part of part 3), so the desired inequality τ (Im(c l ′ (Z))) ≤ v∈|−l ′ | tv (l ′ ,Ev) . In the following we want to prove the equality part: We should prove, that if w 0 is an enough generic differential form w 0 ∈ H 0 (O Z (K + Z)) reg and we choose for each vertex u ∈ |Z| numbers 1 ≤ a u,1 < · · · < a u,(l ′ ,Eu) ≤ t u , then D = u∈|Z|,1≤i≤(l ′ ,Eu) D u,au,i ∈ ECa l ′ (Z) is a generic divisor in the sense, that c l ′ (Z)(D) is a regular value of the map c l ′ (Z) and the point c l ′ (Z)(D) is a smooth point of Im(c l ′ (Z)). Let's have a generic differential form w 0 ∈ H 0 (O Z (K + Z)) reg and an anallytically open subset w 0 ∈ U ⊂ H 0 (O Z (K + Z)) reg , such that all differential forms in U are generic, and we have holomorphic Now choose for each vertex u ∈ |Z| numbers 1 ≤ a u,1 < · · · < a u,(l ′ ,Eu) ≤ t u , then we have to prove that the image of the map D = u∈|Z|,1≤i≤(l ′ ,Eu) D u,au,i : U → ECa l ′ (Z) contains an open subset of ECa l ′ (Z). Let's have the map g : U → ECa l ′ (Z) → Pic l ′ (Z), we are enough to prove that g(U ) contains an open subset of Im(c l ′ (Z)) since the map ECa l ′ (Z) → Pic l ′ (Z) is birational. 
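For orientation in the counting carried out here: the flattened bound of part 3) appears to be the following product of binomial coefficients (a hedged reconstruction; t_v and the choices a_{u,1} < ⋯ < a_{u,(l',E_u)} are exactly as above). A generic form w has t_v = (K + Z, E_v) transversal arrows on each E_v, and each divisor D_i selects (l', E_v) of them on every E_v with (l', E_v) > 0, so

\tau\big(\mathrm{Im}(c^{l'}(Z))\big) \;\le\; \prod_{v \in |-l'|} \binom{t_v}{(l', E_v)}, \qquad t_v = (K + Z, E_v),

with the remaining task (the 'equality part') being to show that every such choice of arrows indeed comes from a regular value of the Abel map.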
Let's denote the contact points d u,i (w) = D u,i (w) ∩ E u , where 1 ≤ i ≤ t u . We claim that we are enough to prove that if w is an enough generic differential form in U and we choose for each vertex u ∈ |Z| numbers 1 ≤ a u,1 , · · · , a u,(l ′ ,Eu) ≤ t u , then D(w)|E = u∈|Z|,1≤i≤(l ′ ,Eu) d u,au,i (w) ∈ ECa l ′ (E) is a generic divisor in ECa l ′ (E), so D(w)|E covers an open subset of ECa l ′ (E), while w ∈ U . Indeed assume that this holds and let's denote this open set by U ′ ⊂ ECa l ′ (E). Let's choose generic points q u,i ∈ E u , 1 ≤ i ≤ (l ′ , E u ), such that u∈|Z|,1≤i≤(l ′ ,Eu) q u,i ∈ U ′ . Let's blow up the exceptional divisors at these points and let the new divisors be E u,i , 1 ≤ i ≤ (l ′ , E u ) and let's denote l ′ new = − u∈V,1≤i≤(l ′ ,Eu) E * u,i and Z new = π * (Z). We know that there is a generic divisor D new ∈ ECa l ′ new (Z new ), such that the line bundle O Z (Z K + Z − π * (D new )) hasn't got base points at the points q u,i , u ∈ V, Let's have a differential form w ′ ∈ H 0 (O Z (K + Z)) reg , such that |w ′ | = π * (D new ) + D ′ . On the other hand by the fact that if w is an enough generic differential form w ∈ U ⊂ H 0 (O Z (K + Z)) reg , then D(w)|E = u∈|Z|,1≤i≤(l ′ ,Eu) d u,au,i (w) ∈ ECa l ′ (E) is a generic divisor in ECa l ′ (E) we know that if the points q u,i ∈ E u , 1 ≤ i ≤ (l ′ , E u ) are enough generic, such that u∈|Z|,1≤i≤(l ′ ,Eu) q u,i ∈ U ′ , then there is a generic differential form w ∈ H 0 (O Z (K + Z)) reg , such that d u,au,i (w) = q u,i . Now let's have the divisor of the section w + tw ′ ∈ H 0 (O Z (K + Z)) reg , where t ∈ (C, 0) is enough small and let's denote We see that if t ∈ (C, 0) is enough small, then D ′′ t has got transversal arrows at the points q u,i and D t = u∈|Z|,1≤i≤(l ′ ,Eu) D u,au,i,t is a generic divisor in ECa l ′ (Z). It means that we are indeed enough to prove that if w is an enough generic differential form w ∈ H 0 (O Z (K + Z)) reg and we choose for each vertex u ∈ |Z| numbers 1 ≤ a u,1 , · · · , a u,(l ′ ,Eu) ≤ t u , then D(w)|E = u∈|Z|,1≤i≤(l ′ ,Eu) d u,au,i (w) ∈ ECa l ′ (E) is a generic divisor in ECa l ′ (E), so D(w)|E covers an open subset of ECa l ′ (E), while w ∈ U . Let's have the contact points of the disjoint divisors D u,au,i (w), d u,au,i (w) and assume that q has got a r u,au,i -simple base point at d u,au,i (w). Since Z u = 1 for each vertex u ∈ V 1 we have r u,au,i = 1 if q has got a base point at d u,au,i (w), and r u,au,i = 0 if q hasn't got a base point at d u,au,i (w). Let's blow up X r u,au,i times along the divisors D u,au,i (w)|u ∈ V 1 , and let's denote the new singularity by X new and the new exceptional divisors by E u,i,new where u ∈ V 1 , 1 ≤ i ≤ (l ′ 1 , E u ) and r u,au,i = 1. Let's denote Z new = π * (Z) − u∈V1,1≤i≤(l ′ 1 ,Eu),ru,a u,i =1 E u,i,new , notice that Z new is the same cycle as Z, just on the blown up singularity. On the other hand if q runs over W , then the base point locus of the line bundle q moves in a r ′ -dimensional family. This means that if q ∈ W was enough generic, then there is an analytical subvariety q * ∈ W ′ ⊂ Pic l ′ 1,new (Z new ), such that for each L ∈ W ′ we have L = c l ′ 1,new (Z new )(y), where y = u∈V1,1≤i≤(l ′ 1 ,Eu),ru,a u,i =0 D u,au,i (w) for some w ∈ U , where w has the same base points as q and y ∈ ECa l ′ new (Z new ), and dim(W ′ ) ≥ d − r ′ . 
Now there are two cases, assume first that r u,au,i = 1 for all u ∈ V 1 , 1 ≤ i ≤ (l ′ 1 , E u ), this means that r ′ = (l ′ 1 , Z) and q * = O Znew , which means that r = h 1 (O Z ) − (l ′ 1 , Z) so we have d ≥ (Z, l ′ 1 ), however this contradicts the assumption d < (l ′ 1 , Z). So we can assume in the following that q * is not the trivial line bundle and so there are two independent sections s 1 , s 2 ∈ H 0 (Z new , q * ) reg such that |s 1 | ∩ |s 2 | = ∅. We know that for a generic point . This means that dim(T q * (W r+r ′ (Z new , l ′ 1,new ))) ≥ d − r ′ . Now let's recall the following theorem, which is the analouge of the similar theorem in the classical Brill-Noether theory about the Zariski tangent spaces of Brill-Noether Stratas: Theorem 9.0.6. Let X be an arbitrary resolution of a normal surface singularity and let's have a Chern class l ′ ∈ −S ′ on it and a cycle Z, and let's have a line bundle L ∈ W r (Z, l ′ ) \ W r+1 (Z, l ′ ). Let's look at the bilinear map µ : Let's use this theorem in our situation in the following: Let's look at the map µ : H 0 (Z new , q * )⊗H 0 (Z new , O Znew (K new +Z new )⊗(q * ) −1 ) → H 0 (O Znew (K new + Z new )), we have T q * (W r+r ′ (Z new , l ′ 1,new )) = ker(Im(µ)). Notice that on one hand we have H 0 (Z new , q * ) reg = ∅ obviously, and on the other hand H 0 (Z new , O Znew (K new + Z new ) ⊗ (q * ) −1 ) reg = ∅. Let's have an element (u ′′ , i 1 ) ∈ M u ′′ and another element (u ′′ , i 2 ) / ∈ M u ′′ , we know that the monodromy is 1-transitive on the set D u ′′ ,i , 1 ≤ i ≤ t u ′′ , so we get that there is a minimal element M ′ ∈ F such that (u ′′ , i 2 ) ∈ M ′ u ′′ and M ′ contains all the elements (u ′ , i), 1 ≤ i ≤ t u ′ . However this is a contradiction because M = M ′ are minimal elements in F , but M ∩ M ′ = ∅. In the other case assume that q M (w) = O Z ( u∈Y,1≤i≤tu D u,i (w)), where Y ⊂ |Z|, however in this case t u = (l ′ , E u ) for each vertex u ∈ Y , since M ⊂ Q and the line bundle q M (w) is constant if w ∈ U . Notice on the other hand that there are inidices 1 ≤ r u,1 , · · · , r u,(l ′ ,Eu) ≤ t u for every vertex u ∈ |Z| such that the line bundle O Z ( u∈|Z|,1≤i≤(l ′ ,Eu) D u,ru,i (w)) covers an open set in Im(c l ′ (Z)), which has got dimension (l ′ , Z). On the other hand the line bundle q M (w) = u∈Y,1≤i≤(l ′ ,Eu) D u,i (w) is constant and the line bundle O Z ( u∈|Z|\Y,1≤i≤(l ′ ,Eu) D u,ru,i (w)) covers a set of dimension at most (l ′ , Z |Z|\Y ) < (l ′ , Z) which is a contradiction. So we have proved our claim that for each vertex u ∈ |Z| M can't contain all the elements (u, i), 1 ≤ i ≤ t u . Next we claim that for each vertex u ∈ |Z| one has |M u | ≤ 1, indeed assume to the contrary that u ′ ∈ |Z| and |M u ′ | ≥ 2, we already know however that |M u ′ | < t u ′ . So we can assume now that (u ′ , 1), (u ′ , 2) ∈ M u ′ and (u ′ , 3) / ∈ M u ′ . Let's blow up now E u ′ in a generic point p, let's denote the new singularity by X new , the cycle Z new = π * (Z) − E new and let's look at the line bundle O Znew (Z new + K new − E new ). We know that (l ′ , E u ′ ) ≥ 2, so it follows from the minimality of the cycle Z that e Z (u ′ ) ≥ 3. It means by lemma7 that the line bundle O Znew (Z new + K new − E new ) hasn't got a base point on the exceptional divisor E u ′ . It means that if we move around in (c −ZK +Z (Z)) −1 (O Z (K + Z)), then the monodromy is 2transitive on the set D u ′ ,i , 1 ≤ i ≤ t u ′ . 
We give the sketch of this easy arguement: Let's have two indices 1 ≤ i 1 , i 2 ≤ t u ′ , we know that if w ∈ H 0 (O Z (K + Z)) is generic then the contact points d u ′ ,1 (w), d u ′ ,2 (w) are two generic points in E u ′ and also d u ′ ,i1 (w), d u ′ ,i2 (w) are two generic points in E u ′ . It means that we can have two generic points p, q ∈ E u ′ and two generic diferential forms w 1 , w 2 ∈ H 0 (O Z (K + Z)) reg such that d u ′ ,1 (w 1 ) = p, d u ′ ,2 (w 1 ) = q and d u ′ ,i1 (w 2 ) = p, d u ′ ,i2 (w 2 ) = q. The family of differential forms t · w 1 + (1 − t)w 2 , t ∈ C gives the desired two-transitivity of the monodromy. However this means that there is a minimal element M ′ ∈ F such that (u ′ , 1), (u ′ , 3) ∈ M ′ u ′ , but this is a contradiction because of M ′ ∩ M = ∅ and M ′ = M . Thus we have concluded that for each vertex u ∈ |Z| one has |M u | ≤ 1. Because the monodromy is 1 transitive we obviously know that there are at least two vertices u ∈ |Z|, such that |M u | = 1, let's denote two such vertices by u ′ , u ′′ . Let's denote the subset of |Z| consisting of vertices u ∈ |Z|, such that |M u | = 1 by G, we can assume that (u ′ , 1), (u ′′ , 1) ∈ M . Now let's blow up the singularity at d u ′ ,1 (w) for some generic w ∈ U , let's denote the new singularity by X new , the cycle Z new = π * (Z) − E new and let's look at the line bundle O Znew (Z new + K new − E new ), we claim that it has got a base point at d u ′′ ,1 (w). Indeed assume that it hasn't got a base point at d u ′′ ,1 (w), then we can find two generic points p ∈ E u ′ , q ∈ E u ′′ and w ∈ U , such that d u ′ ,1 (w) = p, d u ′′ ,1 (w) = q. Remark 9.0.7. In the general case it can happen that τ (Im(c l ′ (Z))) = 0, so the dual projective variety of the projective clousure Im(c l ′ (Z)) has got dimension less than h 1 (O Z ) − 1. Let's denote the set of base points of the line bundle O Z (K + Z) by B, for a generic section w ∈ H 0 (O Z (K + Z)) reg we can write |w| = v∈|Z|,1≤i≤av D v,i + D ′ where a v ≤ t v and D v,i are disjoint transversal cuts on the exceptional divisor We can also assume that w is such generic that the points p i ∈ Im(c l ′ (Z)) satisfy dim((c l ′ (Z)) −1 (p i )) = 0 and if we denote D i = (c l ′ (Z)) −1 (p i ), then D i consists of (l ′ , E) disjoint transversal cuts along the smooth part of E. If w is enough generic we can assume that the only 0-dimensional divisor D i ∈ |p i | satisfies We can also assume that the Abel map is submersion in the points D i ∈ ECa l ′ (Z) so T pi (Im(c l ′ (Z)) = Im(T Di (c l ′ (Z))) We know that the differential form w vanishes on the subspace Im(T Di (c l ′ (Z))), so the differential form w hasn't got a pole on the divisors D i , 1 ≤ i ≤ τ .
Tilted outer and inner structures in edge-on galaxies? Tilted and warped discs inside tilted dark matter haloes are predicted from numerical and semi-analytical studies. In this paper, we use deep imaging to demonstrate the likely existence of tilted outer structures in real galaxies. We consider two SB0 edge-on galaxies, NGC4469 and NGC4452, which exhibit apparent tilted outer discs with respect to the inner structure. In NGC4469, this structure has a boxy shape, inclined by $\Delta$PA$\approx$3$^{\circ}$ with respect to the inner disc, whereas NGC4452 harbours a discy outer structure with $\Delta$PA$\approx$6$^{\circ}$. In spite of the different shapes, both structures have surface brightness profiles close to exponential and make a large contribution ($\sim30$%) to the total galaxy luminosity. In the case of NGC4452, we propose that its tilted disc likely originates from a former fast tidal encounter (probably with IC3381). For NGC4469, a plausible explanation may also be galaxy harassment, which resulted in a tilted or even a tumbling dark matter halo. A less likely possibility is accretion of gas-rich satellites several Gyr ago. New deep observations may potentially reveal more such galaxies with tilted outer structures, especially in clusters. We also consider galaxies, mentioned in the literature, where a central component (a bar or a bulge) is tilted with respect to the stellar disc. According to our numerical simulations, one of the plausible explanations of such observed"tilts"of the bulge/bar is a projection effect due to a not exactly edge-on orientation of the galaxy coupled with a skew angle of the triaxial bulge/bar. Another important process, disc tilting, is a change of the overall angular momentum (vector) of the disc with time whereas disc warping implies that the direction of the angular momentum vector changes with galactocentric radius. In essence, disc tilting manifests itself as a slewing of the disc plane with time, with respect to its current orientation in the space. There are different processes which can cause galactic discs to tilt (see e.g. introduction in Earp et al. 2019): minor merging events (Ostriker & Tremaine 1975;Huang & Carlberg 1997;Kazantzidis et al. 2009;Bett & Frenk 2012), tumbling of dark matter haloes (Bailin & Steinmetz 2004;Bryan & Cress 2007;DeBuhr et al. 2012), and gas cooling onto the disc plane (Debattista et al. 2015;Earp et al. 2019). If a galaxy experiences an infall of a satellite, its disc also exhibits two other dynamical responses: warping and thickening (Huang & Carlberg 1997;Read et al. 2008;Kazantzidis et al. 2009;Sadoun et al. 2014). Tilting cannot be observed directly due to its extremely low tilting rate, even as compared to a typical galaxy rotation period (∼ 5 • Gyr −1 , Bailin & Steinmetz 2004;Yurin & Springel 2015). However, the existence of some distinctive features in the disc structure, such as conspicuous S-shape warps or strong disc thickening may indirectly point to disc tilting or the result of a satellite accretion. It was found that the angular momentum of the hot gas corona around galaxies, shock heated due to the falling of external gas into the dark matter's potential well, is usually misaligned with that of their stellar disc (van den Bosch et al. 2002;Roškar et al. 2010;Velliscig et al. 2015;Stevens et al. 2017). When the hot coronal gas cools, it settles into the disc (Fall & Efstathiou 1980;Kereš et al. 
2005) and contributes misaligned angular momentum to the disc, which results in the disc tilting (Debattista et al. 2015;Earp et al. 2019). Consequently, it was found that the principal axes of dark matter haloes in blue star-forming galaxies are generally misaligned with the principal axes of their discs (Yang et al. 2006;Wang et al. 2008). According to kinematic studies of nearby lenticular galaxies, misalignments between their stellar discs and gaseous components (from 40% to 60% of early-type galaxies have cool and ionized gas, Welch & Sage 2003;Sage & Welch 2006;Welch et al. 2010;Davis et al. 2011;Serra et al. 2012) are frequent (see e.g. Bertola et al. 1992;Kuijken et al. 1996;Sil'chenko & Moiseev 2006;Sil'chenko et al. 2009;Katkov et al. 2011;Davis et al. 2011;Katkov et al. 2013Katkov et al. , 2015Proshina et al. 2020). Davis et al. (2011) showed that misaligned gaseous subsystems are four times more frequent in field lenticular galaxies than in those residing in a cluster. Among strictly isolated nearby S0s, Katkov et al. (2014) found that half of all ionized-gas discs counterrotates the stars (see also Katkov et al. 2015). Recently, Sil'chenko et al. (2019) studied a sample of 18 S0 galaxies and found five galaxies with strongly inclined ionized gas discs. Some fraction of the gas observed in many S0s seems to be accreted in recent events, either due to tidal disruptions of massive gas-rich satellites (Kaviraj et al. 2009(Kaviraj et al. , 2011 or gas accretion from cosmological filaments (Thakar & Ryden 1996, 1998Kereš et al. 2005;Dekel & Birnboim 2006). Another interesting effect of disc tilting might be misalignment between inner and outer disc structures in edgeon galaxies. In this paper by inner and outer structures we mean that they are separated not only radially, but also vertically, so that an inner structure is enclosed in an outer structure. By tilting of a component we will call an observational effect where this component, as a whole, shows a different position angle than the general galaxy plane. Note here that we distinguish tilting from warping, which manifests itself in a vertical distortion of the disc starting at some radius, though both can have the same origin. The difference between disc warps and tilts is illustrated in Fig. 1. To this day, based on deep imaging, there has been no convincing observational evidence that the rather faint outer stellar structure in edge-on galaxies may exhibit tilting relative to the inner region in the sense that the inner and outer structures lie in different planes (see, however, our note below). Edge-on galaxies, as the best targets to study the vertical galaxy structure and disc tilting, were extensively studied in the optical and near-infrared (de Grijs & van der Kruit 1996;Kregel et al. 2002;Mosenkov et al. 2010;Comerón et al. 2011;Martín-Navarro et al. 2012;Bizyaev et al. 2014;Ciambur & Graham 2016;Comerón et al. 2018), including deep observations (Martínez-Delgado et al. 2010;Miskolczi et al. 2011;Morales et al. 2018;van Dokkum et al. 2019;Gilhuly et al. 2019), but no systematic difference between the position angles in the inner and faint outer galaxy regions was noticed, except for Mosenkov et al. (2020) where they studied the outer shape of edge-on galaxies and found several galaxies with tilted outer isophotes. The current paper is a follow-up study of Mosenkov et al. (2020). 
In the present paper, we consider a surface photometry of two SB0 galaxies with an apparent tilt of the outer structure with respect to the inner one as it represents a different disc which has a different position angle than the inner disc. NGC 4469 has been noted in Mosenkov et al. (2020), where they investigated deep imaging of 35 edgeon galaxies selected from the Halos and Environments of Nearby Galaxies (HERON) survey (Rich et al. 2019). We should note that M 82, the Cigar galaxy, which was also observed in the framework of the HERON survey, likewise shows tilted isophotes in the periphery (see Fig. 2), which may be associated with its spiral arms (Mayya et al. 2005) or a disc warp (Notni & Bronkalla 1983). Also, this galaxy shows evident signs of interaction with its neighbour M 81 (Yun et al. 1993). Taking into account the foregoing arguments and the peculiar nature of this well-known starburst galaxy (Rieke et al. 1980;Heckman et al. 1990), we decided not to consider M 82 in our study. Another galaxy with a prominent tilted outer structure, NGC 509, which was classified by Mosenkov et al. (2020) as an edge-on galaxy, is in fact not so highly inclined, therefore we rejected it as well. In this study we do not consider another remarkable edge-on galaxy from the HERON survey with an apparent tilt of the outer structure, NGC 4638, which exhibits a tilt of the bright stellar halo with respect to the disc. We are about to consider this galaxy among other edge-on galaxies with tilted bright haloes in our further paper, whereas in this study we mainly focus on tilted disc components. In addition to NGC 4469, NGC 4452 was chosen for this study incidentally, during a visual inspection of the EGIS catalogue of edge-on galaxies (Bizyaev et al. 2014). Using the technique described in Sect. 2 (see also Morales et al. 2018), we are likely to find more objects with such faint outer structures in our further studies. Kormendy (1982) discussed isophote twists in non-edgeon galaxies which can be the result of tidal effects due to a close (or close in the past) companion. The best example is the spheroidal galaxy NGC 205 which exhibits an isophote twist toward M 31. Michard & Marchal (1993) (see also Michard & Marchal 1994) found small twists of the isophotes for 10 nearly edge-on S0 galaxies, which are manifested in small trends in their major axis position angles and one of the Fourier harmonics responsible for the isophote asymmetry. They attributed this change of the position angle to a small difference of the major axis orientation between the central component (a bulge and/or bar) and an outer component (a disc or a halo). They estimate this tilt of a few degrees between the equatorial planes of these components, and claim that it is not associated with spiral arms and disc warps. Here we should note, however, that we revisited a photometry of these galaxies using the data preparation method described in Sect. 2 and established that all galaxies, except for NGC 4638, show either an inner twist of an inner triaxial structure (possibly, a bar) or 'a tilt' of the halo, but the estimated edge-on orientation of these galaxies is either questionable (e.g. NGC 2732 and NGC 7332) or not true (e.g. NGC 4036 and NGC 4251). It is evident that when there are triaxial components in a flat disc that has one tilt angle to the line of sight, then isophotes at different surface brightnesses will necessarily have different position angles. But how important this effect is when we consider almost edge-on galaxies? 
Do we observe edge-on galaxies with inner components whose position angles differ from that of the main disc? For a pedagogical purpose, in this paper we consider several not exactly edge-on galaxies (including NGC 509 in Appendix A) which exhibit a tilting of the inner structure similar to those reported by Michard & Marchal (1993). We show that these structures may, in fact, be bars which appear tilted due to the projection effect. This paper is organized as follows. In Sect. 2 we describe the data and image preparation for the two selected galaxies with possibly tilted outer discs. In Sect. 3, we consider in detail each of the selected galaxies. We discuss our results in Sect. 4 and make final conclusions in Sect. 5. THE DATA Here we describe the data for NGC 4452 and NGC 4469, which we use in our photometric analysis in Sect. 3. For NGC 4469, its deep image has been considered in the framework of the HERON survey (Rich et al. 2019; Mosenkov et al. 2020). However, here we analyse observations of both galaxies obtained from the Sloan Digital Sky Survey DR16 (SDSS, York et al. 2000; Ahumada et al. 2019) and the DESI Legacy Imaging Surveys DR8 (hereafter Legacy, Dey et al. 2019). This allows us to 1) explore the faint outer structure of these galaxies down to the same level of depth, and 2) study their colour maps. Also, the resolution in these surveys is better than in the HERON survey, although their imaging is shallower (below, however, we describe how we increase their depth by stacking galaxy images in several optical wavebands, as proposed in Miskolczi et al. 2011 and Morales et al. 2018). In principle, we could limit ourselves to using SDSS, but Legacy has deeper imaging and a better resolution (see below), therefore we decided to use both data sources. Our image preparation is organized as follows 1 . Using a special Python script to download and concatenate adjacent fields from the SDSS archive 2 , we retrieve galaxy images for both galaxies in the gri bands (the u and z bands are rather shallow, Fukugita et al. 1996, so we do not use them) in an automated regime. Also, we download the corresponding Point Spread Function (PSF) images (extracted from the respective psFields files) in the same wavebands. These tiny images describe the core PSF (seeing), but for the wings of the scattered light we use an extended PSF from Infante-Sainz et al. (2020), which we rotate to align the drift scanning direction of the image with the drift scanning direction of the PSF. We then merge the inner (core) and outer (extended) PSFs by normalising them in an annulus 5 in width where the two PSFs overlap. After that, the galaxy images in the g and i bands are resampled to the r band using the created extended PSFs by means of the pypher 3 package (Boucaud et al. 2016) suited for PSF matching. After that, for each galaxy image we create a segmentation map using SExtractor (Bertin & Arnouts 1996) and increase the size of the regions of the general mask by a factor of two, to ensure that our mask covers all faint scattered light in the image. Then we apply the created mask, fit the background with a first-order polynomial and subtract it from the initial image. Finally, we stack the galaxy images in all three bands; this increases the signal-to-noise ratio and, consequently, reduces the root-mean-square (rms) of the background. For this purpose we use the IRAF/IMCOMBINE procedure. 
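As a rough, simplified illustration of the masking, background-fitting and stacking steps just described, the Python sketch below fits a first-order (planar) background to the unmasked pixels of each band image and then averages the three frames. The file names, the pre-built masks and the use of a plain mean are placeholder assumptions; the actual pipeline additionally performs the PSF matching and uses IRAF/IMCOMBINE for the stacking.

```python
import numpy as np
from astropy.io import fits

def subtract_plane_background(image, mask):
    """Fit a first-order polynomial (a plane) to the unmasked pixels and
    subtract it from the image; `mask` is True for pixels to be ignored."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    good = ~mask
    # Design matrix for f(x, y) = a + b*x + c*y
    A = np.column_stack([np.ones(good.sum()), x[good], y[good]])
    coeff, *_ = np.linalg.lstsq(A, image[good], rcond=None)
    return image - (coeff[0] + coeff[1] * x + coeff[2] * y)

# Placeholder file names: one frame and one object mask per band.
frames = []
for band in "gri":
    data = fits.getdata(f"galaxy_{band}.fits").astype(float)
    mask = fits.getdata(f"galaxy_{band}_mask.fits").astype(bool)
    frames.append(subtract_plane_background(data, mask))

# Average stack: combining N background-subtracted frames reduces the
# background rms roughly by a factor of sqrt(N).
stacked = np.mean(frames, axis=0)
fits.writeto("galaxy_stacked.fits", stacked, overwrite=True)
```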
In the resulting image we select good (unsaturated) PSF stars and measure the total fluxes of these stars using a procedure from the Python package photutils. Then we cross-correlate the selected stars with the SDSS database to retrieve their apparent magnitudes in the r band. By so doing, we are able to perform a photometric calibration of the stacked image and estimate its zero-point in the r band. The procedure of stacking the three wavebands together allows us to improve the depth of the imaging by 1.1 mag, on average: 26.5 mag/arcsec 2 for the individual bands versus 27.6 mag/arcsec 2 for the stacked images, calculated at the 3σ level in a box of 10 × 10 arcsec 2 (we make use of this notation from Fliri & Trujillo 2016 throughout the paper). In our final step, we mask out all objects which do not belong to the target galaxy. To do this, we use a special Python script from the IMAN package which is based on the mtobjects tool 4 (Teeninga et al. 2015): it effectively identifies all objects in an image and is robust for masking even in an automatic mode without a special tuning of input parameters. The same pipeline is applied to the Legacy images in the grz bands, except that we do not do image resampling, as, at the moment, no extended PSFs have been produced for the Legacy imaging. We should note, however, that the image resampling is not a necessary step for enhancing the depth of the images and we only do this to be able to carry out photometric decomposition of the stacked SDSS galaxy images (see Sect. 3) using the matched PSFs. The image stacking of the Legacy data allows us to create final images with a depth of 28.3 mag/arcsec 2 calibrated to the r band. We also use observations in the NUV band from the Galaxy Evolution Explorer (GALEX, Martin et al. 2005; Bianchi et al. 2014) and observations in the W1 band (3.4 µm) from the Wide-field Infrared Survey Explorer (WISE, Wright et al. 2010) to estimate the NUV−r colour and the total stellar mass of the galaxies, respectively (see Sect. 3.2). The images were prepared using the above-described routines except for image stacking. For each galaxy, we carry out an IRAF/ELLIPSE (Jedrzejewski 1987) fit of the galaxy isophotes. We consider the distributions of the position angle PA, the ellipticity ε = 1 − b/a (where a and b are the semi-major and semi-minor axes, respectively) and the B 4 coefficient, which characterises the shape of the isophotes, see below. Also, we provide a 2D photometric model, consisting of several structural components, using the IMFIT code (Erwin 2015). We use its standard Levenberg-Marquardt algorithm for finding the optimal fit parameters. To estimate indicative uncertainties on our fit parameters, we vary the sky level using a set of Monte-Carlo simulations (10 attempts) with the sky rms determined in Sect. 2. For uniformity, to fit the 2D intensity distribution for each structural component of the galaxies under study, we use a Sérsic function (Sérsic 1963, 1968) with the following major-axis intensity profile: I(r) = I_e exp{−b_n [(r/r_e)^{1/n} − 1]}, where I_e is the surface brightness at the effective (half-light) radius r_e and n is the Sérsic index controlling the shape of the intensity profile. The function b_n is calculated via the polynomial approximation in Ciotti & Bertin (1999). In our fits we consider generalised ellipses (Athanassoula et al. 
1990; Erwin 2015) with the following free parameters: position angle PA, ellipticity ε, and C 0 , which controls the discy/boxy shape of the isophotes (when C 0 < 0 the isophotes look discy, they become boxy if C 0 > 0, and C 0 = 0 corresponds to pure ellipses). Note that one of the output Fourier coefficients of the IRAF/ELLIPSE procedure, the B 4 coefficient, also characterises the discyness/boxyness of the isophotes, so that when B 4 > 0 the isophotes are discy and they look boxy if B 4 < 0. Obviously, there is a link between C 0 and B 4 , but as they are derived using different approaches, the relation between them is not direct. Thus, we use both when needed. We denote by f the fraction of light in a given component according to our models. We should note here that we neglect the influence of dust on the estimated parameters of the photometric components. Although this assumption can be rather strong (as we can see below, NGC 4469 shows some traces of dust in the central galaxy region), the fraction of bolometric luminosity absorbed by dust in early-type (Hubble stage T < 0.5) galaxies is estimated to be 7.4 ± 0.8%, on average, versus 24.9 ± 0.7% for late-type (T ≥ 0.5) galaxies (Bianchi et al. 2018). Smith et al. (2012) also show that the stellar discs in S0s contain much less dust than the discs in late-type spirals. General information on the galaxies In Table 1, we list some general characteristics for both galaxies using their stacked images calibrated with the corresponding zero-points in the r band. We measured the total (asymptotic) magnitudes of the galaxies, along with the optical radii r 25 at the isophote 25 mag/arcsec 2 , using their curves of growth based on the performed IRAF/ELLIPSE fitting. As one can see, both galaxies are classified as SB0. The two galaxies do not differ significantly in luminosity and represent disc galaxies of quite modest mass and size (see e.g. Shen et al. 2003) with possible recent star formation: Kaviraj et al. (2007) showed that early-type galaxies with NUV−r < 5.5 might have experienced recent star formation (both NGC 4452 and NGC 4469 show colours close to this boundary value). However, NUV−r < 5.5 may also be related to the presence of evolved hot stars, the UV-upturn (Greggio & Renzini 1990). We also present the mean colours (g − z) for the inner region, outlined by the isophote of 24 mag/arcsec 2 , and for the outer region between the isophotes 24 mag/arcsec 2 and 26 mag/arcsec 2 (columns (9) and (10) in Table 1). As one can see, the outer region is slightly bluer than the inner one for both galaxies, but this difference is insignificant. Description of each galaxy Below we consider each of the selected galaxies in detail. We compare the created deep images for NGC 4452 (SDSS, Legacy) and NGC 4469 (HERON, SDSS, Legacy) and conclude that both galaxies show a tilted outer structure with respect to the inner one. Below we show the stacked images, isophote and colour maps, as well as the results of the isophote analysis, based on the Legacy data. We use the stacked SDSS images to perform multicomponent decomposition, mainly to derive the parameters of the outer component, which is of great interest in the current study. The HERON image for NGC 4469 can be found in fig. A1 in Mosenkov et al. (2020). 
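The following minimal Python sketch illustrates the two fitting ingredients introduced above: the Sérsic major-axis profile, with b_n approximated by the leading terms of the Ciotti & Bertin (1999) expansion, and the generalised-ellipse radius that produces discy (C 0 < 0) or boxy (C 0 > 0) isophotes. All numerical values are arbitrary and purely illustrative.

```python
import numpy as np

def b_n(n):
    # Leading terms of the Ciotti & Bertin (1999) expansion for the Sersic b_n.
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n) + 46.0 / (25515.0 * n**2)

def sersic_profile(r, I_e, r_e, n):
    """Sersic major-axis intensity profile I(r) = I_e exp(-b_n [(r/r_e)^(1/n) - 1])."""
    return I_e * np.exp(-b_n(n) * ((r / r_e) ** (1.0 / n) - 1.0))

def generalized_ellipse(theta, a, ell, c0):
    """Boundary radius of a generalised ellipse |x/a|^(c0+2) + |y/b|^(c0+2) = 1
    at azimuthal angle theta from the major axis, with b = a (1 - ell).
    c0 = 0 gives a pure ellipse, c0 > 0 boxy and c0 < 0 discy isophotes."""
    b = a * (1.0 - ell)
    m = c0 + 2.0
    return (np.abs(np.cos(theta) / a) ** m + np.abs(np.sin(theta) / b) ** m) ** (-1.0 / m)

# Example: a near-exponential (n ~ 0.9) disc profile and a discy isophote.
r = np.linspace(0.1, 100.0, 200)                       # arcsec, arbitrary
profile = sersic_profile(r, I_e=10.0, r_e=30.0, n=0.9)
theta = np.linspace(0.0, 2.0 * np.pi, 361)
rho = generalized_ellipse(theta, a=50.0, ell=0.7, c0=-1.0)
```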
Both galaxies show the same difference between the total magnitudes at the isophotes 25 mag/arcsec 2 and 28 mag/arcsec 2 , according to their curves of growth, of 0.09 mag, or an increase of 8.6% of the flux within the optical radius. NGC 4452 NGC 4452 is an SB0 edge-on galaxy close to the centre of the Virgo cluster. Kormendy & Bender (2012) classified it as S0c, according to their updated version of van den Bergh's (1976) parallel-sequence classification of galaxies. In Fig. 3, one can see an inner disc of high surface brightness with a sharp edge. The very thin inner structure gives us a hint that this galaxy is exactly edge-on or very close to it. The outer disc exhibits a remarkable tilt and a warping increasing towards the periphery. Michard & Marchal (1994) observe a twist of the isophotes which they attribute to a warped thick disc as in NGC 4762. Sandage & Bedke (1994) and Kormendy & Bender (2012) also find NGC 4452 similar to NGC 4762. Our deep images do not reveal any interesting LSB features around NGC 4452. In Fig. 4, one can see a steady increase of the position angle where the inner structure (disc or ring) is cut off: from r ≈ 37 to the periphery the position angle changes by up to 19 • . According to figure 8 in Kormendy & Bender (2012), who used a high-resolution HST ACS F475W image to study the photometry of this galaxy in great detail, the inner structure is very flat (ε ≳ 0.9), whereas due to the poor resolution of our stacked image the inner structure is smeared out and shows a lower flattening (ε ≈ 0.77). The advantage of this work is that we can study the galaxy out to very low surface brightness where the outer structure becomes very thick (up to ε ≈ 0.5 for r ≳ 190 in Fig. 4). The galaxy shows discy or oval isophotes at all radii (B 4 ≳ 0). Its outer structure is remarkably discy. Ferrarese et al. (2006) find that there is no significant colour difference between the different components of the galaxy, except that its inner (nucleus) disc is bluer than the outer discs. Our lower-resolution colour map in Fig. 5 shows that the inner structure is red whereas the colour distribution above and below the galaxy plane is slightly bluer. Also, we note a prominent colour gradient (see Fig. B1 in Appendix B): the galaxy gets bluer at larger radii from the centre. Consolandi et al. (2016) studied radial profiles of galaxies in the Coma and Virgo superclusters and found that early-type galaxies show no colour gradients. However, as no internal extinction correction was applied to NGC 4452, its colour gradient may naturally arise from a non-negligible presence of dust which affects the colour distribution especially along the galaxy plane, though we see no sign of dust in this galaxy. Also, we can clearly see that the red inner disc has a prominent flaring and warping. The two redder regions, located symmetrically with respect to the galaxy centre in the plane (depicted by two yellow ovals), probably point to a lens (a shelf-like feature in the surface brightness distribution) or, less likely, a ring structure. As shown in many studies, inner rings are not common in SB0 galaxies, whereas bars, often embedded in lenses of the same major-axis size, are often observed in such galaxies (Sandage 1961; Kormendy 1979, 1982; Buta et al. 2007; Kormendy 2013). Comerón et al. (2014) suggest, however, that about half of S0 galaxies have inner rings, though many of them do not have current or recent star formation (Comerón 2013). 
Also, lenses are seen in some galaxies that have no bars at all (e.g., NGC 1553: Freeman 1975; Kormendy 1984, 2013). Overall, as there is no detection of Hi for this galaxy in HyperLeda (Serra et al. 2012 give log M(Hi) < 7.27, where masses are in Solar units), the star formation is probably very low and should be consistent with what we observe in regular S0 galaxies.
Table 1. Main parameters of the selected galaxies. (1) Distance taken from the NASA/IPAC Extragalactic Database (NED), which is based on Tully & Fisher (1988) and calculated using the Tully-Fisher method (Tully & Fisher 1977). The method of surface brightness fluctuations (Tonry & Schneider 1988), applied by Mei et al. (2007) for 79 galaxies of the Virgo cluster, gives the mean distance D = 16.5 Mpc for "all galaxies (no W' cloud)", which is close to the distances adopted in this paper. (2) Morphological type from HyperLeda, (3), (4) semi-major axis of the isophote 25 mag/arcsec 2 in the r band, (5), (6) asymptotic magnitude in the r band taking into account the Galactic extinction from Schlafly & Finkbeiner (2011), (7) colour based on the GALEX and SDSS photometry (corrected for the Galactic extinction but not corrected for the internal dust attenuation), (8) total stellar mass computed from the galaxy luminosity in the WISE W1 band and using the prescriptions from Wen et al. (2013), (9) mean colour for the region between the isophotes 24 mag/arcsec 2 and 26 mag/arcsec 2 (corrected for the Galactic extinction but not corrected for the internal dust attenuation), (10) mean colour for the region within the isophote 24 mag/arcsec 2 (corrected for the Galactic extinction but not corrected for the internal dust attenuation).
As emphasized in Kormendy (1979) and based on the standard definitions and common illustrations in de Vaucouleurs and in Kormendy (2013), bars and lenses in early-type galaxies (1) both have very slowly decreasing surface brightnesses along their long axes (along any axis, for a lens) and then a sharp outer edge (they represent "shelves" in their surface brightness profiles); (2) bars have higher surface brightnesses than the lenses in which they are commonly embedded; (3) most often, the bar exactly fills the lens along its longest dimension. Using these facts, Kormendy & Bender (2012) clearly distinguish five Sérsic components in NGC 4452: a nuclear stellar cluster (n = 0.7), a small pseudobulge (n = 1.1), a bar at a skew angle (not along the line of sight and not perpendicular to it) with n = 0.18, a lens component (with n = 0.2), and an outer disc. In our decomposition we adopt the geometrical model by Kormendy & Bender (2012) for the four inner (thin or tiny in our SDSS image) components (the nucleus, the pseudobulge, the bar, and the lens), implying that their geometry was well fitted by Kormendy & Bender (2012) (due to the high resolution of the HST image used as compared to our SDSS image) and the very sharp edges of the bar and the lens. Also, we assume that the geometry of these components does not change significantly between the HST ACS F475W and SDSS r filters. In our fitting, we only fit the effective surface brightnesses and C 0 of these components. Note that we adopt the ellipticities of these components from Kormendy & Bender (2012) (see their figure 8) and keep them fixed in our decomposition. 
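To make the role of different position angles in such a model concrete, the toy sketch below renders a two-component image consisting of an inner disc plus a fainter outer disc whose major axis is rotated by a few degrees, using Sérsic components with generalised-ellipse isophotes. All parameter values are placeholders (they are not the values adopted from Kormendy & Bender 2012), and a real decomposition would additionally convolve the model with the PSF and minimise chi-squared, as IMFIT does.

```python
import numpy as np

def sersic_component(x, y, x0, y0, pa_deg, ell, c0, n, I_e, r_e):
    """One Sersic component with generalised-ellipse isophotes on pixel grids x, y."""
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)       # Ciotti & Bertin (1999), leading terms
    pa = np.deg2rad(pa_deg)
    xp = (x - x0) * np.cos(pa) + (y - y0) * np.sin(pa)  # coordinate along the major axis
    yp = -(x - x0) * np.sin(pa) + (y - y0) * np.cos(pa)
    m = c0 + 2.0
    r = (np.abs(xp) ** m + np.abs(yp / (1.0 - ell)) ** m) ** (1.0 / m)
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

ny, nx = 400, 400
y, x = np.mgrid[0:ny, 0:nx].astype(float)

# Inner disc plus a tilted, lower-surface-brightness outer disc (PA offset ~8 deg).
model = (sersic_component(x, y, 200, 200, pa_deg=90.0, ell=0.76, c0=-1.0,
                          n=0.9, I_e=50.0, r_e=40.0)
         + sersic_component(x, y, 200, 200, pa_deg=82.0, ell=0.66, c0=-1.5,
                            n=0.9, I_e=5.0, r_e=80.0))
```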
In contrast to the 1D fitting in Kormendy & Bender (2012), here we fit the overall 2D surface brightness distribution in NGC 4452; this allows us to take into account the flattening, position angle and discyness/boxyness parameters for the remaining components: an inner disc (which is especially visible in the colour map, see Fig. 5, as a red colour component in the main galaxy plane) and a slightly bluer tilted outer disc. In the cases of the nucleus and the bulge, which are too tiny in our SDSS image to be well resolved, the C 0 parameter is set to 0, that is, it implies pure elliptical isophotes. In total, our model consists of six components, four of which have the fixed geometry as derived in Kormendy & Bender (2012), and for the two outer components (discs) all parameters are left free during the fitting. The results of our decomposition (see Table 2 and Fig. 6) confirm that the two outer components are indeed discs: their Sérsic indices are close to 1. The radial distribution in an edge-on transparent exponential disc is described by the function r/h · K 1 (r/h), where h is the disc scalelength and K 1 is the modified Bessel function of the second kind (van der Kruit & Searle 1981). If one fits a Sérsic function to this distribution, the Sérsic index will be n ≈ 0.8 − 0.9; this is what we derive for both discs in NGC 4452. Also, their negative parameters C 0 show that these components have discy, diamond-like isophotes. Interestingly, the less extended disc coincides with the plane of the inner components, whereas the outer disc is inclined by ∼ 8 • with respect to the plane of the inner structure. The outer tilted disc has a rather large contribution to the total galaxy luminosity (≈ 30%), despite the very low deprojected central surface brightness µ face−on 0,r = 22.77 mag/arcsec 2 (for comparison, in the EGIS catalogue µ face−on 0,r = 21.56 ± 0.81 mag/arcsec 2 ). However, its radial scalelength is also unusual (h = 3.51 kpc) given its very low central surface brightness (see the correlation between the face-on central surface brightness and disc scalelength in figure 2 in Fathi 2010 and the discussion in van der Kruit & Freeman 2011). It should also be noted that this disc has a very small flattening ε = 0.66 as compared to ε = 0.79 ± 0.05 for the EGIS galaxy discs. The inner disc, however, has typical characteristics for edge-on discs: µ face−on 0,r = 20.78 mag/arcsec 2 , h = 1.75 kpc, and ε = 0.76. In Fig. 4 we superimpose the results of the IRAF/ELLIPSE fitting for our model. One can see that the model follows the observation fairly well. The increase of the position angle at large radii points to a warp of the outer tilted disc. Taking the above into account, the outer disc can be considered as a tilted warped thick disc, whereas the inner disc is a warped and flared thin disc. NGC 4469 According to HyperLeda, this galaxy is classified as SB0a. Like NGC 4452, it belongs to the Virgo cluster, but it is located far out from its centre (given the distance to the cluster of 16.5 Mpc, the projected distance from its centre to NGC 4469 is 1.05 Mpc versus 0.24 Mpc for NGC 4452). Ciambur & Graham (2016) studied its prominent "hump" X-structure, which represents a manifestation of a side-on bar in edge-on galaxies (see e.g. Combes et al. 1990; Lütticke et al. 2000). According to the results of their isophote fitting, it is clear that they did not consider the outer structure of this galaxy (see their figure A1): their isophotes do not extend deeper than ≈ 24 mag/arcsec 2 . Cortés et al. 
(2015) give a detailed description of NGC 4469 in their appendix and note slightly twisted outer isophotes with respect to its inner disc. However, we found that the name NGC 4469 should be replaced there by NGC 4569, as NGC 4469 does not appear anywhere else in their study, whereas NGC 4569 is indeed considered. Jo et al. (2018) detected appreciable extraplanar Hα and UV emission, which is associated with diffuse extraplanar ionized gas and extraplanar dust. This vertically extended dust can effectively scatter UV starlight and Hα from Hii regions located in the galactic plane. Hendy et al. (2016) measure a rather strong warp of the disc in NGC 4469 by estimating the warp degree using the areas of an outer isophote and its fitted ellipse. Mosenkov et al. (2020) detect boxy/oval (C 0 = 0.4) isophotes for the outer structure in NGC 4469. The HERON observation (Rich et al. 2019) of this galaxy clearly shows that the outer structure is tilted by several degrees with respect to the inner one. Here, using the SDSS and Legacy imaging, we confirm their result. Fig. 7 shows a thick boxy outer structure (a disc) which indeed has a prominent tilt. None of the deep observations which we considered for this galaxy reveals any LSB details near NGC 4469. Its inclination i, based on a patchy dust structure in the galaxy centre, was estimated in Mosenkov et al. (2020) as 88 • . However, here we re-estimated the inclination based on the GALEX NUV image, which, together with the optical images under study, shows a ring-like structure (it is also evident from a colour map, see below). For this ring we identified its abrupt edges and, based on its shape, we can use formula (A1), i = arccos(∆z/R), from Mosenkov et al. (2015), where R is the radius of the observed ring-like structure and ∆z is the maximum projected height of the inclined structure above (or below) the galaxy plane. We measured these parameters to be R = 100.8 and ∆z = 14.4 and computed the galaxy inclination as approximately 82 • . A visual inspection of the results of the isophote fitting in Fig. 8 reveals a change of the position angle of ≈ 8 • , starting from a radius of 75 up to the outermost isophotes. The inner region within 10 shows a round (ε ≈ 0, B 4 ≈ 0) component (bulge) which is surrounded by a prominent boxy (X-shaped) bar (B 4 < 0 up to r ≈ 70 ). At radii 75 to 150 , we can see discy isophotes, which are probably related to a highly inclined inner ring. The slightly tilted outer structure at r ≳ 200 exhibits conspicuous boxy isophotes (B 4 < 0). Our (g − z) map in Fig. 9 clearly exhibits some traces of patchy dust over the galaxy body (depicted by pink and yellow smudges), mainly in the central galaxy region. We can clearly see blue tips of an inner structure (possibly a ring) and a reddish X-structure, surrounded by a blueish outer disc. The colour profile in Fig. B1 shows a very red central region (the internal extinction is one of the main reasons for this peak) and a steady blueing with radius where the disc dominates. The comparison with NGC 4452 shows that the disc of NGC 4469 is slightly redder. Nevertheless, the Hi mass is estimated to be 4.9 × 10 7 M ⊙ (calculated as M HI = 2.36 × 10 5 × D 2 × F HI , where the total flux F HI in Jy km s −1 is taken from Table 1 in Taylor et al. 2012), and the emission lines in its SDSS spectrum are excited by star formation. Our decomposition model for this galaxy includes four primary components: bulge, bar, ring, and disc. 
For uniformity, each of them is described by a Sérsic function. The results of the decomposition are listed in Table 3. The quality of the model fit is shown in Fig. 10. In Fig. C1, we show a surface brightness profile with a superimposed decomposition model. According to our decomposition results, the bulge has an exponential profile (n ≈ 1.3) which makes it a pseudobulge (we admit, however, that due to internal dust extinction, the parameters of the bulge may be unreliable). The bright bar is significantly puffed up in the vertical direction (ε = 0.3). Its profile is close to exponential (n ≈ 0.8) and the isophotes are extremely boxy (C 0 = 9), which can be explained by the tremendous X-structure. Many or even most well-developed bars have gone through a phase of vertical buckling instability that results in a boxy-shaped or even X-shaped inner structure (Combes & Sanders 1981; Combes et al. 1990; Pfenniger & Norman 1990; Pfenniger & Friedli 1991; Raha et al. 1991; Athanassoula & Misiriotis 2002; Athanassoula 2005; Martinez-Valpuesta et al. 2006; Smirnov & Sotnikova 2019). The third component, which looks like a diamond-like disc, has a small Sérsic index (n ≈ 0.4) and shows a smooth truncation at µ r = 21.8 mag/arcsec 2 . We suppose that this component more closely resembles a ring (or, less likely, a tightly wound spiral). Its blue colour, unlike the reddish bar, shows that this component probably is not a lens (Herrera-Endoqui et al. 2017). The outer disc, as in the case of NGC 4452, is also close to exponential (n ≈ 0.8), but, in contrast, has boxy isophotes (C 0 = 0.5). Similar to NGC 4452, its luminosity fraction f is about 34%. The disc has a very low surface brightness (µ face−on 0,r = 23.11 mag/arcsec 2 ), while its radial extent is quite large for its luminosity (its scalelength h = 5.95 kpc is larger than that of our Milky Way, see van der Kruit & Freeman 2011 and references therein). According to our decomposition results, the difference between the position angles for the inner and outer structures is 3.5 • . In Fig. 8 we superimpose the results of the IRAF/ELLIPSE fitting for the model. As one can see, except for the central region, where the galaxy light is significantly affected by the dust attenuation (although the dusty patches were masked out during the fitting), the model follows the observation fairly well. DISCUSSION In this section we discuss possible reasons for the observed phenomenon of tilted discs in NGC 4452 and NGC 4469. Tilted disc or warps? To show that the tilted outer structures in NGC 4452 and NGC 4469 do not resemble ordinary warps of galactic discs, we collected the centre-lines of 13 galaxies with conspicuous optical warps from Reshetnikov et al. (2016). These centre-lines were created for isophotes of 25.5 mag/arcsec 2 in the r band. For the galaxies which we study in this paper, we also created centre-lines for the same isophote (see Fig. D1). Prior to that, their stacked images were interpolated in the masked regions. As one can see in Fig. 11, the centre-lines of the outermost isophotes for the galaxies with tilted outer structures are virtually straight lines (their position angles are constant with radius), except that NGC 4452 shows some warping far from the galaxy centre, whereas the centre-lines for the galaxies with warps are appreciably curved. In Fig. D1, we draw attention to the fact that the minor axes of the outermost isophotes for both galaxies are not perpendicular to their general galaxy planes (major axes of the inner isophotes). 
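A minimal sketch of how such a centre-line can be constructed and characterised is given below: for every column along the major axis, the flux-weighted mean height of the pixels above a chosen isophote level defines the centre-line, and a simple piecewise-linear fit on one side of the galaxy (flat inside a radius r_w, linear outside) returns a warp radius and an angle from the slope of the outer segment. This anticipates the warp-parameter comparison described next; the threshold, the assumed orientation (major axis along the image columns) and the slope-based angle, which only approximates the tip-based definition of Ψ used in the text, are all simplifying assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def centre_line(image, threshold):
    """Flux-weighted mean height of the pixels above `threshold` in every column;
    assumes the major axis runs along the x (column index) direction."""
    ny, nx = image.shape
    z = np.arange(ny) - 0.5 * ny
    zc = np.full(nx, np.nan)
    for ix in range(nx):
        col = image[:, ix]
        sel = col > threshold
        if sel.any():
            zc[ix] = np.sum(z[sel] * col[sel]) / np.sum(col[sel])
    return zc

def one_sided_warp(r, r_w, slope):
    # Piecewise-linear centre-line: flat inside r_w, linear (warped/tilted) outside.
    return np.where(r < r_w, 0.0, slope * (r - r_w))

def warp_parameters(r, zc):
    """Fit one side of the centre-line; returns (r_w, warp angle in degrees)."""
    good = np.isfinite(zc)
    (r_w, slope), _ = curve_fit(one_sided_warp, r[good], zc[good],
                                p0=[0.5 * r[good].max(), 0.0])
    return r_w, np.degrees(np.arctan(slope))
```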
A more straightforward way to compare the warps from Reshetnikov et al. (2016) and the "warps" in NGC 4452 and NGC 4469 is to compare the parameters of their warps, the warp angle Ψ (the angle measured between the galaxy plane and the line connecting the galaxy centre and the tips of the outer 25.5 mag/arcsec 2 isophote) and the radius r w where the warp begins (see Fig. 1). We created centre-lines for each of the isophotes plotted and then averaged them. This resulted in a final centre-line which was then fitted with a double piecewise linear function (see Reshetnikov et al. 2016 for details). In Fig. D1 we show the centre-lines for each of the galaxies by red lines. For NGC 4452, we obtained r w = (0.18 ± 0.02) r 25 , Ψ NW = −7.6 • (north-west, or left warp), Ψ SE = 10.2 • (south-east, or right warp). For NGC 4469, r w = (0.60 ± 0.03) r 25 , Ψ S = −5.5 • (south, or left warp), Ψ N = 5.6 • (north, or right warp). For comparison, in Reshetnikov et al. (2016), Ψ = 7.3 ± 6.4 • and r w = (0.90 ± 0.24) r 25 . We can see that the warps in both galaxies start at smaller radii (especially in NGC 4452) than observed in galaxies with prominent warps. Also, our models of NGC 4452 and NGC 4469, which do not imply warping of the outer discs but only their different position angles with respect to the inner components, describe the observations fairly well, including the major-axis profiles, as well as the PA, ellipticity and B 4 distributions. From this we can conclude that if the observed outer structures in NGC 4452 and NGC 4469 truly are genuine disc warps, they do not appear typical and deserve special attention in any event.
Figure 11. Centre-lines for the galaxies with conspicuous warps from Reshetnikov et al. (2016) compared to the two galaxies with tilted outer structures, which we consider in the present paper. All centre-lines were plotted for isophotes of 25.5 mag/arcsec 2 in the r band and approximated by splines. The x and y dimensions for each galaxy are normalized by its optical radius r 25 .
Triaxiality of the outer or inner structure? Another important issue regarding the tilted structures is the projection effect, when the galaxy inclination and a specific orientation of a triaxial component in this galaxy can give us the illusion that the inner or outer structure is tilted with respect to the outer or inner galaxy region, respectively. As NGC 4469 is not a pure edge-on galaxy and has a bright triaxial bar, the projection effect cannot be neglected. To show this, we performed numerical simulations to create mock galaxies with B/PS bulges (see details in Appendix E). By changing the galaxy inclination angle and the position angle of the bar (the angle between the line of nodes and the line of sight from the galaxy centre to the observer), we can indeed find a combination of these two angles which gives the appearance of a galaxy with a tilted inner structure (see Fig. 12, top lefthand panel). Interestingly, in the EGIS catalogue we found a galaxy, NGC 3869 (see Fig. 12, bottom lefthand panel), which resembles the mock galaxy (Model 1): its inner structure looks tilted and, moreover, its X-shaped structure appears asymmetric (note that in the image, below the boxy B/PS bulge, there is also a satellite shown by the blue circle). The distributions of position angle, ellipticity and B 4 with radius for NGC 3869 and the mock galaxy are presented in Fig. E1. Both galaxies show very similar distributions of these quantities. 
Therefore, it is highly likely that the asymmetric view and the tilt of the inner structure (bar) in NGC 3869 are, in fact, a consequence of a projection effect. A similar effect is observed in NGC 7332, where we can see a B/PS bulge with relatively bright ansae (Fig. 12, bottom righthand panel). In another of our simulations, Model 2 (Fig. 12, top righthand panel), a bar with a lens and bright ansae naturally arises in the course of the model evolution. By changing the galaxy inclination and the position angle of the bar, we can again obtain a projection where the inner structure is inclined with respect to the disc. NGC 7332 was noted by Michard & Marchal (1993) among 10 other galaxies with tilted inner structures. Most of these galaxies also harbour bars, therefore the observed tilting in these galaxies can probably be explained by the same projection effect, as in the case of NGC 3869 and NGC 7332. For a pedagogical purpose, in Appendix A we discuss another galaxy, NGC 509 from the HERON sample. This galaxy was initially selected for this paper as a galaxy with a tilted outer structure, but after a careful investigation we rejected it because its orientation is far from edge-on. As for NGC 4469, apart from the B/PS bulge it has an inner ring-like structure. As this structure should, in principle, be almost round (inner rings typically have axial ratios of 0.8 − 0.85, see e.g. Buta 1995; Kormendy 2013; Comerón et al. 2014), the orientation of its major axis should not strongly depend on the galaxy inclination and the position angle of the bar. We note, however, that not only bars, bulges and lenses can be triaxial structures (Kormendy 1979; Méndez-Abreu et al. 2010; Sotnikova et al. 2012; Costantin et al. 2018), but also large-scale discs. For example, in the lenticular galaxy NGC 5485 Sil'chenko (2016) found a non-circular stellar exponential disc with highly non-circular stellar rotation. Also, she detected two wide elliptical stellar rings in the unbarred lenticular galaxy NGC 502, which might have formed due to dry minor merging. Galaxies with such oval distortions or elliptical discs may not be so rare. The triaxiality of the stellar haloes is still an open question. For example, haloes in hybrid semianalytic plus N-body models by Bullock & Johnston (2005) are oblate (see also Bell et al. 2008), whereas Cooper et al. (2010) find triaxial haloes in their N-body only simulations of Milky Way-mass galaxies (see also Bailin et al. 2014). In the recent large-volume cosmological hydrodynamical simulation Illustris (Vogelsberger et al. 2014), Elias et al. (2018) found triaxial stellar haloes in galaxies with a wide range of the stellar halo fraction: 0.6 ≲ b/a ≲ 1.0 and 0.4 ≲ c/a ≲ 0.9 (in their notation, a, b, c are the major to minor principal axes of the inertia tensor for the stars). However, they concluded that the simulated haloes can be fairly oblate, with median b/a ∼ 0.9 and c/a ∼ 0.5. Monachesi et al. (2019) came to the same conclusion: they studied halo global properties in the Auriga cosmological magneto-hydrodynamical high-resolution simulations of Milky Way-mass galaxies (Grand et al. 2017): most of the Auriga haloes appeared to be oblate spheroids as well. For the Milky Way, it has been well established that the shape of the stellar halo is an oblate, rather than a triaxial, spheroid (Yanny et al. 2000; Larsen & Humphreys 2003; Jurić et al. 2008; Deason et al. 2011). 
Observationally, it is shown that the stellar haloes are moderately flattened spheroids (c/a ∼ 0.6 with a considerable range) with surface brightness distributions that are well described by an r^−α law, where the power slope α is usually found to be 2-4 (see e.g. Zibetti & Ferguson 2004; Harmsen et al. 2017), a picture supported by the cosmological hydrodynamical simulations of Font et al. (2011). In conclusion, if we assume that the outer component in NGC 4469 has a triaxial shape (although the observations show that the outer structures in disc galaxies are more likely to be oblate spheroids), whether it is a thick disc or a bright flattened exponential halo, its triaxiality may be responsible for the observed difference in the position angles of the inner and outer structures. It has been argued (see e.g. van der Kruit & Freeman 2011) that the baryons that end up in the inner parts of a galaxy disc get accreted with an angular momentum vector (1) that is different from that of the dark matter that is accreted from the cosmic web and that lands farther out, (2) that is different from that of baryonic gas that cools from the warm-hot intergalactic medium (see also Davé et al. 2001) and that lands in an outer disc, and (3) that is different from that of any satellites made of dark matter and baryons that may be accreted at large radii into the outer disc and/or the dark halo. Cosmological hydrodynamic simulations show that angular momentum vectors of the gas and dark matter in haloes tend to be misaligned (van den Bosch et al. 2002; Chen et al. 2003; Sharma & Steinmetz 2005; Sharma et al. 2012). All these processes can induce tilting of the outer structure with respect to the inner one. Therefore, misalignments of main discs with outer discs whose inclinations reflect the gravity of misaligned outer dark matter haloes are certainly to be expected. However, as observationally we find few galaxies with tilted outer structures, these mechanisms should not be very important for inducing the tilts which can be directly observed (see also below). Dark matter halo tumbling As shown in a number of early (Binney 1978; Barnes & Efstathiou 1987; Frenk et al. 1988; Warren et al. 1992; Jing et al. 1995; Thomas et al. 1998; Yoshida et al. 2000) and more recent works (Bryan et al. 2013; Zhu et al. 2016), dark matter haloes are generally triaxial (see the review by Zasov et al. 2017). The flattened potential of triaxial haloes can induce oval distortions and non-circular velocities (Hayashi et al. 2007) which can be quite strong in dwarf (Valenzuela et al. 2007) and low-surface brightness galaxies (Hayashi & Navarro 2006). The halo triaxiality can rapidly change over time (Vera-Ciro et al. 2011); this can impart a significant external torque on the galaxy disc. Dubinski & Chakrabarty (2009) performed N-body simulations and considered the inner part of the dark matter to be aligned with the disc, while the outer dark matter halo is misaligned and slowly tumbling. They found that the misaligned and tumbling outer halo causes a slowly changing external torque on the disc, which, in its turn, induces long-lived transient warps and tilting (see also Toomre 1983; Sparke & Casertano 1988; Ostriker & Binney 1989; Kuijken 1991; Debattista & Sellwood 1999; Jiang & Binney 1999), bar instabilities (Dubinski & Chakrabarty 2009) and spiral structure (see also Khoperskov et al. 2013; Khoperskov & Bertin 2015; Hu & Sijacki 2016). Therefore, this effect may be at play in explaining the observed tilting in the galaxies under study. 
Unfortunately, this mechanism cannot be verified observationally. Also, the recent study by Earp et al. (2019) showed that the dark matter halo by itself does not play a significant role in the disc tilting. Accretion S0 galaxies are often thought to be red and dead systems without current star formation. However, this claim has been challenged and evaluated in multiple studies of S0 galaxies (see e.g. recent studies by Kostiuk & Sil'chenko 2015; Sil'chenko et al. 2018; Proshina et al. 2019; Sil'chenko et al. 2019; Proshina et al. 2020). In a recent study, for example, Proshina et al. (2020) considered the ring S0 galaxy NGC 4513. They found that its ionized gas counter-rotates with respect to the stellar component and, therefore, was accreted. They concluded that its blue ring, demonstrating current star formation, is a result of tidal disruption of a massive gas-rich neighbour in the past, or it may be a consequence of a long star-formation event provoked by gas accretion from a cosmological filament. Strongly inclined ionized gas discs, observed in many S0 galaxies (Katkov et al. 2015; Sil'chenko et al. 2019), are further evidence of external gas supply. As Sil'chenko et al. (2019) claim, there is "a crucial difference of the accretion regime in S0s with respect to spirals: the geometry of gas accretion in S0s is typically off-plane". Recent accretion of a gas-rich satellite in the case of NGC 4452 seems to be impossible for two reasons. First, most galaxies in the central region of the Virgo cluster are gas-poor galaxies due to, for example, ram pressure stripping. Second, the velocity dispersion of galaxies in a cluster is very high (up to several thousand km/s). Therefore, a fast encounter does not result in merging, but in galaxy harassment (see Sect. 4.6). In the case of NGC 4469, which is located quite far from the Virgo centre, accretion of one or a few gas-rich dwarfs might have happened several Gyr ago. One such event could be responsible for the fairly blue ring in this galaxy. Another possible scenario to form tilted structures might be gas accretion via cosmological filaments, when thin filaments come into a galaxy at some angle and form an inclined gaseous disc (Thakar & Ryden 1996, 1998; Kereš et al. 2005; Dekel & Birnboim 2006) with ongoing star formation. However, in the presence of X-ray emitting hot gas in the Virgo cluster (especially in its central part where NGC 4452 resides), it is hard to explain how gas accretion via filaments can act in such harsh conditions. Tidally-induced tilts and warps We suppose that the most plausible explanation of the observed tilted discs in NGC 4452 and NGC 4469 is galaxy harassment, when high-speed encounters of galaxies in a cluster do not result in a merger but significantly affect the shape (and morphology) of the interacting galaxies, causing the discs to warp and tilt. Numerical and cosmological simulations support this mechanism (Vesperini & Weinberg 2000; Kim et al. 2014; Semczuk et al. 2020). As Kormendy & Bender (2012) noted, similar to NGC 4762, the outer disc in NGC 4452 is thick, warped, and tidally distorted. This could have been caused by a gravitational encounter with IC 3381. Note that twists of outer isophotes, observed in some galaxies, can be the result of tidal effects (see e.g. the case of NGC 205 in the vicinity of M 31, Kormendy 1982). Therefore, if we had observed the same galaxies edge-on, their outer structure would have appeared tilted with respect to the inner one. 
The tilt of the outer structure in NGC 4469 may have the same origin. Galaxy harassment has plausibly heated the outer disc of NGC 4469 (meanwhile the bar has almost inevitably heated the inner disc), with time now for all the heating to have phase-mixed around the galaxy, leaving the disc thicker and very likely tilted with respect to the inner disc. Circumstantial evidence for this mechanism in NGC 4452 and NGC 4469 is the presence of disc antitruncations in both galaxies (see Fig. C1). As shown in multiple studies (see e.g. Erwin et al. 2005, 2008), disc antitruncations can be produced by galaxy interactions. Why is the effect of tilting so rare (only three galaxies, including NGC 4638, have been noticed so far)? This is one of the few studies (see also Michard & Marchal 1993; Mosenkov et al. 2020) where we observationally confirm this phenomenon, whereas disc flaring, warping and lopsidedness are often seen in edge-on galaxies and have been extensively studied for decades (Sancisi 1976; Reshetnikov & Combes 1998; Jog & Combes 2009; Comerón et al. 2011; López-Corredoira & Molgó 2014; Reshetnikov et al. 2016). The main reason for the lack of observational evidence for tilted discs and haloes is that these outer structures are not well seen in regular images, whereas in deep observations we can better visualize faint structures, including tilted envelopes. We suppose that this phenomenon should be common in dense massive galaxy clusters. Presumably, the number of galaxies with tilted structures will increase as more and more deep observations of galaxies become available. CONCLUSIONS In this paper, we have considered two moderate-luminosity SB0 galaxies with prominent tilted outer structures. Using different optical images (SDSS and Legacy) and by means of stacking the galaxy images in different bands, we were able to increase the depth of the resultant images up to 28.3 mag/arcsec 2 for the Legacy survey and 27.6 mag/arcsec 2 for the SDSS survey. We performed isophote fitting and a complex photometric decomposition for each galaxy. Based on the obtained results, we report that these galaxies plausibly show a real tilting of the outer structure with respect to the inner region (i.e. the outer and inner structures are oriented in different, tilted planes), which cannot be explained by disc warps alone (though this effect can also be present). For NGC 4452 we obtained ∆PA ≈ 6 • , whereas for NGC 4469 the tilt is lower but still prominent (∆PA ≈ 3 • ). The outer discs in these galaxies have completely different shapes (discy in NGC 4452 and boxy in NGC 4469). We propose that some combination of a single high-speed encounter and the cumulative effect of galaxy harassment (in different proportions), which can distort and tilt the outer galaxy structure in a cluster, is a more plausible explanation of the observed phenomenon of disc tilting in both galaxies. Another important scenario is a misalignment of the triaxial (oblate) dark matter halo and the inner stellar disc, simply because of how dark matter haloes grow out of the cosmic hierarchy. In addition to that, the tumbling of the dark matter halo might be enhanced by a fast encounter several Gyr ago, which would cause the outer disc to tilt. Also, for NGC 4469, another explanation of the observed tilt, which cannot be rejected unless a thorough kinematic study is carried out, is a possible triaxiality of an outer disc (or flat halo) in a highly inclined (but not purely edge-on) galaxy. 
In spite of the slightly blue colour of the tilted outer structures in both galaxies (NGC 4469 also harbours a blueish inner structure, possibly a ring), which can be evidence of ongoing star formation from gas captured due to the accretion of several gas-rich dwarfs several Gyr ago, this mechanism seems far less likely to form a tilted outer structure (especially for NGC 4452) in the harsh environment of the Virgo cluster, where encounters are too fast for accretion to occur. In a future paper, we intend to study the nature of the main structural components in NGC 4469 and NGC 4452 using deep 21 cm Hi observations, optical spectral observations, and kinematic measurements. Using numerical simulations, we also showed that tilted inner structures may be well explained by a specific galaxy inclination, together with a specific orientation of a triaxial inner component (bar, bulge, lens) with respect to the observer. are operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of the HyperLEDA database (http://leda.univ-lyon1.fr/; Makarov et al. 2014). Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l'Espai (IEEC/CSIC), the Institut de Fisica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University. BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program "The Emergence of Cosmological Structures" Grant # XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant # 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant # 11433005). The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration. The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO. 
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This study makes use of observations made with the NASA Galaxy Evolution Explorer. GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author.
Figure A1. Stacked Legacy grz image for NGC 509 (left plot) and its superimposed isophotes from 20 to 26 mag/arcsec 2 (right plot).
Figure A2. The contrast-enhanced snapshot image of NGC 509 based on an SDSS colour image.
Figure A3. The results of the tilted-ring analysis for the SAURON stellar velocity field. The orientation of the kinematical major axis is shown by large squares in comparison with the photometric major axis (asterisks, the SDSS r band) and with the outer disc line of nodes (blue dashed line).
σ R = σ 0 · exp(−R/2h), where the σ 0 value follows from the condition on the Toomre parameter at R = 2h, Q(2h) = Q 0 . The first model considered (Model 1) has a relatively thin, cool (in a dynamical sense) and rather massive initial disc with z 0 = 0.025 h, Q 0 = 1.2 and M halo (R < 4h)/M d = 1. The second one (Model 2) has a thicker, hotter and lighter disc with z 0 = 0.05 h, Q 0 = 2.0 and M halo (R < 4h)/M d = 1.5. The simulations were carried out in a self-consistent manner, that is, both the disc and the halo were allowed to evolve under the influence of their mutual gravitational field. The equations of motion were solved by the fast numerical integrator gyrfalcON (Dehnen 2002) for about 6 Gyr with an adaptive time step with a maximal value of about 2·10^−3 Gyr. Both models developed a strong bar which has a B/PS bulge appearance if seen edge-on. The model images presented in Fig. 12 are obtained by rotations of the bar major axis and the disc plane, and by integrating the density of the luminous matter along the line of sight. In Fig. E1 we present the results of the IRAF/ELLIPSE fitting for NGC 3869 and the Model 1 image. We can observe a change of the position angle in the inner region, where the B/PS bulge dominates (B 4 < 0), by approximately 7 • for NGC 3869 and 10 • for Model 1. Also, note the similar behaviour of all light distributions for both galaxies. This paper has been typeset from a TeX/LaTeX file prepared by the author.
Figure B1. Radial colour profiles for NGC 4452 (lefthand plot) and NGC 4469 (righthand plot), which were created based on the Legacy data.
Figure C1. Photometric cuts along the major axes of the outer discs for NGC 4452 (left plot) and NGC 4469 (right plot) with the superimposed corresponding decomposition models.
Figure D1. Contour maps for NGC 4452 (left plot) and NGC 4469 (right plot). The red lines correspond to the best fits of the centre-lines, based on all isophotes, with piecewise linear functions. The outermost isophote in each plot is of 25.5 mag/arcsec 2 , with the blue line representing the centre-line of this isophote.
Figure E1. Results of the IRAF/ELLIPSE fitting for NGC 3869 (left plot) and Model 1 (right plot).
2020-06-29T01:00:47.852Z
2020-06-26T00:00:00.000
{ "year": 2020, "sha1": "f4c74db9ad0bf089c1c81a38f4e510d4632077e6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2006.14896", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f4c74db9ad0bf089c1c81a38f4e510d4632077e6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
233864405
pes2o/s2orc
v3-fos-license
Solvability of Discrete Helmholtz Equations We study the unique solvability of the discretized Helmholtz problem with Robin boundary conditions using a conforming Galerkin $hp$-finite element method. Well-posedness of the discrete equations is typically investigated by applying a compact perturbation to the continuous Helmholtz problem so that a "sufficiently rich" discretization results in a "sufficiently small" perturbation of the continuous problem and well-posedness is inherited via Fredholm's alternative. The qualitative notion "sufficiently rich", however, involves unknown constants and is only of asymptotic nature. Our paper is focussed on a fully discrete approach by mimicking the tools for proving well-posedness of the continuous problem directly on the discrete level. In this way, a computable criterion is derived which certifies discrete well-posedness without relying on an asymptotic perturbation argument. By using this novel approach we obtain a) new stability results for the $hp$-FEM for the Helmholtz problem, b) examples of meshes such that the discretization becomes unstable (the stiffness matrix is singular), and c) a simple checking Algorithm MOTZ "marching-of-the-zeros" which guarantees in an a posteriori way that a given mesh is certified for a stable Helmholtz discretization. Introduction In this paper, we consider the numerical discretization of the Helmholtz problem for modelling acoustic wave propagation in a bounded Lipschitz domain Ω ⊂ R d , d = 1, 2, with boundary Γ := ∂Ω. Robin boundary conditions are imposed on Γ and the strong form is given by seeking u s.t. −∆u − k 2 u = f in Ω, ∂u/∂n − i k u = g on Γ. Here, n denotes the outer normal vector and k ∈ R\ {0} is the wave number. It is well known that the weak formulation of this problem is well posed; the proof is based on Fredholm's alternative in combination with the unique continuation principle (u.c.p.) (see, e.g., [Lei86]). The restriction to Robin boundary conditions is only to fix the ideas. Our method and theory apply verbatim to any other boundary condition of the type ∂u/∂n − i T k u = g on Γ for some dissipative linear operator T k . The generalization to mixed boundary conditions which are imposed only on a subset of Γ with positive surface measure is straightforward as well: only the initialisation of Algorithm 1 (see §4) has to be restricted to the degrees of freedom which lie on the boundary part with mixed boundary conditions. We consider the discretization of this equation (in variational form) by a conforming Galerkin method. The proof of well-posedness for this discretization goes back to [Sch74] and is based on a perturbation argument: the subspace which defines the Galerkin discretization has to be sufficiently "rich" in the sense that a certain adjoint approximation property holds. However, this adjoint approximation property contains a constant which is a priori unknown. The existing analysis gives insights into how the parameters defining the Galerkin space should be chosen asymptotically but does not answer the question whether, for a concrete finite dimensional space, the corresponding Galerkin discretization has a unique solution. In one spatial dimension on quasi-uniform grids the condition kh ≤ C res , for some sufficiently small minimal resolution constant C res = O (1), ensures unique solvability of the Galerkin discretization. This result is proved for linear finite elements on uniform meshes with C res = 1 in [IB95, Thm. 4] and for the hp version of the finite element method on uniform 1D meshes with C res < π in [IB97, Thm. 
3.3], i.e., on uniform grids employing hp finite elements the Galerkin discretization has a unique solution if kh ≤ C res < π. On the other hand, for piecewise linear finite elements in two or more spatial dimensions on a star-shaped domain with smooth boundary or a convex polygonal domain, a straightforward application of Schatz' perturbation argument leads to the much more restrictive condition k 2 h ≤ C res for C res sufficiently small. For piecewise linear elements in spatial dimension two or three this result can be improved: if the computational domain Ω is a convex polygon/polyhedron, the condition "k 3 h 2 ≤ C res for sufficiently small C res = O (1)" ensures existence and uniqueness (see [Wu14, Thm. 6.1]). For the hp-finite element method on an analytically bounded or convex polygonal domain, the analysis in [MS10], [MS11] leads to the condition "kh/p ≤ C res for C res sufficiently small provided the polynomial degree satisfies p ≳ log k". Since no sharp bounds for the resolution constant C res are available for general conforming finite element meshes in 2D and 3D, such estimates have a merely qualitative and asymptotic character. This drawback was the motivation for the development of many novel discretization techniques by either modifying the original sesquilinear form or employing a discontinuous Galerkin discretization. Such discretizations have in common that unique solvability of the discrete problem does not rely on the Schatz argument. Unique solvability can therefore be established under less restrictive conditions; we mention [BWZ16, Wu14] for an analysis of a continuous interior penalty method with piecewise linear elements. In [Wu14, Cor. 3.5] unique solvability is established for general polygonal/polyhedral domains for any k > 0 and h > 0. In [FW11] interior penalty hp-DG methods are analysed and unique solvability for polygonal/polyhedral star-shaped domains is shown for any k > 0 and h > 0 under certain conditions on the penalty parameters, see [FW11, Thm 3.2] for details. Finally, in [CQ17] a least-squares approach is analysed, establishing unique solvability for domains for which a priori estimates of the continuous problem are available, see [CQ17, Thm 2.4]. We do not go into the details of such methods because our focus in this paper is on the question whether the conforming Galerkin discretization of the Helmholtz problem with Robin boundary conditions can lead to a system matrix which is singular and how to define a computable criterion to guarantee that, for a given mesh, the conforming Galerkin discretization has a unique solution. Such an approach can also be viewed as a novel a posteriori strategy: the goal is not to improve accuracy but to guarantee unique solvability of a conforming Galerkin discretization without relying on a resolution condition. The given finite element mesh is the input of our new algorithm called MOTZ ("marching-of-the-zeros") which analyses the mesh, based on a stability criterion which we will develop in this paper. If the result is "certified" then the piecewise linear, conforming finite element discretization of the Helmholtz problem with Robin boundary conditions leads to a regular system matrix. Otherwise, the triangulation is marked by MOTZ as "critical" and we will present a local mesh modification algorithm "MOTZ flip" with the goal of obtaining a modified mesh which is certified by MOTZ. In this way, MOTZ can be regarded as an a posteriori stability indicator. 
In contrast there exist a posteriori error estimators for the Helmholtz problem in the literature [DS13], [SZ15], [IB01], [CFEV21] which take as input a computed discrete solution and estimates the arising error in order to mark (in an ideal situation) those elements which contain the largest error contributions. However, all these error estimators are fully reliable and efficient only if a resolution condition is satisfied and the discrete system is well posed. They fail if the discrete solution does not exist and differ from our approach with respect to their goal (approximability in contrast to well-posedness). Also the adaptive algorithm in [BBHP19] is essentially of this error estimator type -however has the feature that the mesh is refined uniformly if the discrete solution does not exist. To the best of our knowledge, our approach is the first one which does not assume such conditions to hold and refines adaptively for the goal to improve stability. The paper is organized as follows. In Section 2, we formulate the Helmholtz problem in a variational setting and recall the relevant existence and uniqueness results. Then, the conforming Galerkin discretization by hp-finite elements is introduced; by using a standard nodal basis the discrete problem is formulated as a matrix equation of the form A k u = r. Since the resulting system is finite dimensional it is sufficient to prove uniqueness of the homogeneous problem in order to get discrete solvability. We formulate this condition and obtain in a straightforward manner that the discrete homogeneous solution must vanish on the boundary Γ. In Section 3 we discuss the invertibility of A k for different scenarios. First, we prove in the one-dimensional case, i.e., d = 1, that A k is regular for any conforming hp-finite element space without any restrictions on the mesh size and the polynomial degree p. In contrast, we show in Section 3.2 that the matrix A k can become singular for two-dimensional domains at certain discrete wave numbers for simplicial/quadrilateral meshes. We present an example of a regular triangulation of the square domain (−1, 1) 2 such that the conforming piecewise linear finite element discretization of the Helmholtz problem with Robin boundary conditions leads to a singular system matrix A k . This generalizes the example in [MPS13,Ex. 3.7] where the finite element space satisfies homogeneous Dirichlet boundary conditions to the case of Robin boundary conditions. Next, we discuss conforming finite element discretizations on quadrilateral meshes and show that for rectangular domains and tensor product quadrilateral meshes the matrix A k is always invertible for polynomial degree p ∈ {1, 2, 3} by applying a local argument inductively. We also show that there are mesh configurations where this local argument breaks down for p = 4. Motivated by the results in Section 3.2, we present in Section 4 the aforementioned algorithm MOTZ. For the case that the outcome is "critical" we also present two companion algorithms which refine or modify the given mesh such that the MOTZ algorithm returns "certified". The section is complemented by numerical experiments and a short discussion on the behaviour of the inf-sup constant before and after the modification of a "critical" mesh to a "certified" mesh. Setting Let Ω ⊂ R d , d = 1, 2 be a bounded Lipschitz domain with boundary Γ := ∂Ω. 
Let L 2 (Ω) denote the usual Lebesgue space with scalar product denoted by (·, ·) (complex conjugation is on the second argument) and norm · L 2 (Ω) := · := (·, ·) 1/2 . Let H 1 (Ω) denote the usual Sobolev space and let γ : H 1 (Ω) → H 1/2 (Γ) be the standard trace operator. We introduce the sesquilinear forms where ·, · Γ denotes the L 2 scalar product on the boundary Γ. Throughout this paper we assume that the wave number k satisfies The weak formulation of the Helmholtz problem with Robin boundary conditions is given as follows: For f ∈ L 2 (Ω) and g ∈ H 1/2 (Γ), we define F = (f, ·) + g, γ · Γ ∈ (H 1 (Ω)) . We seek u ∈ H 1 (Ω) such that In the following, we will omit the trace operator γ in the notation of the sesquilinear form since this is clear from the context. Well-posedness of problem (1) is proved in [Mel95, Prop. 8.1.3]. Proposition 2.1 Let Ω be a bounded Lipschitz domain. Then, (1) is uniquely solvable for all F ∈ (H 1 (Ω)) and the solution depends continuously on the data. We employ the conforming Galerkin finite element method for its discretization (see, e.g., [Cia78], [BS08]). For the spatial dimension, we assume that Ω ⊂ R d is an interval for d = 1 or a polygonal domain for d = 2. We consider either conforming meshes K T (i.e., no hanging nodes) composed of closed simplices or conforming meshes K Q composed of quadrilaterals. The set of all vertices is denoted by N , i.e., for M ∈ {T , Q} For d = 2, we denote the set of all edges by E and For p ∈ N, we define the continuous, piecewise polynomial finite element space by where P p (resp. Q p ) is the space of multivariate polynomials of maximal total degree p (resp. maximal degree p with respect to each variable). The reference elements are given by For M ∈ {T , Q} and K ∈ K M , let φ K : K M → K denote an affine pullback. Moreover for p ≥ 1, we denote byΣ p a set of nodal points in K M unisolvent on the corresponding polynomial space which allow to impose continuity across faces. The nodal points on K ∈ K M are then given by lifting those of the reference element: The set of global nodal points is given by and we denote by (b z,p ) z∈Σ p ⊂ S p M the standard Lagrangian basis. Remark 2.2 For M ∈ {T , Q}, we write S or S M short for S p M and Σ short for Σ p if no confusion is possible. If p = 1 then the two sets N and Σ 1 are equal and we use the notation N for the set of degrees of freedom. The Galerkin finite element method for the discretization of (1) is given by: (2) The basis representation allows us to reformulate (2) as a linear system of equations. The system matrices K = (α y,z ) y,z∈Σ , M = (µ y,z ) y,z∈Σ , B = (β y,z ) y,z∈Σ are given by α y,z := (∇b z,p , ∇b y,p ) , µ y,z := (b z,p , b y,p ) , β y,z := b z,p , b y,p Γ and the right-hand side vector r = (ρ z ) z∈Σ by ρ z = F (b z,p ). Then, the Galerkin finite element discretization leads to the following system of linear equations where u = (u z ) z∈Σ . We start off with some general remarks. It is well known that the sesquilinear form a k (·, ·) satisfies a Gårding inequality and Fredholm's alternative tells us that well-posedness of (1) follows from uniqueness. Similarly, the finite dimensional problem (2) is well posed if the problem has only the trivial solution. We note that if we choose v = u and consider the imaginary part of (4), we get 0 = Im a k (u, u) = −k u 2 Γ =⇒ u| Γ = 0. 3 Regularity of the discrete system matrix A k This section covers three different topics concerning the discretization of the Helmholtz equation. 
First, in Section 3.1 we analyse the conforming Galerkin finite element method with polynomials of degree p ≥ 1 in spatial dimension one. Next, in Section 3.2, we present a singular two-dimensional example for piecewise linear finite elements on a triangular mesh. Finally, in Section 3.3 we consider structured tensor-product meshes in spatial dimension two. The one-dimensional case We prove that the conforming Galerkin finite element method for the one-dimensional Helmholtz equation is well posed for any k ∈ • R. Theorem 3.1 Let Ω ⊂ R be a bounded interval and consider the Galerkin discretization (2) of (1) with conforming finite elements. Then, for any k ∈ Proof. We assume that Ω = (−1, 1) (the result for general intervals follows by an affine transformation). Since the problem (2) is linear and finite dimensional it suffices to show that the only solution of the homogeneous problem (4) is u = 0 . From (5) we already know that u (−1) = u (1) = 0. We assume that the intervals The function u N := u| K N can be written as As test functions, we employ the functions b y,p , y ∈ Σ p K N \ {x N −1 } and obtain from (4) This is a p × p linear system and our goal is to show that it is regular for all k ∈ • R so that u z = 0 follows for all z ∈ Σ p K N \ {1}. By an affine transformation this is equivalent to the following implication wherek = 2|K N |k and Let u ∈ P p 0) satisfy the assumption in (6). For ω (0 (x) := 1 + x we choose v = ω (0 u ∈ P p (0 as a test function. We integrate by parts, use the fact 2 Re(uu ) = (|u| 2 ) , and employ the endpoint properties of u and v to obtain This implies that u N = u| K N = 0 and we may proceed to the adjacent interval K N −1 . Since u (x N −1 ) = 0 we argue as before to obtain u| K N −1 = 0. The result follows by induction. A singular example in two dimensions In this section, we will present an example which illustrates that Robin boundary conditions are not sufficient to ensure well-posedness of the Galerkin discretization (although the continuous problem is well posed). For this, we first introduce the definition of a weakly acute angle condition on a triangulation. Definition 3.2 Let d = 2 and T a conforming triangulation of the domain Ω. For an edge E ∈ E Ω , we denote by τ − , τ + the two triangles sharing E. Let α − and α + denote the angles in τ − , τ + which are opposite to E. We define the angle We say that the edge E satisfies the weakly acute angle condition if α E ≤ π. The following observation is key for the proof of the upcoming Lemma 3.4, as well as for the derivation of the Algorithm MOTZ ("marching-of-the-zeros") presented in Section 4. , E satisfies the weakly acute angle condition). Lemma 3.4 Let α ∈ (0, 1) and define k c as Then for any k ∈ • R the corresponding Galerkin discretization (2) of (1) with conforming piecewise linear elements on the mesh T α is well posed if k = ±k c . For k = ±k c the system matrix is singular and its kernel has dimension one. Proof. We construct an explicit non-trivial solution u h ∈ S 1 T to the homogeneous equations. To that end, note that by (5) we have u Γ i = 0 for i = 1, . . . , 4. We seek a non-trivial solution to Figure 1: Mesh resulting in a non-trivial solution of the Helmholtz equation. Our strategy is the following: We first test with the degrees of freedom associated to the boundary. This allows us to construct a candidate for a non-trivial solution. Next, we test with the interior degrees of freedom. 
This allows to show the existence of a critical wave number k c as stated in the present lemma, which can be explicitly calculated. Furthermore, we verfiy that the kernel of the system matrix for k = ±k c is in fact one dimensional. Finally, we show that for any other k = ±k c the system matrix is regular. We test with the hat functions b Γ i for i = 1, . . . , 4 associated to the degrees of freedom on the boundary in (8). Due to their support, they do not interact with the hat function b Ω 5 . We start with the hat function b Γ 1 . For the construction of a candidate for a non-trivial solution, the interactions with b Γ 2 and b Γ 4 are redundant, since u Γ 2 = u Γ 4 = 0. We are therefore left with the interactions with b Ω 1 and b Ω 2 . Due to the weakly acute angle condition and Lemma 3.3, we find that a 0,k (b Γ 1 , b Ω 1 ) < 0 for all k ∈ R \ {0}. Regarding b Ω 2 , we find due to the symmetry of the mesh that a 0, . The same argument holds true for testing with the other hat functions associated to the boundary. Therefore, any solution u h to (8) must . We now test with the hat function b Ω 1 , which interacts with itself, b Ω 2 and b Ω 4 as well as b Ω 5 . For some constants a = a(α) > 0 and b = b(α) > 0 we have It is easy to verify that the edge [P Ω 1 , P Ω 2 ] satisfies the weakly acute angle condition for any 0 < α < 1. Therefore, The same arguments hold true for the other test functions b Ω 2 , b Ω 3 and b Ω 4 , yielding the same values. Regarding the test function b Ω 5 , the symmetry of the mesh and the satisfied weakly acute angle conditions imply (9) Below we will show b − 2d > 0 for any α ∈ (0, 1). This allows to construct exactly two solutions k ∈ R \ {0} such that a + 2c − k 2 (b − 2d) = 0, which in turn lets the right-hand side in (9) vanish so that the vector (1, −1, 1, −1, 0) T is a solution of the homogeneous equations. To that end, let U denote the upper quadrilateral with corners P Γ 4 , P Ω 4 , P Ω 5 , P Ω 1 and let B denote the bottom quadrilateral with corners P Γ 1 , P Ω 1 , P Ω 5 , P Ω 2 . Furthermore, let L denote the left triangle with corners P Γ 1 , P Γ 4 , P Ω 1 . Due to the support properties of b Ω 1 and b Ω 2 we find with the above notation that where, in the last equation, we used the fact that U (b Ω 1 ) 2 dx = B (b Ω 2 ) 2 dx, which holds again due to the symmetry of the grid, which proves b − 2d > 0. In fact, tedious but elementary calculations yield that the α dependent quantities a, c, and b − 2d are given by which yields equation (7) for the critical wave number It is left to show that the vector (1, −1, 1, −1, 0) T is in fact (up to scaling) the only nontrivial solution to the homogeneous equations for k ∈ R \ {0}. By the above arguments, any other candidate has to be of the form (1, −1, 1, −1, µ) T or (0, 0, 0, 0, µ) T for some µ = 0. We first show that (0, 0, 0, 0, µ) T for some µ = 0 can never be a solution to the homogeneous equations: Assume the contrary, then similarly as in equation (9), we find µ(−e, −e, −e, −e, f ) T has to be the zero vector for some µ = 0, which is impossible, since e > 0 for any k ∈ R \ {0}. Remark 3.5 The above example shows that there exist meshes in spatial dimension two, for which unique solvability of the discretized equations does not hold. The constructed solution has an oscillatory behaviour which can not be ruled out by the boundary hat functions. More generally, the same arguments hold true if one chooses a regular 2n polygon, with another rotated one inside, analogous to the above example. 
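The existence of critical wave numbers as in Lemma 3.4 can also be probed numerically. The following NumPy sketch is not taken from the authors' code: it assembles the piecewise linear matrices K, M and B for a one-dimensional mesh (where, by the result of Section 3.1, A_k = K − k²M − ikB is regular for every non-zero k) and scans the smallest singular value of A_k over a range of wave numbers. Applied to matrices assembled by any P1 code on a two-dimensional mesh such as T_α, the same scan would show σ_min(A_k) dipping towards zero near k_c; the 1D assembly here only illustrates the mechanics and the choice of mesh and scan range is illustrative.

```python
import numpy as np

def assemble_1d_p1(nodes):
    """Assemble stiffness K, mass M and boundary matrix B for P1 elements
    on a 1D mesh of an interval with Robin conditions at both endpoints."""
    n = len(nodes)
    K = np.zeros((n, n))
    M = np.zeros((n, n))
    B = np.zeros((n, n))
    for e in range(n - 1):                       # element [x_e, x_{e+1}]
        h = nodes[e + 1] - nodes[e]
        Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # local stiffness
        Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])     # local mass
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += Ke
        M[np.ix_(idx, idx)] += Me
    B[0, 0] = B[-1, -1] = 1.0        # the "boundary" consists of the two endpoints
    return K, M, B

def system_matrix(K, M, B, k):
    """A_k = K - k^2 M - i k B, the matrix of the sesquilinear form a_k."""
    return K - k**2 * M - 1j * k * B

def sigma_min_scan(K, M, B, k_grid):
    """Smallest singular value of A_k for each k in k_grid; a dip towards
    zero indicates a (nearly) singular discretization at that wave number."""
    return np.array([np.linalg.svd(system_matrix(K, M, B, k),
                                   compute_uv=False)[-1] for k in k_grid])

if __name__ == "__main__":
    K, M, B = assemble_1d_p1(np.linspace(-1.0, 1.0, 101))
    ks = np.linspace(0.5, 30.0, 300)
    sigmas = sigma_min_scan(K, M, B, ks)
    print("minimal sigma_min over the scanned range:", sigmas.min())
```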
For N ∈ N consider a mesh constructed by N 2 scaled versions of the mesh considered in Lemma 3.4 as depicted in Figure 3. Let T macro denote the corresponding mesh. The mesh size h is then given by h = 2N −1 . A simple scaling argument together with Lemma 3.4 and Remark 3.6 allows to construct non-trivial solutions the corresponding Galerkin discretization (2) of (1) with conforming piecewise linear elements: Lemma 3.7 Fix α ∈ (0, 1). Letk c denote the critical wave number as in equation (7). Consider a conforming Galerkin discretization (2) of (1) with piecewise linear elements on the mesh T macro . Then, if kh = 2k c holds true, the Galerkin discretization is not uniquely solvable. Proof. A scaling argument together with Lemma 3.4 and Remark 3.6 allows to construct a global singular solution as follows: On each of the N 2 sub-quadrilaterals one chooses the non-trivial solution constructed in the proof of Lemma 3.4. It is easy to see that with the condition kh = 2k c this global function is then also a non-trivial solution to the global system of homogeneous equations. Structured quadrilateral grids The present section is devoted to the study of conforming Galerkin discretizations using quadrilateral elements in spatial dimension two. We employ structured tensor-product meshes. The first result concerns a p-version on one quadrilateral element. Throughout this section the following notation is employed: For vectors a = (a 1 , a 2 ), b = (b 1 , b 2 ) in C 2 we use the notation a · b := a 1 b 1 + a 2 b 2 without complex conjugation. Furthermore, · 2 denotes the Euclidean 2-norm. Theorem 3.8 Let Ω = K ⊂ R 2 , where K denotes the reference quadrilateral with vertices (±1, ±1) T . Consider the Galerkin discretization (2) of (1) with polynomials of degree p ≥ 1 on K. Then, for any k ∈ • R the matrix A k in (3) is regular. Proof. As in the proof of Theorem 3.1 it suffices to show that any solution for the homogeneous problem is already trivial. Throughout the proof, we denote by S the finite element space, i.e., the space of polynomials of total degree p on K. Let u ∈ S be a solution to the homogeneous equations. We again have u = 0 on ∂ K, see equation (5). Therefore, u ∈ S solves (∇u, ∇v) − k 2 (u, v) = 0 ∀v ∈ S. The proof relies, similarly to Theorem 3.1, on the choice of a special kind of test functions, i.e. Morawetz-multipliers. Note that for any a, b, c, d ∈ R we have (ax + b)u x ∈ S and (cy + d)u y ∈ S. Therefore, with ρ := (ax + b, cy + d) T the function v = ρ · ∇u ∈ S is a valid test function. Choosing v = ρ · ∇u in (10), passing to the real part, integrating by parts together with the facts that 2 Re(ww x ) = ∂ x (|w| 2 ) and 2 Re(ww y ) = ∂ y (|w| 2 ) for sufficiently smooth functions w and employing the boundary conditions of u, we find 0 = Re (∇u, ∇(ρ · ∇u)) − k 2 (u, ρ · ∇u) = Re (∇u, ∇ρ ∇u) + (∇u, ∇ 2 u ρ) − k 2 (u, ρ · ∇u) The choice ρ = (1−x, 1−y) T , results in ∇ρ−1/2 ∇·ρ I being the zero matrix. Furthermore, ∇ · ρ = −2. We therefore have We find u ≡ 0 once we have shown ρ · n ≤ 0. Since ρ · n = 0 on the top-right part of ∂ K and ρ · n = −2 on the bottom-left part, we can conclude u ≡ 0. The natural next step is to consider an axial parallel quadrilateral domain Ω ⊂ R 2 and use a mesh, which consists of one corridor of elements, see Figure 4. Theorem 3.9 Let Ω ⊂ R 2 be an axial parallel quadrilateral. 
Consider the Galerkin discretization (2) of (1) with conforming finite elements with a mesh consisting of a corridor of axial parallel quadrilaterals as depicted in Figure 4 and polynomial degree p ≥ 1. Then, for any k ∈ • R the matrix A k in (3) is regular. Proof. Without loss of generality, we may assume that two of the sides of Ω are on the lines {y = 1} and {y = −1}. Choosing v = u and passing to the imaginary part again leads to u = 0 on Γ. We also find 0 = ( ∇u 2 2 , 1) − k 2 (|u| 2 , 1). Finally ρ · n = −2 on the line {y = −1} and vanishes on all other sides of the boundary. Therefore, the above further simplifies to Multiplying (11) by −1/2 and adding (12) leads to Consequently, u y ≡ 0. Combined with the fact that u vanishes on the boundary we find u ≡ 0, which concludes the proof. Theorem 3.10 Let Ω ⊂ R 2 be an axial parallel quadrilateral. Consider the Galerkin discretization (2) of (1) with conforming finite elements with a mesh consisting of two corridors of axial parallel quadrilaterals as depicted in Figure 5. Then, for any k ∈ • R the matrix A k in (3) is regular. Proof. Without loss of generality, we may assume that the dividing line between the two corridors to be located on the line {y = 0}. Let the two sides parallel to the x-axis lie on the lines {y = a} and {y = −b}, for some a, b > 0. Let U ⊂ Ω and D ⊂ Ω denote the upper and lower corridor respectively. Again the proof relies on an appropriate choice of test functions. These are the global function v = u, as well as v = −2yu y localized on U and D respectively, i.e., extended by zero. This localization is again a valid test function since v = −2yu y is piecewise polynomial and conforming since v = 0 on the line {y = 0}. Analogous integration by parts as in the proof of Theorem 3.10 we find the following three equations to hold: Adding the equations for U and D and subtracting the one on the whole domain Ω again gives u y ≡ 0, which concludes the proof, with the same arguments as in the proof of Theorem 3.9. The previous results relied on the use of appropriate global test functions. The remainder of this section is concerned with discretizations employing quadrilaterals on structured Cartesian meshes. To that end let K again denote the reference quadrilateral with vertices (±1, ±1) T . The setup is such that the bottom-left part of the boundary of K is part of the boundary Γ of the computational domain Ω, again itself an axial parallel quadrilateral. The upper right part of the boundary of K is therefore inside the domain Ω. Our argument will be a localized one, i.e., we consider only test functions v whose support is given by K. To that end let Q p BL ( K) denote that space of polynomials of total degree p, which are zero on the bottom-left part of the boundary of K. Analogously let Q p TR ( K) denote that space of polynomials of total degree p, which are zero on the top-right part of the boundary of K. These spaces are therefore given by To perform a localized argument, we only test with functions v ∈ Q p TR ( K). Note that the Galerkin solution u vanishes on the bottom-left part of the boundary of K and therefore u| K ∈ Q p BL ( K). These considerations now lead to the question if a solution u ∈ Q p BL ( K) of The above is therefore equivalent to whether a solution u r ∈ Q p−1 ( K) to can only be u r ≡ 0. The answer to this question is yes for p = 1, 2, 3. However, for p = 4 and some k > 0 such exists a non-trivial solution. 
The consequences of this are twofold: Firstly, on a structured Cartesian grid, the Galerkin discretizations for p = 1, 2, 3 are well posed for any k ∈ R\ {0}. Secondly, for p ≥ 4 a localized argument based on the appropriate choice of test functions as in the one-dimensional case, see Theorem 3.1, is not possible in two dimensions, for all wave numbers simultaneously. Proof. The proof is an algebraic one. We calculate the matrix corresponding to the system of linear equations (14) explicitly. To that end we choose the monomial basis of Q p−1 ( K), i.e., for p = 1 the basis is given by {1}, for p = 2 by {1, y, x, xy} and for p = 3 by {1, y, y 2 , x, xy, xy 2 , x 2 , x 2 y, x 2 y 2 }. For p = 1, i.e., a 1 × 1 system, we find for the determinant the polynomial which is strictly negative for any k ∈ R. For p = 2 we find which is again non-zero. For p = 3 we find the determinant is given, up to a positive factor, by − k 6 + 15k 4 + 270k 2 + 3150 4k 6 + 60k 4 + 495k 2 + 450 2 . For p = 4 we find the following polynomial to be a factor of the determinant (−3492720 − 161028k 2 + 41013k 4 + 10800k 6 + 810k 8 + 36k 10 + k 12 ) 2 , which has a positive real root. An immediate consequence of Lemma 3.11 is the following Theorem. Theorem 3.12 Let Ω ⊂ R 2 be an axial parallel quadrilateral. Consider the Galerkin discretization (2) of (1) with conforming finite elements with a tensor-product mesh and polynomial degree p = 1, 2, 3. Then, for any k ∈ Proof. Propagating through the mesh by applying Lemma 3.11 yields the result. A discrete unique continuation principle for the Helmholtz equation In this section, we will introduce the Algorithm MOTZ ("marching-of-the-zeros"), which mimics a discrete unique continuation principle for the Helmholtz equation 1 for d = 2 and a triangular mesh K T . We restrict ourselves to the case where p = 1 and use the notation as in Remark 2.2. The discrete problem then reads where S = S 1 T is the space of continuous and piecewise linear functions with respect to a conforming triangulation K T on Ω. Notation 4.1 In the following, we skip the polynomial degree p in the notation and write short b z for b z,p = b z,1 . ( ∈ N test and ∈ N dof ). Definition 4.2 Let N 1 ⊂ N be a subset of nodes. For a node z ∈ N 1 we define the transmission degree with respect to N by (see Figure 6). Let u ∈ S be a solution of (15). Then the following implication holds Notation 4.4 We generally denote by N test ⊂ N a subset of nodes z, where we already know that the solution u of (15) is zero, i.e., u z = 0 as in the assumption in (16), but where we have a basis function b z that can be used as test function. An example is N test = N ∩ Γ. Indeed, from (5), we know that u z = 0 for all z ∈ N ∩ Γ. Remark 4.5 If the weakly acute angle condition is violated for some edge E ∈ E Ω one can take the midpoint of E as a new mesh point and bisect the adjacent triangles. For general dimension d, an analogous angle criterion as (17) can be formulated (see [XZ99,Lem. 2 .1]). Proof of Lemma 4.3. Let b z be the Lagrange basis function for the node z and b z the one for z. Then (16) is equivalent to showing By Lemma 3.3 we have that (∇b z , ∇b z ) ≤ 0 if and only if α E ≤ π. Since b z and b z are positive in int (τ + ) ∪ int (τ − ), we conclude that (18) holds. A first checking algorithm and numerical experiments In this section we present a main result of the paper (cf. Theorem 4.8): If the algorithm MOTZ (Algorithm 1) returns certified then we conclude that the discrete problem is well posed. 
On the other hand, if the output is critical then this means that the discretization might lead to a singular matrix. In the latter case a slight modification of the mesh (using bisection or flip of an edge) may be applied to receive a regular system matrix. We introduce the following notation • N test : The set of z ∈ N where u(z) = 0. • N dof : The complement of N test . The idea of Algorithm 1 is the following: We initialize the algorithm with N test = N ∩ Γ and N dof = N \ N test . Recall that for z ∈ N test with deg (z , N test ) = 1 (cf. Definition 4.2) there exists one and only one z ∈ N dof such that the edge E = [z, z ] belongs to E Ω . Lemma 4.3, together with (5) then implies that u(z) = 0. We update the sets N test = N test ∪ {z} and N dof = N dof \ {z} accordingly and repeat the same procedure. After each step one has If N test = N the algorithm stops. We remark that we do not stop the algorithm if the weakly acute angle condition (17) is not satisfied. Instead, we assign to the corresponding edge the property acute(E) = false (line 7 of Alg. 1). If angle conditions are not satisfied everywhere, but the algorithm ends with N test = N , we bisect the relevant, adjacent triangles in a post-processing step as explained in Remark 4.5 using Algorithm 2 in the end. This is possible since two transmission edges never share an adjacent triangle. bisect E until weakly acute angle condition is satisfied 3: end for ( ∈ N test , ∈ N dof ). Lemma 4.6 After a finite number of bisections the weakly acute angle condition is satisfied. Proof. Let E = [z , z] be a transmission edge such that α E > π. After one bisection of E, a new vertexz on E is added (cf. Figure 9). In order to have the same outcome of Algorithm MOTZ trans we need to show that u(z ) = u(z) = 0. Indeed, this follows from applying Lemma 4.3 along the edge [z ,z] and then [z, z], provided the angle condition is met. If the angle condition does not hold, the argument can be repeated for every subsequent bisection. The algorithm stops after a finite number of bisections, according to Lemma 4.6. Theorem 4.8 Consider the Galerkin discretization of (1) by (15). Proof. From u ∈ S and (5) we know that u| Γ = 0. Choose z 1 , z 1 as defined in Algorithm 1 MOTZ and set E = [z 1 , z 1 ]. With Lemma 4.3, we conclude that u (z 1 ) = 0. By an inductive application of this argument to pairs (z j , z j ) we obtain that u is zero at all z ∈ N . This means that u vanishes in all mesh points N . If MOTZ trans = true, but MOTZ angle = false, running Algorithm 2 makes sure that for the new mesh all relevant weakly acute angle conditions are met. This procedure does not change the output of MOTZ trans as proved in Lemma 4.7. Therefore, if we run Algorithm 1 with the new mesh the output will be MOTZ result = certified and the first statement of the theorem applies. Remark 4.9 In general, a lower bound on the smallest angle of the mesh determines how many bisections are at most needed. In all examples that are presented in this publication, all weakly acute angle conditions were satisfied. In particular this means that the outcome MOTZ result solely depended on the outcome MOTZ trans, i.e. on the connectivity of the mesh. We also note that the algorithm could easily be modified in order to avoid edges which do not satisfy the weakly acute angle conditions (if possible). However, since acute angle conditions in our examples were always met, this was not implemented in our code. 
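The connectivity part of Algorithm 1 (MOTZ trans) can be summarised compactly. The following Python sketch is a simplified re-implementation based on the description above, not the authors' published code: it marches nodes of transmission degree one from the boundary into the interior and omits the weakly acute angle bookkeeping, which in Algorithm 1 is additionally recorded for each transmission edge (cf. line 7 of Alg. 1).

```python
def motz_trans(nodes, interior_edges, boundary_nodes):
    """Marching-of-the-zeros connectivity check (MOTZ trans).

    nodes:           iterable of node identifiers
    interior_edges:  set of frozenset({a, b}) for edges in the interior (E_Omega)
    boundary_nodes:  nodes on Gamma, where the discrete solution must vanish

    Returns (certified, remaining_dofs): certified is True if every node was
    reached by repeatedly crossing edges of transmission degree one.
    """
    n_test = set(boundary_nodes)        # nodes where u is known to vanish
    n_dof = set(nodes) - n_test         # remaining degrees of freedom

    progress = True
    while n_dof and progress:
        progress = False
        for z in list(n_test):
            # neighbours of z in N_dof connected to z by an interior edge
            nbrs = [w for w in n_dof if frozenset((z, w)) in interior_edges]
            if len(nbrs) == 1:          # transmission degree 1 -> u vanishes there too
                n_test.add(nbrs[0])
                n_dof.discard(nbrs[0])
                progress = True
    return (not n_dof), n_dof
```

A mesh is then "certified" by this simplified check if the first return value is True; a non-empty second return value plays the role of the residual set N_dof handed over to the mesh modification strategies of the next section.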
Figure 10a shows the finite element mesh of a non-convex geometry with re-entrant corners. The boundary with Robin boundary conditions is illustrated in blue. In order to determine if the Galerkin finite element method (15) for this mesh is well posed, we apply Algorithm 1 with N test being the nodes on the boundary (cf. Figure 10b). The red nodes belong to N dof . In the subsequent figures the evolution of the algorithm is shown. It successively tries to find nodes in N dof that have a neighbouring node in N test with transmission degree 1. Once such a node has been identified, it is removed from N dof and added to N test . In this example the procedure can be repeated until N dof is the empty set and N test = N , i.e., MOTZ will return MOTZ trans = true. Since also all angle conditions are satisfied the algorithm will return certified. Furthermore, due to the regularity of the mesh, nodes that satisfy condition (a) of Lemma 4.3 are easily found in each step, since they are typically located next to the node that has been removed from N dof in the previous step. Note that the order in which nodes are removed from N dof depends on the enumeration of the nodes in the mesh. The outcome MOTZ trans however, is independent of the node enumeration. Figure 11 shows the mesh of a geometry with one hole. As before, the boundary with Robin boundary conditions is illustrated in blue (note, that the hole has Robin boundary conditions as well) and we initialize the algorithm with N test := N ∩ Γ. Also in this example, MOTZ returns MOTZ result = certified which means that problem (15) is well posed. In the following we refer to the nodes in N dof (red nodes) that are connected by an edge to the boundary nodes of the inner circle as 'layer 1' nodes, 'layer 0' being the boundary nodes on the circle. Interestingly, none of the 'layer 1' nodes can be marked orange initially, since each boundary node on the circle is connected to at least two 'layer 1' nodes and therefore has a transmission degree larger than one. Thus, MOTZ has to start from the outer boundary and successively moves towards the interior boundary points. Only in step 62 of the algorithm (see Figure 11c) one entry point into 'layer 1' can be found. Note that none of the other nodes in 'layer 1' could have been marked orange at this point, since all of the orange 'layer 2' nodes, except the one, have transmission degree 2 (are connected to two red nodes). This suggests how a finite element mesh would need to look like in order for MOTZ to give the result MOTZ result = critical. If the mesh in Figure 11a was such that each 'layer 1' node is connected to exactly two nodes in 'layer 0' and 'layer 2', the algorithm would not find any entry point into 'layer 1' and would return MOTZ result = critical. However, in practice we could not produce such a mesh with standard mesh generation tools due to the non-optimal quality of the desired mesh. We applied the Algorithm MOTZ to various geometries and mesh configurations (sharp corners, complex geometries, strong local refinements). None of the tested meshes that were produced by a standard mesh generation algorithm (such as [SH21,Sch22]), actually led to the output MOTZ result = critical of the Algorithm MOTZ. A mesh modification algorithm and numerical experiments Even though the algorithm seems to return the output MOTZ result = certified for most shape regular meshes, one can construct examples where it returns a critical result. 
In this section, we are in particular interested in the case where this is due to the output MOTZ trans = false, i.e. where no more edges which satisfy condition (a) of Lemma 4.3 could be found. In these cases we need to modify the finite element mesh so that the corresponding Galerkin discretization has a unique solution. We propose the following three simple mesh modification strategies, that can lead to a passing of the checking algorithm: • Re-building of the whole mesh with slightly modified mesh parameters. • Local refinement of the mesh across the interface of N dof and N \N dof , where N dof is the result of MOTZ. • Application of Algorithm 3 (MOTZ flip), which flips certain edges at the interface of N dof and N \N dof . If MOTZ returns MOTZ trans = false (together with the non-empty set N dof ), the algorithm was not able to find any more nodes that satisfy condition (a) of Lemma 4.3. The idea behind all three strategies above is to alter existing or create new entry points for the algorithm into the remaining set N dof . Re-building of the whole mesh with slightly different parameters or a different meshing algorithm is a simple way of altering triangles and edges which in turn might break up the constellations that lead to the output MOTZ result = critical. If this is not possible or not successful, a more targeted local refinement around a node on the boundary of N dof might lead to new entry points and to a passing of the checking algorithm. A highly targeted approach to create new entry points with minimal modifications to the original mesh is described in Algorithm 3. The idea is to detect those nodes on the boundary of N dof that have exactly two connected nodes in N test , which in turn have exactly one common node in N test (see Figure 12). Flipping the interior edge in this scenario, i.e., replacing [z 1 , z 2 ] with [z,z], will then increase the transmission degree ofz by one. Since this will typically mean that deg (z, N test ) = 1, this mesh modification will create a new entry point for MOTZ into N dof . Often, the constellation described in Figure 12 can be found multiple times in a mesh. In Algorithm 3 we propose to compute a mesh quality score for each potential edge flip (e.g. based on minimal angles of the resulting triangles). MOTZ is then rerun for the modified mesh with the highest quality score. Algorithm 3 makes use of the following definitions: (i) The neighbouring nodes of a node z ∈ N are denoted by N neighbours (z) = {z ∈ N \{z} | [z, z ] ∈ E Ω }. Set P = {}, which will hold possible modified meshes, together with a quality score for each mesh. 3: for all z ∈ ∂N dof do 4: Let z have exactly two neighbours in N test , denoted by z 1 , z 2 ∈ N neighbours (z) ∩ N test 5: if N neighbours (z 1 ) ∩ N neighbours (z 2 ) ∩ N test = {z} then 6: Compute a quality score q for the mesh associated with E Ω We emphasize that the strategies described above are heuristic, e.g., only the case of two neighbours is considered in Algorithm 3: line 5. However, we expect that they are successful in the vast majority of cases where MOTZ returns MOTZ trans = false. Figure 13a shows an example of a mesh, where the checking algorithm returns MOTZ result = critical, together with a non-empty set N dof consisting of four points (see Figure 13b ). We apply Algorithm 3 MOTZ flip, which in this case will detect four edges that can potentially be flipped. As a simple quality score we measure the minimal angle for each triangle in the mesh. 
Due to the symmetry of the mesh, the quality score for each potential modification suggested by MOTZ flip coincides. Therefore, there is no preference concerning the choice of the edge that will be flipped in this case. Figure 13c shows the modified mesh. Indeed, this modification is sufficient in order for MOTZ to return MOTZ result = certified. Below we consider the impact of mesh modification via Algorithm 3 MOTZ flip by numerically calculating the reciprocal of the discrete inf-sup constant β k , given by $\beta_k := \inf_{u \in S \setminus \{0\}} \sup_{v \in S \setminus \{0\}} \frac{|a_k(u,v)|}{\|u\|_{1,k,\Omega}\,\|v\|_{1,k,\Omega}}$, for a modification of the mesh considered in Section 3.2 for α = 1/2. The k-weighted natural norm $\|\cdot\|_{1,k,\Omega}$ on H 1 (Ω) is given by $\|u\|_{1,k,\Omega} := \left( \|\nabla u\|_{L^2(\Omega)}^2 + k^2 \|u\|_{L^2(\Omega)}^2 \right)^{1/2}$ for u ∈ H 1 (Ω). The discrete inf-sup constant β k can be numerically calculated via a generalized eigenvalue problem. For the mesh considered in Section 3.2 with α = 1/2, we find that k = 6 results in a singular system matrix, see equation (7). We inscribe the mesh considered in Section 3.2 into another quadrilateral, see Figure 14, in order to apply Algorithm 3 MOTZ flip. For the left mesh in Figure 14, MOTZ returns MOTZ result = critical. The numerical results are visualized in Figure 15. We observe that for k = 6 the original mesh results in a singular system matrix, while the modified mesh from MOTZ flip results in a regular one. The algorithms MOTZ and MOTZ flip have been implemented in Python. The code is available via https://github.com/alexander-veit/MOTZ.
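The generalized eigenvalue computation of β_k mentioned above can be organised as follows. This is a sketch under the assumption that the Gram matrix of the k-weighted norm on the finite element space is N = K + k²M (with K and M the stiffness and mass matrices): β_k equals the smallest singular value of N^{-1/2} A_k N^{-1/2}, i.e. β_k² is the smallest eigenvalue of the generalized Hermitian eigenvalue problem A_kᴴ N⁻¹ A_k x = λ N x.

```python
import numpy as np
from scipy.linalg import eigh, solve

def inf_sup_constant(A, N):
    """Discrete inf-sup constant of the matrix A with respect to the
    Hermitian positive definite Gram matrix N of the chosen norm:
    beta = sigma_min(N^{-1/2} A N^{-1/2}), obtained from the generalized
    eigenvalue problem  A^H N^{-1} A x = lambda N x."""
    lhs = A.conj().T @ solve(N, A)        # A^H N^{-1} A
    lhs = 0.5 * (lhs + lhs.conj().T)      # symmetrise against round-off
    lam = eigh(lhs, N, eigvals_only=True) # generalized eigenvalues, ascending
    return float(np.sqrt(max(lam[0], 0.0)))

# Example usage with matrices assembled by any FEM code
# (e.g. the 1D sketch shown earlier), for a given wave number k:
# N = K + k**2 * M                  # Gram matrix of the k-weighted H^1 norm
# A = K - k**2 * M - 1j * k * B
# beta_k = inf_sup_constant(A, N)
```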
Interconnecting Systems Using Machine-Actionable Data Management Plans – Hackathon Report The common standard for machine-actionable Data Management Plans (DMPs) allows for automatic exchange, integration, and validation of information provided in DMPs. In this paper, we report on the hackathon organised by the Research Data Alliance in which a group of 89 participants from 21 countries worked collaboratively on use cases exploring the utility of the standard in different settings. The work included integration of tools and services, funder templates mapping, and development of new serialisations. This paper summarises the results achieved during the hackathon and provides pointers to further resources. INTRODUCTION The Data Management Plan (DMP) was introduced to document and publish both data management practices and policies that are applied to data throughout its lifecycle. This implies describing the techniques, methods and policies on how data is to be created, collected, documented, processed, accessed, preserved, disseminated as well as the roles and responsibilities of associated actors (Michener, 2015). The premise behind the concept of a machine-actionable DMP (maDMP) is that information contained within a DMP can be enacted both by humans and automated systems, thus addressing some of the limitations associated with traditional DMP documents. To that effect, data management workflows should integrate maDMPs and data management policies should take into account not only human agents but also machines. maDMPs should support both human and machine-processable representations so they act as an interchange format for dissemination and public access of the maDMP (Simms et al., 2017). In order to provide a machine-actionable representation of a maDMP, it becomes necessary to establish a standardised representation of the maDMP. The Research Data Alliance (RDA) 1 DMP Common Standards (DCS) working group (Miksa, Cardoso, and Borbinha, 2018;Miksa, Neish, et al., 2018;Miksa, Walk, and Neish, 2019) developed an application profile making it easier to express information from traditional DMP documents in a machine-actionable way. The DCS maDMP application profile allows for automatic exchange, integration, and validation of information provided in DMP documents. Thus, facilitating the exchange of information between systems acting on behalf of stakeholders involved in the research life cycle, such as researchers, funding bodies, repository managers, ICT providers, librarians, etc. This paper reports on a hackathon organised by the DCS working group, which had as main motivation to promote the adoption of the maDMP concept by the research community, and, in particular, the usage of the DCS application profile for interchange of maDMPs. To that effect four main areas were identified: (1) serialisation, to encourage community development of serialisations of the DCS application profile; (2) integration of DMP tools, to promote the compliance or usage of the DCS application profile in the existing DMP tools used by the community; (3) further integration, aimed at the integration of the DCS application profile with existing data management tools and workflows; and (4) funder templates mapping, where existing DMP modules and representations were to be aligned with the concepts in the DCS application profile. 
By focusing on these four areas, the hackathon aimed at achieving three primary objectives: (1) to broaden the community focused on maDMPs, (2) to expand the support for maDMPs, and (3) to demonstrate the growing endorsement and adoption of the DCS application profile in a wide range of settings, enabling the exchange of DMP-specific information in a machine-actionable way. In order to achieve these objectives, participants were encouraged to both submit topics and form hacking teams. After the hackathon activities were finished, participants were asked to report their results by compiling individual topic reports. The main aim of this paper is to familiarise readers with the recent developments in the field of maDMPs by summarising and providing pointers to results, projects, and prototypes developed during the hackathon. The paper also provides context on how the results from the hackathon were produced, to allow readers to better interpret the achieved results in view of constraints such as the distributed nature of the collaboration, the fact that many participants were not originally involved in the development of the recommendation, and the limited time available to produce results. The remainder of the paper is organised as follows. Section 2 describes the maDMP hackathon by providing a characterisation of both the organisation of the hackathon and its participants. Section 3 details the submitted topics and provides a short summary of objectives and results per topic. Section 4 reports on the experiences from organising and participating in a virtual online hackathon and on the importance of maDMPs for Open Science. Finally, Section 5 describes the conclusions drawn from the hackathon as well as points that can be tackled in the future. THE MADMP HACKATHON, A COMMUNITY PRACTICE With the idea of broadening the maDMP community, improving the current DCS application profile and promoting its adoption, a hackathon was organised from the 27th to the 29th of May 2020. A hackathon is a programming-oriented event where participants gather to work collaboratively, at an intensive pace, towards possible solutions to some particular challenges (Briscoe, 2014; Garcia et al., 2020; Stoltzfus et al., 2017). Such solutions are created by interdisciplinary teams including, for instance, domain experts, designers and developers. In addition to open source projects, either at a prototype or production level, hackathons also boost medium- to long-term collaborations, with hacking teams typically continuing to work on the proposed solutions, as well as follow-up solutions, past the original timeline of the event (Briscoe, 2014; Garcia et al., 2020). Following RDA common practices, the maDMP Hackathon was announced via mailing lists and advertised via Twitter. The registration was open to everybody, from newcomers to experts, with participants being given a period of five weeks to both register and submit topics to be addressed during the hackathon. HACKATHON ORGANISATION In order to comply with open access principles, information regarding topics, teams and activities was made publicly available on GitHub. 2 The hackathon followed a self-organising spirit, with participants being free to propose topics and establish teams, which all participants were free to join. The teams had time to propose topics in a collaborative spreadsheet before the start of the event.
The hackathon was opened by the kick-off meeting, in which participants discussed the previously collected topics and had a chance to better understand the scope of work proposed for each of them. Based on that, they were able to join selected groups. During the hacking days, participants used Slack, a communication platform for teams, so people within and across teams could easily discuss their ideas. Every team usually held several video meetings, depending on their preferred working mode. Every day there was an open video meeting for all teams in which every team reported on their progress, and thus everyone had a chance to ask questions and provide feedback. The hackathon was concluded by the wrap-up meeting to present and discuss results produced by each group. The results of each team are published individually on Zenodo and are referenced in this paper. The video with a recording of the "Grand Finale", in which every group discusses the outcomes, is also available on GitHub. PARTICIPANT CHARACTERISATION In total there were 89 participants who registered to attend the hackathon. Participants were associated with institutions from 21 distinct countries, see Figure 1. In terms of gender, see Figure 2, there were 53.9% male and 46.1% female participants (corresponding to 48 men and 41 women), a 7.8% difference between genders. Concerning participant engagement in teams, as can be seen in Figure 5, 60.7% of registered participants opted to join a team, with only 39.3% of registered participants opting not to join (corresponding to 54 joiners and 35 non-joiners). Team members represented institutions from 14 different countries, see Figure 3. In terms of gender, see Figure 4, there were 56.4% male and 43.6% female participants (corresponding to 31 men and 24 women), a 12.8% difference between genders. In regard to community engagement, 66.3% of registered participants (59 participants) were not members of the DCS working group, whilst 33.7% (30 participants) were registered members of the DCS working group, as seen in Figure 6. HACKATHON TOPICS There were a total of 12 topics addressed in this hackathon. Topics were distributed according to the four main areas of application that were identified based on the overall motivation for the hackathon (see Table 1). The areas of application were: (1) serialisation; (2) integration of DMP tools; (3) further integration; and (4) funder templates mapping. SERIALISATION By definition, the DCS application profile was always intended to have multiple serialisations. In this hackathon, participants were encouraged to submit topics that would tackle the creation of new serialisations or the upgrade of existing ones. There was only one topic submitted that aligned with the serialisation area of focus. The objective of the Unaturals team was the creation of a new version of the DMP Common Standard Ontology (DCSO) (Cardoso, Ekaputra, et al., 2020). The DCSO is an existing serialisation of the DCS application profile that is expressed in RDF/XML. During the hackathon the team achieved the following results: (1) reorganisation of the existing DCSO by separating the core elements from complementary concepts such as countries or languages; (2) integration of concepts from third-party ontologies into the DCSO; and (3) establishment of a set of basic Shape Expressions (ShEx) 3 constraints for the DCSO, which can potentially be used to check the conformance of DCSO-represented DMP documents with the DCS application profile specification.
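Complementary to the ontology-based conformance work above, the JSON serialisation of the DCS application profile is the one most tools exchange in practice. The following minimal Python sketch constructs a deliberately small maDMP document and performs a naive structural check; the field names follow the published RDA DMP Common Standard, but the values are purely illustrative, and the exact set of mandatory fields and controlled vocabularies should be taken from the official JSON schema (or a ShEx validator such as the one mentioned above) rather than from this example.

```python
import json

# A minimal machine-actionable DMP skeleton; field names follow the RDA
# DMP Common Standard, values are purely illustrative.
madmp = {
    "dmp": {
        "title": "Example project DMP",
        "created": "2020-05-27T09:00:00Z",
        "modified": "2020-05-29T17:00:00Z",
        "language": "eng",
        "dmp_id": {"identifier": "https://doi.org/10.1234/example-dmp", "type": "doi"},
        "contact": {
            "name": "Jane Researcher",
            "mbox": "jane@example.org",
            "contact_id": {"identifier": "https://orcid.org/0000-0000-0000-0000", "type": "orcid"},
        },
        "dataset": [
            {
                "title": "Survey responses",
                "personal_data": "no",
                "sensitive_data": "no",
                "dataset_id": {"identifier": "https://doi.org/10.1234/example-data", "type": "doi"},
            }
        ],
    }
}

def naive_conformance_check(doc):
    """Very rough sanity check of a maDMP dict (not a substitute for schema validation)."""
    dmp = doc.get("dmp", {})
    required = ["title", "created", "modified", "dmp_id", "contact", "dataset"]
    return [key for key in required if key not in dmp]

print(json.dumps(madmp, indent=2))
print("missing top-level fields:", naive_conformance_check(madmp))
```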
INTEGRATION OF DMP TOOLS The integration of DMP tools area is motivated by the need to promote compliance with, and usage of, the DCS application profile throughout the DMP tools in use by the research community. Five teams submitted topics that matched this focus area, and overall, these topics tackled six DMP tools. The Data Stewardship Wizard (DSW) 4 is a DMP tool that guides researchers in creating their DMP documents. The objective of the DSW team was to equip the DSW with the ability to use the DCS application profile as an interchange format. This implied having the ability to both import and export maDMPs that were compliant with the DCS application profile. The objective was achieved, with the latest version of the DSW now being able to import JSON maDMPs and to export maDMPs in JSON, in DCSO and in a human-readable version (Suchánek et al., 2020). EasyDMP 5 is a DMP creation tool supporting simple and nested question types, organised in a linear structure. On the other hand, the DCS application profile has simple and nested data types, organised in a tree structure. The objective of the Datatypists team was to represent any missing DCS application profile concepts as question types in EasyDMP, and to consider how to encode the DCS application profile's nested tree structures, such as the dataset type. The result of this topic was a revised design of the EasyDMP approach to represent the concepts, further detailed in the topic results report (Moa, Hasan, and Philipson, 2020). The DMP Exchange team included developers from the following DMP tools: DSW, EasyDMP, DMPOnline, DMPtool and Haplo. 6,7,8 The objective of the team was to determine if the DCS application profile serialised in JSON could be used as an interchange format in various DMP tools; to do so, mappings across their internal DMP models were established, and the tools were equipped with the ability to both import and export maDMPs. Team members were able to update or create mechanisms that allowed maDMPs to be exported by the participating DMP tools. However, the objective of allowing for the import of maDMPs by the DMP tools was not totally achieved, as that requirement was only fulfilled by one of the tools (i.e. the Data Stewardship Wizard). An extended description of the results of this topic is presented in its results report (Faure et al., 2020). The Research Data Management Organiser (RDMO) 9 is an open source tool that helps institutions and researchers with planning and executing data management activities. It uses an internal vocabulary called "domain" to map and abstract the user's input into questionnaires together with any other relevant information. The objective of the RDMO team was to map RDMO's domain to the DCS application profile and then build an export functionality. The team was able to map most of RDMO's domain to the DCS application profile, and to establish a prototype of the export functionality (Klar et al., 2020). The OpenDMP software 10 is a data management planning tool that was created by a consortium comprising OpenAIRE 11 and EUDAT CDI. 12 The OpenDMP software has been implemented by two DMP tools, Argos 13 and EasyDMP. The team focused on Argos, and had two objectives: (1) to implement an import and export function that would allow the usage of DCS application profile compliant maDMPs as an interchange format; and (2) to establish mappings between the OpenAIRE Research Graph model (Manghi et al., 2019) and the DCS application profile.
A description of the results of this topic can be found in its results report (Tziotzios et al., 2020). FURTHER INTEGRATION The further integration area was intended to cover topics whose objective was to integrate the DCS application profile into existing data management frameworks and workflows. Four teams submitted topics that matched this focus area. The objective of the maDMP Link team was to enable both DMP Roadmap 14 and Figshare 15 to use the DCS application profile as an interchange format for maDMPs. The team was able to develop a prototype for Figshare that allowed both importing and exporting maDMPs serialised in JSON (Zimmer et al., 2020). The InsTmaDMP team aimed at analysing research data management (RDM) workflows from four distinct research institutions. They opted to focus their analysis on the DMP creation processes of these workflows. The objective was to analyse the resulting DMP documents and identify any changes that would be necessary (i.e. addition, removal or editing of DMP concepts) to create DCS application profile compatible maDMPs. The result was a list of recommendations for changes in the analysed RDM workflows. A full, detailed description of the results of this topic can be found in its results report (Karimova et al., 2020). The Something team set out to analyse the data management workflows of the Climate Community in the EOSC-Nordic project, 16 and to establish mappings between the DCS application profile and existing data management concepts. The objective was to be able to represent the data management workflows with DCS application profile compliant maDMPs. The team was unable to completely map the analysed workflows to the DCS application profile, and as such they considered creating an extension to the DCS application profile. The full results are available in the team's results report (Hasan, Fouilloux, and Jacquemot, 2020). The objective of the DMP InvenioRDM team was to establish mappings between the InvenioRDM 17 data model and the DCS application profile. The team was able to map the DCS application profile concepts to the InvenioRDM data model, and developed a prototype that allowed maDMP serialisations to be imported into InvenioRDM. The team results can be found in the results report (Wali et al., 2020). FUNDER TEMPLATES MAPPING The funder templates mapping area covered topics that aimed at establishing mappings between the DCS application profile and DMP representation models from institutions, funding bodies or other stakeholders in the community. There were a total of two teams that submitted topics matching this focus area. The objective of the Tigtag team was to establish mappings between the DCS application profile and several of the DMP templates most commonly used by funding bodies. As DMP templates are typically a list of questions, they proceeded to analyse each individual question and map it to one or multiple matching fields in the DCS application profile. The result of this process was the proposal of a funder extension to the DCS application profile that added a set of concepts that are required to completely map all of the analysed DMP templates but are not present in the DCS application profile. The results are further detailed in the results report (Cardoso, Jones, et al., 2020). The Fancycatmeme team had the objective of automating quality-control metrics in the Linked Data Pipeline 18 for the Research Data Connectome.
19 To accomplish this objective, the team first analysed a series of existing DMP documents and proceeded to create DCS application profile compliant maDMPs. Secondly, they provided a series of recommendations on how to approach the path to maDMPs for research institutions, funding bodies and other stakeholders in the ecosystem. The team results can be found in the results report (Rettig et al., 2020). DISCUSSION: BEYOND MADMP Our discussion revolves around two main topics: experiences from organising and participating in a virtual online hackathon, and the importance of maDMPs for Open Science. A VIRTUAL HACKATHON, LESSONS LEARNT Hackathons, also known as CodeFests or Programming Sprints, are becoming more and more common within the scientific community, varying from a couple of days to a whole week. Depending on the length and number of people, logistics might differ. However, there are some common elements such as: defining goals, ensuring balanced and representative participation, using effective communication channels, promoting a respectful environment, encouraging team building and self-organisation, and including retrospective sessions. Given the high degree of interaction, hackathons are commonly organised in a face-to-face fashion. However, due to the COVID-19 pandemic in 2020, the maDMP hackathon, like most scientific events running at that time, was organised as a virtual online meeting. One of the main challenges was keeping a high level of participant interaction. Organisers led the common activities, namely the pitching and wrap-up meetings, and made sure these were announced in advance by sending constant reminders and posting announcements in Slack. Repeating information multiple times to ensure that it reaches the target audience is a common practice, particularly in virtual events. Emoticons, animated GIFs and other eye-catching elements play an important role in virtual communication, as they convey emotions beyond the text. Organisers relied on team leaders for internal communication with participants joining their effort. Online meeting rooms and Slack were continuously used. As this was a two-day hackathon, keeping focus was not a major issue: participants were committed to (mostly) clearing their agendas for two days so they could actively participate in their selected projects. Finally, a post-hackathon survey was shared with participants. In total, 13 participants provided answers to the survey, giving overall positive feedback. It is relevant to highlight the participant responses on four of the survey questions: (1) On the overall assessment of the event, the majority (69.23%, 9 participants) of survey responses rated the event as being either "very good"; (2) On its organisation, the majority (61.54%, 8 participants) of survey responses rated the event's organisation as "very organised"; (3) On its length, the majority (76.92%, 10 participants) of survey responses rated the event's duration as "about right"; and (4) on the likelihood of participating in similar future events, the majority (61.54%, 7 participants) of survey responses described the likelihood of participating in a similar event as "extremely likely".
Effective open data access requires researchers producing data to accompany their data with a DMP, as it covers the whole data cycle, from collection to archiving, and includes guidance on how to use the data in the future. As with any other research object, DMPs should include enough metadata for them to support the FAIR principles (Wilkinson et al., 2016). Given the number of steps covered by DMPs and the metadata involved (e.g. techniques, methods, policies), maDMPs emerge as a natural evolution of DMPs. maDMPs make it easier to continuously and systematically monitor a DMP from start to end. Furthermore, the DCS application profile provides information for researchers to include the necessary metadata to make maDMPs FAIR. Whenever a funding agency demands an update on a project it is funding, maDMPs enable researchers to quickly produce an up-to-date view, with less manual effort than would be needed with traditional DMPs. A limitation here is the variety of templates used by funding agencies, a topic that was tackled by two of the hackathon participant teams. These efforts can be extended to other funding agencies across different research domains, making maDMPs an ideal companion offering better support for the FAIR principles and Open Science.

CONCLUSIONS AND FUTURE WORK

The DCS working group has entered maintenance mode. It periodically reviews the recommendation and, only if needed, makes new releases based on the community feedback collected. The group, being an active community of maDMP users, helps in promoting the adoption of its recommendations; in this particular case, the adoption of the DCS application profile and its serialisations as an interchange format for maDMPs. The maDMP Hackathon was therefore an important event to carry out this overarching goal. The hackathon was expected to achieve three primary objectives:

1. To grow the maDMP community. The maDMP hackathon had 89 registered participants, of which 59 were not previously associated with the DCS working group (see Section 2.2). Considering this, it is possible to state that the maDMP Hackathon succeeded in helping the maDMP community to grow, as it is expected that participants will continue to engage with the community in the future.

2. To increase the support for maDMPs. The topics addressed in Sections 3.2 and 3.3 showcase the work developed by participants in pursuit of this objective. Tools and solutions still need time to mature. However, in the context of this hackathon, the prototypes and mappings that were created are a step towards achieving this objective. Furthermore, in the case of the Argos and EOSC-Nordic teams, the work they started during the hackathon continued afterwards and resulted in the adoption of maDMPs described in adoption stories 20,21 submitted to the RDA. The same applies to the ontological representation of the DCS, which was published in the proceedings of a peer-reviewed workshop (Cardoso, Garcia Castro, et al., 2020).

3. To provide exposure to the adoption of the DCS application profile as a means to exchange DMP information in a machine-actionable way, in multiple contexts. This objective posed a challenge that was larger than what was attainable in the scope of this hackathon. This paper, and other papers reporting on the topics addressed in this hackathon, can be a contribution to the attainment of this objective.
However, due to its nature, the impact of the efforts towards the completion of this objective can only be measured in the long term.

Overall, the hackathon can be considered a successful event. Participants provided results pertaining to all of the identified objectives. It is important to point out that the objectives can all be considered open problems, where there is always the potential to strive for better results. As such, the RDA DCS working group will continue to support its adoption by the community. Participants in the post-hackathon feedback have already expressed their willingness to participate in future hackathons or similar events that would promote the integration of the DCS application profile with other DMP tools or systems.
Anti-aging effects of a functional food via the action of gut microbiota and metabolites in aging mice

Wushen (WS) is a mixed food containing 55 natural products that is beneficial to human health. This study aimed to reveal the preventive effect of WS on aging via a combined analysis of the gut microbiome and metabolome. Senescence-accelerated mouse prone 8 (SAMP8) mice were used as the aging model and senescence-accelerated mouse resistant 1 (SAMR1) mice as the control. The mice were fed four diet types: control diet (for SAMR1 mice), standard diet (for SAMP8 mice, as the SD group), WS diet, and fecal microbiota transplantation (FMT; transplanted from aging-WS mice). Our results showed that the weight, food intake, neurological function, and general physical condition significantly improved in WS-fed mice compared to those fed the SD. The CA1 hippocampal region in WS-fed aged mice showed fewer shriveled neurons and increased neuronal layers compared to that of the SD group. WS-fed mice showed a decrease in malondialdehyde and an increase in superoxide dismutase levels in the brain; additionally, IL-6 and TNF-α levels significantly decreased, whereas IL-2 levels and the proportions of lymphocytes, CD3+CD8+ T, and CD4+IFNγ+ T cells increased in WS-fed mice. After being fed WS, the abundance of Ruminococcus and Butyrivibrio markedly increased, whereas Lachnoclostridium and Ruminiclostridium significantly decreased in the aging mice. In addition, 887 differentially expressed metabolites were identified in fecal samples; among these, Butyrivibrio was positively correlated with D-glucuronic acid and Ruminococcus was positively associated with 5-acetamidovalerate. These findings provide mechanistic insight into the impact of WS on aging, and WS may be a valuable diet for preventing aging.

INTRODUCTION

Aging is a progressive, complex process that comprises a plethora of mechanisms such as senescence, immunosenescence, and inflammation, representing important pathways of age-related diseases [1]. Estimates suggest that ~20% of the world population may be aged 65 or older by 2030 and have an increased prevalence of cardiovascular disease [2]. Diet is an important factor in aging, and individuals with proper diet and nutrition management, including the consumption of antioxidant supplements, specific foods, and vitamins, may show a reduction in the rate of age-related diseases and have a prolonged life span [3]. Natural products or nutraceuticals have been shown to elicit anti-aging, anti-cancer, and other health-enhancing effects. Wushen (WS) is a food mixture composed of 55 different food ingredients which are rich in antioxidant, anti-inflammatory, and immune-regulating substances. WS is composed of natural products without any additives, and it is made from powders of fruits, vegetables, grains, aquatic products, cooked meats, eggs, and milk. Thus, WS can provide the human body with protein, fat, and carbohydrates, and can supplement various minerals that the human body lacks [4]. Many of the ingredients in WS, such as Vitis vinifera (grape) and Fagopyrum tataricum, are reported to have anti-aging effects [5,6]. Wang et al. revealed that WS had an antitumor effect on S180 tumor-bearing mice and that its mechanism was partly related to its antioxidant activity, suggesting that WS could be eaten directly and might be beneficial to human health [4]. However, the underlying mechanism of the anti-aging effect of WS has not been elucidated.
WS contains many antioxidant and immune-related bioactive constituents such as crude polysaccharide, procyanidins, anthocyanin, resveratrol, squalene, vitamin C, selenium, and taurine. Antioxidants have the ability to delay aging and prevent age-related diseases through relieving oxidation [7]. For example, the crude polysaccharides extracted from Calocybe indica may reduce the occurrence of age-related diseases via their antioxidant activity [8]. In addition, resveratrol belongs to the family of natural phytoalexins and has been claimed as a master anti-aging agent against several age-associated diseases.

A previous study reported that the gut microbiota is essential for regulating aging-related immunological, metabolic, and pathological pathways [9]. Thus, maintenance of a healthy or young gut microbiota architecture may delay the aging process [10]. Procyanidin B2 is a component of natural plants or food and has the potential to prevent cognitive and oxidative impairment in D-gal-induced aging in rats by regulating metabolic pathways and remodeling the gut flora [11]. Meanwhile, anthocyanin extracted from Vaccinium myrtillus L. (bilberry) could regulate the intestinal function of aging rats. After consumption of bilberry anthocyanin, bacteria beneficial to the intestine (Lactobacillus and Bacteroides) were induced to grow, and harmful bacteria (Verrucomicrobia and Euryarchaeota) were inhibited [12]. All of these studies show that the active compounds of WS may be major contributors to changes in the composition of the gut microbiota, whereas none of them clarified the microbiota related to the preventive effect of WS on aging. More recently, fecal microbiota transplantation (FMT) has emerged as an effective therapy for preventing and treating age-related diseases, and FMT from wild-type mice contributes to a prolonged lifespan in progeroid mice [13]. Additionally, metabolic processes in the host are regulated by the gut microbiota [14]. For example, Lactobacillus acidophilus DDS-1 (a probiotic strain) can improve the metabolism of pathways associated with amino acids, proteins, and carbohydrates in aging mice [15]. In addition, Luo et al. [16] indicated that alterations of gut microbiota and metabolomic profiles were observed in aging mice, and that FuFang Zhenshu TiaoZhi might exert anti-aging effects by interfering with arachidonic acid metabolism, sphingolipid metabolism, glycerolipid metabolism, and the intestinal microbes of mice. Based on these studies, we intended to evaluate the preventive effect of WS on aging via an integrated analysis of the changes in metabolites and gut microbiota.

Senescence-accelerated mouse prone 8 (SAMP8) mice show significant age-related deteriorations in memory and learning ability, consistent with the early onset and rapid advancement of senescence [17]. Normally, SAMP8 mice live 10 to 12 months on average and, at this time, they start to present a decline in learning and memory formation, as well as increased emotional disturbances (such as anxiety and depression), abnormal circadian rhythms, and brain atrophy [18]. Thus, SAMP8 mice have been used as a model for the study of brain aging and age-related neurodegenerative conditions. In this study, we aimed to investigate the effects of WS on the metabolites and gut microbial composition of aging mice (SAMP8). We examined the neurological functions, inflammatory cytokines, antioxidant indicators, and immune functions of SAMP8 mice, and further explored the changes in metabolites and gut microbiota.
WS improves the weight, food intake, and general physical condition of aging mice

The hair color and body weight of the mice were recorded to note the basal conditions during the dietary intervention process. As shown in Figure 1A, the hair color of the mice in the control group was snow-white, whereas the hair color of the aging mice in the SD, WS, and FMT groups was deep yellow. The mice in the SD group were thinner than the mice in the WS group. During the experimental intervention, the body weight and food intake of mice in the control group were found to be significantly higher than those of mice in the other three groups, and the administration of WS and FMT resulted in a slight recovery in body weight (Figure 1B, 1C). Further, the influence of WS on the physical fitness of the mice was examined. We found that the level of serum ALB was markedly decreased in the experimental groups compared to the control group (P < 0.001), and that ALB levels were restored in mice from the WS group (Figure 1D). The measurements of the hind limb (Figure 1E) showed a distinct decrease in muscle circumference in the SD group relative to the control group, whereas the administration of WS resulted in a significantly greater muscle circumference than in the SD group (P < 0.01). Compared to the control group, the lean body mass and fat content significantly decreased in the other three groups, while treatment with WS and FMT caused an increase in the lean body mass and fat percentage (Figure 1F, 1G). Magnetic resonance imaging was used to identify the fat distribution and content in mice from the SD and WS groups, and the subcutaneous fat content was lower in the SD group compared to the WS group (Figure 1H). These results suggested that WS might effectively alleviate the aging-induced reduction in food intake, body weight, serum ALB level, hind limb muscle circumference, lean body mass, and fat content.

WS alleviates cognitive impairments and improves neurological function in SAMP8 mice

The learning and memorization abilities of the mice were analyzed using the shuttle-box test. The results showed that the rate of successful avoidance responses of mice in the control and WS groups was significantly higher than in the other experimental groups from the 4th day of training (Figure 2A, P < 0.05). The retention of the passive avoidance response of mice in the SD group was significantly higher than that in the WS and FMT groups from the 3rd day of training (Figure 2B, P < 0.05). In addition, mice that were administered FMT and WS showed no significant difference compared with the control group. These results indicated that WS might ameliorate the learning and memorization abilities of SAMP8 mice. The hematoxylin-eosin (HE) staining of the hippocampus (Figure 2C) showed that the CA1 region of the hippocampus in mice from the control group was abundant in neurons, which showed a normal morphological structure, clear cytoplasmic boundaries, and an evident nucleolus. However, the cells of mice in the SD group were characterized by multiple shriveled neurons, reduced numbers of neurons and cell layers, nuclear hyperchromatism, an unclear nucleolus, and scant cytoplasm. The cells from the CA1 region in the WS group showed deeper nuclear staining and no detectable shriveled neurons. The number of neuronal layers in the WS group significantly increased relative to that in the SD group.
Moreover, several shriveled neurons, deeper nuclear staining, unclear cytoplasmic and nuclear boundaries, and relatively tightly packed neurons were observed in the cells of the CA2 region from the WS group. Notably, the characteristics of cells in the FMT group were similar to those in the WS group, except for the presence of a greater number of shriveled neurons. These results indicated that WS might improve the aging-induced morphological changes in the hippocampal cells in the cerebrum of mice.

(Figure 2 legend, fragment: *P < 0.05, **P < 0.01, ***P < 0.001 relative to the control group, and #P < 0.05, ##P < 0.01, ###P < 0.001 relative to the SD group. (C) Images of the HE-stained hippocampus in the mouse cerebrum at 100× and 400× magnification. (D) Immunohistochemical images of Aβ expression in tissues from the mouse cerebrum at 20× and 400× magnification. (E) Immunohistochemical images of GFAP expression in tissues from the mouse cerebrum at 50× and 400× magnification.)

Immunohistochemical (IHC) assays of the hippocampus showed that the expression of glial fibrillary acidic protein (GFAP, Figure 2E) and amyloid β-protein (Aβ, Figure 2D) was considerably higher in mice from the SD group than in those from the control group, while this effect was ameliorated in mice from the WS and FMT groups. These results demonstrated the effects of WS in improving declining learning memory and neurological impairment in aging mice.

Effects of WS on inflammatory cytokines, oxidation, and antioxidation indicators

The levels of the inflammatory cytokines IL-2, IL-4, IL-6, IL-10, TNF-α, and IFN-γ in the serum of mice are shown in Table 1. The results indicated that the level of IL-2 in the control group was significantly higher than that in the SD, WS, and FMT groups (P < 0.001). In contrast, the levels of IL-6, IL-10, and TNF-α in the control group were significantly decreased relative to those in the SD, WS, and FMT groups (P < 0.001). Compared with the SD group, the IL-2 level significantly increased in the WS group (P < 0.001), whereas the IL-6 and TNF-α levels were obviously reduced in the WS group (P < 0.05). However, the levels of IL-2, IL-6, and TNF-α did not show significant differences between the WS and FMT groups. Further, no significant changes in the levels of IL-4 and IFN-γ were found among the four groups. These findings showed that WS might exert a partial anti-inflammatory effect by regulating several inflammation-related cytokines. Additionally, indicators of oxidation and antioxidation were also measured in the cerebrum of the mice. The results demonstrated that the malondialdehyde (MDA) content in the cerebrum was significantly lower, and superoxide dismutase (SOD) and glutathione peroxidase (GSH-Px) activities were significantly higher, in the control group than in the SD, WS, and FMT groups (Table 2, P < 0.001). Interestingly, the content of MDA in the cerebrum was significantly reduced in the WS group relative to the SD group (P < 0.001). Moreover, a significant activation of SOD activity in the cerebrum was detected in the WS group compared to the SD group. Notably, no significant differences in SOD and MDA activities were observed between the WS and FMT groups. Additionally, no significant change in GSH-Px activity was found among the SD, WS, and FMT groups. Together, these results demonstrated that WS had an antioxidative effect on aging mice.

WS improves immune functions in SAMP8 mice

We investigated the immune cell subsets in the spleens of the mice in the four groups.
The results showed that the percentages of lymphocytes, natural killer (NK) cells, CD3+CD8+ T cells, and CD4+IFNγ+ T cells in the aging mice (SD, WS, and FMT groups) were significantly decreased relative to those of the control group. As expected, WS increased the numbers of these cells in the aging mice, and the percentages of these four cell types were not significantly different between the WS and FMT groups (Table 3). In addition, no significant changes in the numbers of B cells, T cells, CD3+CD4+ T cells, CD4+IL-4+ T cells, and CD4+IL-17+ T cells were detected among the four groups.

Influence of WS on bacterial diversity and composition in the gut

We profiled the gut microbiota in fecal samples from each mouse via 16S rRNA gene sequencing. The α-diversity (Chao1 index) analysis revealed that the number of bacterial species and their abundance in the gut microbiota of the SD group were significantly decreased relative to mice from the other three groups (Figure 3A, 3B). No significant differences were observed in the gut microbiota among the control, WS, and FMT groups, indicating the potential capability of WS to restore the imbalanced gut microbial distribution in the aging mice. Mice in the FMT group mimicked the status of the intestinal microbiota from the WS group to some extent. In addition, the species accumulation curves demonstrated that the number of microbial species detected leveled off, indicating that sufficient sampling had been achieved and our results were reliable (Figure 3C). Furthermore, the β-diversity of the bacterial community in each group of mice during the baseline, middle, and end-point periods was compared. The principal coordinates analysis (PCoA) plots showed a distinct gut microbial community structure in a comparison between the control and SD groups, accounting for 43.62% of the total variability in the data (PC1 = 22.14%, PC2 = 10.96%, and PC3 = 10.52%; Figure 3D). After the administration of WS and FMT, the bacterial community structure of the treated aging mice was found to become increasingly similar to that of the control group as the intervention time increased (Figure 3E, 3F). In addition, we found that the bacterial community structure of the FMT group was highly similar to that of the WS group; however, these results were not fully consistent, indicating that the status of the intestinal microbes in mice from the FMT group reproduced that of the WS group only to a certain extent.

(Significance notation: *P < 0.05, **P < 0.01, ***P < 0.001, significant difference compared with the control group; #P < 0.05, ##P < 0.01, ###P < 0.001, significant difference compared with the aging-SD group.)

Subsequently, the top 15 relatively abundant microbes in the SD and WS groups were identified (Figure 3G), and the top 20 and top 10 differential genera are presented in the heatmap (Figure 3H) and boxplot (Figure 3I), respectively. Results indicated that the abundance of bacteria from the family Prevotellaceae and the genera Ruminococcus and Butyrivibrio markedly increased in the WS group relative to the SD group, whereas the abundance of Lachnoclostridium, Ruminiclostridium, Eubacterium coprostanoligenes, Intestinimonas, Clostridium sp. ASF356, and microbes from the family Lachnospiraceae significantly decreased in the WS group compared to the SD group.
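The α-diversity comparison above relies on the Chao1 richness estimator. The real analyses in this study were run in QIIME, so the following Python sketch is only meant to make the classic Chao1 calculation explicit, using toy counts rather than study data.

```python
# Classic Chao1 richness estimator: observed OTUs plus a correction based on
# the numbers of singletons (F1) and doubletons (F2). Toy data for illustration.
def chao1(otu_counts):
    observed = sum(1 for c in otu_counts if c > 0)
    f1 = sum(1 for c in otu_counts if c == 1)   # singletons
    f2 = sum(1 for c in otu_counts if c == 2)   # doubletons
    if f2 == 0:
        # bias-corrected fallback avoids division by zero
        return observed + f1 * (f1 - 1) / 2.0
    return observed + f1 ** 2 / (2.0 * f2)

sample_counts = [120, 45, 3, 1, 1, 2, 0, 7, 1]  # one column of a toy OTU table
print(chao1(sample_counts))  # estimated richness for this sample
```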
Comparison of the number of differential metabolites at different experimental times

The differences in metabolite levels between the SAMP8 and SAMR1 mice at baseline, 8 weeks, and 15 weeks were analyzed. The PCA plot revealed that the distributions of metabolites in SAMP8 and SAMR1 mice were clearly separated at baseline, indicating that the SAMP8 samples had distinct metabolic characteristics relative to the SAMR1 controls. However, the distance between the two sample types gradually decreased during the experiment. In brief, the number of differential metabolites between the SAMP8 and SAMR1 mice significantly decreased, from 1131 at baseline to 917 at 8 weeks and 111 at 15 weeks (end-point) (Supplementary Figure 1A). Further, no separate distributions of metabolites were observed between the SD and WS groups at 8 weeks; however, distinct distributions were present at 15 weeks. Specifically, the number of differentially expressed metabolites between the SD and WS groups increased at 15 weeks (887) compared with 8 weeks (56) (Supplementary Figure 2B). Furthermore, for the samples from the WS and FMT groups, the PLS-DA plots showed a decreasing distance at 15 weeks compared to 8 weeks, and the number of differentially expressed metabolites between the WS and FMT groups was reduced at 15 weeks (632) compared with 8 weeks (375) (Supplementary Figure 2C). These results indicated that the FMT group simulated the alterations of metabolites in the WS group at 15 weeks. However, there was a certain difference in the number of metabolites between the WS and FMT groups, suggesting that the simple intestinal fecal transplantation in this study could not completely replicate the efficacy of oral WS.

Pathway analysis of differentially produced metabolites between the WS and SD groups

For the fecal samples, compared with the SD group, 887 differentially expressed metabolites (e.g., vitexin, melatonin, and histamine) were identified in the WS group. The volcano plot of the metabolites is shown in Figure 4A, and Figure 4B shows the heatmap of the top 50 differentially expressed metabolites. Based on the VIP values, the top metabolites, including 2E,13Z-octadecadienal, alpha-artemisic acid, and D-glucuronic acid, were significantly upregulated in the WS group, whereas 4-pyridoxic acid, 17-hydroxy-linolenic acid, 5-hydroxyindoleacetic acid, and (S)-(-)-perillyl alcohol were significantly downregulated. The log2 fold-change (FC) values indicated that tetrahydrofolyl-[Glu](n), diltiazem, candletoxin A, formyl-5-hydroxykynurenamine, ponasterone A, quillaic acid, and vitexin were the top significantly upregulated metabolites in the WS group, whereas propinol adenylate, synaptolepis factor K1, and dehydrosoyasaponin I were the top significantly downregulated metabolites in the WS group (Supplementary Table 3). The results of the pathway enrichment analysis indicated that the differentially expressed metabolites were significantly involved in choline metabolism in cancer, linoleic acid metabolism, and neuroactive ligand-receptor interaction pathways (Figure 4C, 4D; Table 4). These differential metabolites were significantly associated with protein digestion and absorption, aminoacyl-tRNA biosynthesis, and glycerophospholipid metabolism pathways (Supplementary Figure 2C, 2D).

Correlation analysis between microbial and metabolic profiles

Next, we explored the relationships between the differential microbes and altered metabolites based on Pearson's correlation coefficient. A total of 535 microbe–metabolite relationship pairs involving 21 microbes and 94 metabolites were obtained. The heatmap and network of the top 20 correlation pairs are shown in Figure 5A-5C.
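The microbe–metabolite screen described above (reported here with Pearson's coefficient and in the methods with Spearman's) can be sketched as a pairwise correlation over matched samples. The toy tables, genus and metabolite names used only as column labels, and thresholds below are illustrative, not study data.

```python
# Illustrative microbe-metabolite correlation screen on toy data: every genus is
# correlated with every metabolite across matched samples, and pairs are ranked
# by P-value as a stand-in for the reported 535 pairs.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
samples = [f"mouse_{i}" for i in range(8)]
genera = pd.DataFrame(rng.random((8, 3)), index=samples,
                      columns=["Ruminococcus", "Butyrivibrio", "Lachnoclostridium"])
metabolites = pd.DataFrame(rng.random((8, 2)), index=samples,
                           columns=["D-glucuronic acid", "5-acetamidovalerate"])

pairs = []
for g in genera.columns:
    for m in metabolites.columns:
        r, p = spearmanr(genera[g], metabolites[m])
        pairs.append({"genus": g, "metabolite": m, "r": r, "p": p})

result = pd.DataFrame(pairs).sort_values("p")
significant = result[result["p"] < 0.05]  # pairs that would enter the network
print(result.head(20))                    # top correlations, as plotted in Figure 5
```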
The results showed that Lactobacillus positively correlated with calenduloside B (r = 0.659).

DISCUSSION

Aging is a complex and multifactorial process that is accompanied by changes in the gut microbiota and metabolites. Diet represents the major extrinsic factor that influences the makeup and activity of resident intestinal microbes; thus, dietary intervention may be a beneficial method to improve intestinal flora disorders in the elderly [19]. WS, as a functional food containing various natural products, has been found to be beneficial to human health [4]. Our study revealed that WS was associated with a series of anti-aging events, including improved cognition and neurological function in aging mice, reduced levels of several inflammatory and oxidative indicators, and increased numbers of some immune cells. Moreover, the gut microbial and metabolic profiling analyses showed that the presence of several microbes and metabolites was altered in aging mice, and that these changes could be restored using WS.

The decline in learning and memory ability is a typical feature of aging [20]. SAMP8 mice are a model of spontaneously accelerated aging, exhibiting neuropathological abnormalities and cognitive and behavioral alterations, such as deficits in learning and memory, oxidative stress, pathological changes in the cerebral cortex and hippocampus of the central nervous system, and alterations of immune function [21]. Thus, we explored the effects of WS on learning, neuropathological alterations, and oxidative factors in aging mice. As observed in the shuttle-box test, the retention of the passive avoidance response of the SAMP8 mice was significantly prolonged and the memory of the animals also decreased. After being fed WS, learning and memory abilities improved, which indicated that WS effectively improved the age-relevant behavioral phenotype. Further, WS and FMT significantly alleviated the neuropathological alterations in SAMP8 mice, such as neuronal loss and atrophy in the CA1 region of the hippocampus. We also found that WS and FMT greatly reduced the expression levels of Aβ and GFAP in the hippocampus of aging mice. Aβ formation is thought to be one of the causes of neuronal and synaptic degeneration underlying cognitive decline in Alzheimer's disease (AD). A previous study found that SS31 (a small-molecule antioxidant peptide) could slow down cognitive decline in SAMP8 mice via lowering of central Aβ levels and protection of mitochondrial homeostasis [22]. Moreover, GFAP is a marker for the intermediate filament protein in astrocytes, which are the key glial cell type in the central nervous system [23,24]. Reactive astrocytosis is a common feature of central nervous system (CNS) injury during the process of aging [25], and is often accompanied by increased GFAP expression [26]. Taken together, these results demonstrate that WS may serve a potential neuroprotective function in aging mice by decreasing the expression of Aβ and GFAP.

Aging is associated with chronic, low-grade increases in circulating levels of inflammatory markers [27]. Studies have shown that the testes of long-lived mice show anti-inflammatory and antioxidant capacities, whereas short-lived mice suffer from inflammatory and oxidative processes in the testes [28]. Ginés et al. [29] observed that the inflammatory status of old SAMP8 mice was elevated and the protein expression of TNF-α and IL-6 was increased, which is consistent with the results of our analysis.
Notably, TNF-α and IL-6 are not only indicators of inflammation, but also causes of morbidity and mortality in the elderly [30]. In addition, we found that WS was able to decrease the pro-inflammatory status of SAMP8 mice, since the expression of TNF-α and IL-6 was significantly decreased after its administration. Aging is associated with proinflammatory cytokines, which might induce the formation of reactive oxygen species [31]. MDA is a marker of oxidation, whereas the SOD and GSH-Px indicators reflect the antioxidative properties of cells. Notably, it has been demonstrated that onjisaponin B can prevent cognitive impairment in D-gal-induced aging in rats by regulating inflammatory mediators (TNF-α, IL-6, and IL-1β) and oxidative stress-related indicators (MDA, SOD, GSH, and GSH-Px) [32]. Similarly, our results indicated that the administration of WS significantly alleviated aging-related inflammation and oxidative damage by reducing the expression of MDA, IL-6, and TNF-α, and increasing the activity of SOD and GSH-Px. The antioxidant and anti-inflammatory activity of WS in aging mice may result from its main components, of which lutein [33], anthocyanin [34], zeaxanthin [35], and resveratrol [29] are well-known antioxidative and anti-inflammatory molecules.

A crucial component of aging is a series of functional and structural alterations of the immune system that can manifest as a weakened immune response, autoimmunity, and constitutive low-grade inflammation [36]. A previous study reported that the features of an aged immune system include a significantly reduced naïve T cell proliferation rate and changes in the subset composition of T cells caused by thymus shrinkage [37]. Abe et al. [38] investigated the defects of immune cells in the senescence-accelerated mouse and found that there were qualitative defects in CD4+ T cells in SAMP8 mice, which might be closely related to the low endogenous activity of NK cells; these findings are consistent with the results of this study. Further, we observed that WS and FMT improved the immune functions of SAMP8 mice by enhancing the numbers of these immune cells.

In recent years, exciting advances have been made in the study of the mechanisms of aging, especially in work concerning the gut microbiota, which is mainly altered by diet. In this study, we focused on the altered microbial composition induced by aging and WS. Results revealed that the diversity and composition of the gut microbiome were significantly changed in aging mice, and that WS and FMT had the potential to restore the imbalance of gut microbes in aging mice. Compared with the SD group, the WS diet triggered marked changes in gut microbial composition, including an increase in Ruminococcus and Butyrivibrio and a reduction in Lachnoclostridium and Ruminiclostridium. Ruminococcus is one of the key bacteria in the human colon microbiota, which is highly specific to resistant starch through the formation of amyloid [39]. It is also a microbial genus associated with age. A previous study showed that the abundance of Ruminococcus in mice transplanted with feces from long-living people was higher than that in mice transplanted with feces from elderly people [40]. Butyrivibrio is a butyrate-producing bacterium, and it has been connected with the deterioration of clinical symptoms and health status. Luan et al.
[41] observed that the abundance of Butyrivibrio showed a significant reduction starting from 7 months before the death of healthy centenarians. Moreover, butyrate can attenuate proinflammatory cytokine expression in microglia in aged mice and can counterbalance age-related microbiota dysbiosis, potentially improving neuro-inflammation [42]. These studies suggest that Ruminococcus and Butyrivibrio are beneficial bacteria that may help slow down the progress of aging. Lachnoclostridium and Ruminiclostridium have been found to be related to obesity, inflammation, and aging; however, the specific mechanism of their roles in the aging process has not been reported. Together, the key gut microbiota related to aging underwent a critical transformation due to WS intake.

Subsequently, we investigated the altered metabolites in SAMP8 mice fed with WS using LC-MS analysis. Compared with the SD group, several upregulated metabolites, such as vitexin and melatonin, and significantly downregulated metabolites, such as histamine, were detected in the WS group. The apigenin flavone glycoside vitexin possesses antioxidant and anti-inflammatory roles and also has lifespan-extending activity [43]. Melatonin is known to reduce oxidative stress in aging cells [44]. In addition, the release of histamine that occurs in aging individuals may impact the aging process by the induction of an allergic reaction [45]. Taken together, we speculated that WS might alter the expression of metabolites to exert an anti-aging effect in SAMP8 mice.

To further explore the relationship between the changed metabolites and the altered microbiota, we conducted a correlation analysis. Results revealed that Ruminococcus showed a significant positive correlation with D-glucuronic acid. Studies have shown that Ruminococcus gnavus, as part of the gut microbiome, can produce inflammatory polysaccharides containing a rhamnose backbone and glucose side chains to induce the secretion of TNF-α in dendritic cells [46]. Tang et al. identified an acid polysaccharide consisting of D-arabinose, D-xylose, D-glucose, D-galactose, D-galacturonic acid, and D-glucuronic acid, and demonstrated its antioxidant and anti-aging properties [47]. Further, we showed that WS increased the levels of Ruminococcus and D-galacturonic acid and reduced the expression of TNF-α, indicating that WS might exert an anti-aging effect via the activity of Ruminococcus to promote the secretion of D-glucuronic acid, thereby inhibiting inflammatory factors such as TNF-α. In addition, we found that the presence of Butyrivibrio and 5-acetamidovalerate was positively correlated. However, the role of 5-acetamidovalerate in aging has not been clarified; thus, further studies are required to explore this link in the development of aging. In this study, different types of metabolites had positive effects on the inflammatory response and gut microbial composition, so it is necessary to further analyze the correlation of hub metabolites and biochemical parameters.

To summarize, this is the first study to investigate the effect of WS on aging mice. Our findings demonstrated that dietary supplementation with WS could improve the symptoms of aging, including improved learning and memory ability, alleviated neuropathological alterations, and enhanced immune function. Moreover, compared with normal mice, the microbiota and metabolites of aged SAMP8 mice were significantly changed, while WS might restore them to normal levels.
WS significantly increased the abundance of Ruminococcus and Butyrivibrio, and decreased the abundance of Lachnoclostridium and Ruminiclostridium. Moreover, correlation analysis distinctly revealed that Butyrivibrio was positively correlated with D-glucuronic acid and that Ruminococcus was positively associated with 5-acetamidovalerate. Together, WS exerted anti-aging effects via modulating gut microbiota and metabolites, and could therefore be a valuable dietary approach for preventing aging.

Animals and experimental design

All animal procedures were approved by the Institutional Animal Care and Use Committee of Second Military Medical University. A total of 60 male SAMP8 mice (aged 3-4 months) and 20 male senescence-accelerated mouse resistant 1 (SAMR1) mice (aged 3-4 months) were purchased from Zhishan (Beijing) Institute of Health Medicine (Beijing, China). SAMP8 mice were allocated into three groups (n = 20 per group): the standard diet (SD) group, in which mice had ad libitum access to SD; the WS group, in which mice had ad libitum access to processed WS laboratory feed; and the FMT group, in which mice were fed SD and were injected in the anus with 200 μL of fecal suspension derived from the WS group. SAMR1 mice fed SD were used as the control group. The SD consisted of 9% water, 19% protein, 4% fat, 5% fiber, 8% ash, and 55% carbohydrate. All mice had free access to water and were maintained at standard temperature (25 ± 2°C) and relative humidity (40 ± 5%) under a 12 h light/dark cycle. The interventions were continued for 15 consecutive weeks.

Shuttle-box test

The shuttle-box test was performed in accordance with a previous study [48] and was used to evaluate the learning and memorization abilities of the mice. Before the experiment, the mice were placed into chambers (25 × 18.5 × 30 cm) and acclimatized to the new situation for 5 min. Each training trial consisted of a conditioned stimulus (60 dB, 5 s) followed by an electrical shock (100 V, 50 Hz, AC, 10 s). If a mouse escaped to the other side of the shuttle box before the electrical stimulation, it was recorded as active avoidance. If a mouse completed the shuttle after the electrical shock, it was considered passive avoidance. If no escape occurred after both stimuli, no avoidance behavior was recorded. The test was conducted for 5 consecutive days, 10 times per day, with an inter-trial interval of 20 s. After the experiment, the response rates for active avoidance and passive avoidance were calculated to evaluate the memorization ability of the mice. Additionally, the time needed for active and passive avoidance was used to evaluate the learning ability of the mice.

Detection of basic indices

The body weight of each mouse was measured every week, and the daily food intake of the mice was calculated. In addition, the activity and hair condition of the mice in each group were observed and recorded. At the end of the trial, the lean body mass and fat percentage of the mice in each group were examined using an awake animal body composition analyzer (MesoQMR23-060H; Shanghai Electronic Technology Co., Ltd, China). Meanwhile, the fat content and its distribution were detected using the magnetic resonance imaging analyzer MesoMR23-060H-I.

Sample collection

After the experiments, the mice were anesthetized using isoflurane and blood samples were collected. Serum was extracted after centrifugation of the blood samples at 3000 rpm for 15 min and stored at −80°C.
Subsequently, the mice were sacrificed and immediately dissected on ice to obtain the cerebrum, spleen, and hind limbs, as described previously [4]. After washing with saline and drying with filter paper, the weights of the harvested organs were recorded. The muscle circumference of the hind limbs of the mice was measured and recorded. The whole hippocampus was isolated from the cerebrum for HE staining and IHC analysis. The feces (50 mg) of the mice at 0, 8, and 15 weeks were collected and stored at −80°C for the analysis of short-chain fatty acids (SCFA) and the intestinal microflora.

Hematoxylin-eosin staining and immunohistochemical analysis

The hippocampus was immediately fixed upon isolation with 4% paraformaldehyde (Wuhan Servicebio Technology Co., Ltd., Wuhan, China), and subsequently dehydrated in ethanol and embedded in paraffin. Tissue sections of 5 μm thickness were prepared. For HE staining, the sections were stained with hematoxylin (Sigma-Aldrich) for 10 min, followed by incubation with eosin (Sigma-Aldrich) for 30 s at 25°C. The histological morphology of neurons in the CA1 area of the hippocampus was observed using an optical microscope (NIKON Eclipse Ci, Japan) at magnifications of 100× and 400×, respectively. For IHC staining, the sections were subjected to 3 min of microwave heating for antigen retrieval and were subsequently blocked with 5% goat serum (Sigma-Aldrich) at 37°C for 1 h. Next, the sections were incubated with rabbit anti-glial fibrillary acidic protein (GFAP; 1:500; Abcam) primary antibody at 4°C overnight, followed by treatment with horseradish peroxidase (HRP)-conjugated sheep anti-rabbit IgG (1:1500; Abcam) at 37°C for 1 h. Visualization of the immunoreaction was performed using 3,3′-diaminobenzidine (Sigma-Aldrich). The stained images were photographed using a microscope at 400× magnification.

Quantification of immune cells via flow cytometric analysis

Flow cytometry was used to identify the immune cell subsets from the fresh spleen tissues of the mice. Briefly, the spleen tissues were dissociated into single cells by trituration and cell lysis with red blood cell lysis buffer (Cat. No: 420301; Biolegend, San Diego, CA, USA). After centrifugation, the density of the single-cell suspension was adjusted to 1 × 10⁶ cells/mL. The CD3+CD8+ T cells, CD3+CD4+ T cells, CD4+IFNγ+ T cells, and Treg cells were stained with antibodies for labeling. Next, the percentages of the different immune cell subsets were identified by flow cytometry (FACSFortessa X20). The antibodies used included CD3 eF450 (Cat. No: 11-0042-82, Thermo).

16S rRNA sequencing

Total microbial genomic DNA (gDNA) from the fecal samples (5 g) of each mouse was extracted using the QIAamp DNA Stool Mini Kit (50) (51504, Qiagen) as per the manufacturer's protocol. The V3-V4 regions of the bacterial 16S rRNA gene were amplified using the primer pair 356F (5'-CCTACGGGNGGCWGCAG-3') and 803R (5'-GACTACHVGGGTATCTAATCC-3') [49]. PCR amplification, library preparation, and sequencing were performed as described previously [49]. The Illumina MiSeq (Illumina, San Diego, CA, USA) sequencing platform was used to generate paired-end sequencing reads (2 × 300 bp). Subsequently, FLASH was used to merge the paired reads and Trimmomatic was used to perform quality control of the reads obtained.

Bioinformatics analysis

USEARCH (version 7.0) was used for Operational Taxonomic Unit (OTU) clustering at a similarity level of 97%.
The Ribosomal Database Project classifier v2.2 was applied to assign taxonomy to each representative OTU against the Silva database [50]. Then, the obtained OTU data were used for taxonomical assignments using the RDP Classifier algorithm (http://rdp.cme.msu.edu/classifier/classifier.jsp). The analyses of the richness index, alpha diversity (Chao index [51]), and beta diversity (PCoA) were conducted through QIIME (Version 1.9, http://qiime.sourceforge.net/). The t-test and the Wilcoxon rank-sum test were used to assess the differences between the groups. OTUs with P < 0.05 were considered to be significantly different between groups. A principal component analysis (PCA) and UniFrac-based PCoA were performed using the R statistical software to perform the clustering of the different samples.

Metabolomic analysis

A total of 100 μL of serum and 60 mg of stool from each mouse in the different groups (n = 8) were collected. Subsequently, 10 μL and 20 μL of the internal standard (0.3 mg/mL L-2-chlorophenylalanine and 0.01 mg/mL 17:0 Lyso PC) were added to the serum and stool samples, respectively, and a 400 μL mixture of methanol and water (V:V = 1:4) was added to remove impurities. The supernatant of each sample was used for liquid chromatography-mass spectrometry (LC-MS) analysis using a Dionex UltiMate 3000 UHPLC (Thermo Scientific). Chromatographic separation was conducted using a 100 × 2.1 mm, 1.8 μm ACQUITY UPLC HSS T3 column at 50°C with a flow rate of 0.35 mL/min. Water with 0.1% formic acid was used as solvent A of the mobile phase, and acetonitrile with 0.1% formic acid was used as solvent B.

Data pre-processing, multivariate analysis, and metabolic annotation

The Progenesis QI v2.3 software was used for data processing, including baseline filtering, peak identification, integration, correction of retention time, peak alignment, and normalization. Next, multivariate statistical analyses, including PCA and orthogonal partial least-squares discriminant analysis (OPLS-DA), were used to observe the distribution of samples and the differential metabolites among groups. The degree of difference between two groups was evaluated using the t-test, and the metabolites showing significant differences were selected using thresholds of VIP > 1 (VIP = variable importance in the projection) and P-values < 0.05. Volcano plots were created to show the P-values and fold-change values of metabolites between the groups. Next, hierarchical clustering was performed to identify the expression differences of the top 50 metabolites in different samples. The functions of the differential metabolites were annotated using Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis.

Correlation analysis for microbial taxonomy and metabolites

To evaluate the potential associations between the differential microbes and metabolites identified in the WS and SD groups, a Spearman's correlation analysis was conducted. The Spearman's correlation coefficient was calculated between the relative abundance of OTUs and the response intensity of the metabolites to assess the relationship between microbial taxonomy and metabolites. A network showing the top 20 results was plotted.

Statistical analysis

Data were expressed as mean ± standard deviation (mean ± SD) and were analyzed using the SPSS 19.0 software (IBM, New York, NY, USA). Data from more than three groups were analyzed using a one-way analysis of variance (ANOVA) and the least significant difference (LSD) test. A value of P < 0.05 was considered statistically significant.
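The differential-metabolite rule in the multivariate analysis above (VIP > 1 from the OPLS-DA model combined with a t-test P < 0.05) can be sketched as a simple filter. In the sketch below, the VIP scores are assumed to have already been exported from the multivariate model (e.g. from Progenesis QI), and all numbers are toy values rather than study data.

```python
# Toy sketch of the VIP > 1 and P < 0.05 selection of differential metabolites.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
metabolite_names = ["vitexin", "melatonin", "histamine"]
ws_group = pd.DataFrame(rng.normal(10, 2, (8, 3)), columns=metabolite_names)
sd_group = pd.DataFrame(rng.normal(9, 2, (8, 3)), columns=metabolite_names)
vip = pd.Series({"vitexin": 1.8, "melatonin": 1.3, "histamine": 0.7})  # from OPLS-DA

rows = []
for met in metabolite_names:
    t_stat, p_value = ttest_ind(ws_group[met], sd_group[met])
    log2_fc = np.log2(ws_group[met].mean() / sd_group[met].mean())
    rows.append({"metabolite": met, "log2FC": log2_fc, "p": p_value, "VIP": vip[met]})

table = pd.DataFrame(rows)
differential = table[(table["VIP"] > 1) & (table["p"] < 0.05)]  # retained metabolites
print(differential)
```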
Supplementary Figures

Supplementary Figure 1. Comparisons of the numbers of differential metabolites at different experimental time points from fecal samples. (A) PCoA plot and the counts of differential metabolites identified at baseline, 8 weeks, and 15 weeks between SAMP8 and SAMR1 samples. (B) PCoA plot and the counts of differential metabolites identified at 8 and 15 weeks between the SD and WS groups. (C) PCoA plot and the counts of differential metabolites identified at 8 and 15 weeks between the FMT and WS groups.

Please browse the Full Text version to see the data of Supplementary Table 3.
Combined Impact of Inflammation and Pharmacogenomic Variants on Voriconazole Trough Concentrations: A Meta-Analysis of Individual Data

Few studies have simultaneously investigated the impact of inflammation and genetic polymorphisms of cytochromes P450 2C19 and 3A4 on voriconazole trough concentrations. We aimed to define the respective impact of inflammation and genetic polymorphisms on voriconazole exposure by performing individual data meta-analyses. A systematic literature review was conducted using PubMed to identify studies focusing on voriconazole therapeutic drug monitoring with data on both inflammation (assessed by C-reactive protein level) and the pharmacogenomics of cytochromes P450. Individual patient data were collected and analyzed in a mixed-effect model. In total, 203 patients and 754 voriconazole trough concentrations from six studies were included. Voriconazole trough concentrations were independently influenced by age, dose, C-reactive protein level, and both cytochrome P450 2C19 and 3A4 genotypes, considered individually or through a combined genetic score. An increase in the C-reactive protein of 10, 50, or 100 mg/L was associated with an increased voriconazole trough concentration of 6, 35, or 82%, respectively. The inhibitory effect of inflammation appeared to be less important for patients with loss-of-function polymorphisms for cytochrome P450 2C19. Voriconazole exposure is influenced by age, inflammatory status, and the genotypes of both cytochromes P450 2C19 and 3A4, suggesting that all these determinants need to be considered in approaches to the personalization of voriconazole treatment.

Introduction

Voriconazole (VRC) is a broad-spectrum azole antifungal agent indicated for the treatment and prevention of invasive fungal infections. It is one of the first-line treatments for invasive aspergillosis [1,2]. However, despite adequate care, mortality due to invasive aspergillosis remains very high, ranging between 19 and 61% for patients with hematological malignancies [3]. One of the possible causes of these many failures is insufficient exposure to the drug [4]. Indeed, VRC exhibits high pharmacokinetic variability, with insufficient concentrations exposing patients to an increased risk of treatment failure and excessively high concentrations resulting in adverse effects and the risk of treatment discontinuation [5,6]. In this context, VRC therapeutic drug monitoring (TDM) is recommended throughout treatment [2,7]. As invasive aspergillosis is a serious and directly life-threatening disease, it is essential to achieve effective concentrations as soon as treatment is initiated [8][9][10]. In this context, VRC personalized treatment with a priori dose adjustment has already been proposed, rather than adjustment five days after the initiation of treatment [4]. Such strategies are most frequently based on the cytochrome P450 (CYP) 2C19 genotype [11][12][13][14], as many studies have demonstrated the contribution of CYP2C19 polymorphisms to the variability of VRC trough concentrations (Cmin) [15][16][17][18]. In addition to CYP2C19 polymorphisms, numerous other factors are known to influence VRC exposure. For example, certain genetic variants of CYP3A4 (rs35599367 and rs1464637) [19][20][21] and inflammatory status [22][23][24][25][26] are associated with increased VRC Cmin.
The clinical implications have been limited, as only a few small-scale studies have simultaneously evaluated genetic polymorphisms of both CYP2C19 and CYP3A4, along with the inflammatory status of patients [25][26][27][28]. Moreover, it has been suggested that the effect of inflammation on VRC exposure may depend on the CYP genotype [27,28]. We aimed, therefore, to more precisely define the respective impact of the inflammatory state and genetic variants on VRC exposure by gathering available data to perform an individual patient data meta-analysis.

Materials and Methods

This meta-analysis was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-analysis statement guideline [29]. The study is registered with PROSPERO (CRD42020162292), and the protocol and systematic search strategy are available online.

Systematic Literature Review

A systematic literature review was conducted using PubMed to identify studies evaluating the impact of CYP genetic polymorphisms on VRC exposure, taking into account the inflammatory status of patients. The following search terms were used: ("VORICONAZOLE") AND ("PHARMACOGENETICS" OR "PHARMACOGENOMICS" OR "THERAPEUTIC DRUG MONITORING" OR "CYP2C19" OR "CYP3A4" OR "PHARMACOKINETICS" OR "POLYMORPHISM"). The terms ("VORICONAZOLE") AND ("THERAPEUTIC DRUG MONITORING" OR "PHARMACOKINETICS") AND ("INFLAMMATION") were used to identify studies that evaluated the impact of inflammation on VRC concentrations.

Study Selection Criteria

One of the authors (LB) first screened studies based on titles and abstracts. Then, a second selection was made by two authors (LB and EG) based on the full text of the manuscripts. Studies were deemed eligible if they were case-control or cohort studies in which VRC Cmin was measured at pharmacokinetic steady-state and genetic data (at least CYP2C19 genotyping ± CYP3A4 and 3A5 genotyping) were available. Even if the inflammatory status of patients was not investigated in some studies, this was not a discriminating criterion for inclusion, because CRP levels determined during routine medical care could be retrospectively collected. Literature reviews, case reports, and studies conducted on pediatric populations, healthy volunteers, or fewer than 10 patients were excluded.

Data Extraction

The objective of this work was to collect individual patient data from the studies selected during the systematic literature review. Thus, all the authors were contacted three times by email. The following data were requested and merged into a single database for analysis: patient identification, age, sex, weight, main pathology, CRP level, method of measuring CRP, liver enzymes (aspartate aminotransferase (ASAT) and alanine aminotransferase (ALAT)), VRC Cmin determined at pharmacokinetic steady-state, the method for VRC Cmin determination, date of blood collection, date of VRC initiation, daily dose, route of administration, concomitant proton-pump inhibitor (PPI) treatment, CYP2C19, 3A4, and 3A5 genotype, and genotyping technique. The inclusion criteria were patients with a CYP2C19 genotype ± CYP3A4 and CYP3A5 genotypes with at least one pair of VRC Cmin and CRP level determined concomitantly. The absence of major drug-drug interactions was verified in each included study. The presumed ethnicity of the patients was determined based on the geographical origin of the studies, except for one study in which ethnicity was specified [30].
VRC Cmin values below the limit of quantification were replaced by the lower limit of quantification of the method. The CYP2C19 phenotype (poor (PM), intermediate (IM), extensive (EM), rapid (RM), or ultrarapid (UM) metabolizer) was determined for each patient based on CPIC recommendations [31]. Subsequently, three groups were defined: patients with increased metabolic capacity (RM and UM), patients with decreased metabolic capacity (IM and PM), and patients with standard metabolic capacity (EM). Similarly, the phenotypes of CYP3A4 and 3A5 were determined for each patient if data were available. For CYP3A4, patients with the rs35599367 (CYP3A4*22) allele were assigned an IM phenotype. For CYP3A5, the rs776746 (CYP3A5*1) allele is associated with the expression of this cytochrome, unlike the *3 allele, which is associated with non-expression. The combined genetic score, including the CYP2C19, CYP3A4, and CYP3A5 genotypes, was calculated as previously described (see [19] and Table S2).

Quality Assessment

The quality of the included studies was assessed by two authors (LB and CK) using the Newcastle-Ottawa Scale (NOS), as they were observational case-control or cohort studies. This scale assigns a maximum of 9 stars for good-quality studies with a low risk of bias. It is based on the selection of the study groups, the comparability of the groups, and the determination of the exposure or outcome of interest for case-control or cohort studies.

Statistical Analysis

A linear mixed-effect model was used to assess the influence of various factors on VRC exposure. In the base mixed-effect model, random effects on the intercept were included for interindividual variability and study. Then, we performed univariate analyses for all continuous (age, weight, ASAT, ALAT, CRP levels, daily dose, and combined genetic score) and categorical (route of administration, concomitant PPI intake, CYP3A4, 3A5, and 2C19 genotypes) variables. Covariates associated with a p-value < 0.1 in the univariate analysis were considered clinically relevant and biologically plausible and were, therefore, included in the multivariate intermediate model. The final model was selected using a backward stepwise process based on the Akaike information criterion. All assumptions were checked in the final model, including linearity, absence of collinearity, homoscedasticity, normality of residuals, absence of influential data points, and independence. Finally, the marginal means of the selected variables in the final model were plotted. Missing data for weight, ASAT, and ALAT were imputed using subject-centered means of available data from other visits. The genetic score was included in the final model but was not computable for all studies due to the lack of data for the CYP3A genotype. We, therefore, performed two sensitivity analyses by substituting the genetic score with the CYP2C19 phenotype alone and with the CYP2C19 and CYP3A4 phenotypes. A p-value < 0.05 was considered statistically significant. All statistical analyses were performed using Jamovi (version 1.1.9) and R (version 3.6.1).

General Characteristics of Studies

The study selection process is shown in Figure 1. Among the 1793 articles initially identified, 85 were selected for full-text evaluation. Forty-six articles were selected after the exclusion of 39 that did not meet the inclusion criteria. Eleven of the 46 corresponding authors who were approached responded to our emails, resulting in the collection of individual data from six studies.
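The linear mixed-effect model outlined in the Statistical Analysis subsection could, for instance, be expressed as follows in Python with statsmodels. This is only an illustrative sketch with synthetic data and hypothetical variable names; for simplicity it includes only the patient-level random intercept, whereas the actual analysis also accounted for a study-level random effect.

```python
# Hedged sketch of a linear mixed-effect model for VRC Cmin with a random
# intercept per patient; data, variable names and effect sizes are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 120
df = pd.DataFrame({
    "patient_id": rng.integers(0, 40, n),           # repeated Cmin values per patient
    "age": rng.integers(20, 80, n),
    "daily_dose": rng.choice([400, 600, 800], n),   # mg/day
    "crp": rng.gamma(2.0, 40.0, n),                 # mg/L
    "cyp2c19_group": rng.choice(["EM", "IM_PM", "RM_UM"], n),
})
df["cmin"] = (0.002 * df["daily_dose"] + 0.01 * df["crp"]
              + rng.normal(0, 0.5, n)).clip(0.1)    # mg/L, toy outcome

model = smf.mixedlm(
    "cmin ~ age + daily_dose + crp + C(cyp2c19_group)",
    data=df,
    groups=df["patient_id"],   # random intercept for interindividual variability
)
fit = model.fit()
print(fit.summary())
```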
General Characteristics of Studies The study selection process is shown in Figure 1. Among the 1793 articles initially identified, 85 were selected for full-text evaluation. Forty-six articles were selected after the exclusion of 39 that did not meet the inclusion criteria. Eleven of the 46 corresponding authors who were approached responded to our emails, resulting in the collection of individual data from six studies. The main characteristics of the studies included in the meta-analysis are summarized in Table 1. Three studies were retrospective cohort studies [15,19,30], two were prospective observational studies [26,28], and one was a retrospective case-control study [25]. A total of 203 patients were included, contributing 754 VRC Cmin values. The genotypes of both CYP2C19 and CYP3A4/5 were determined in 4/6 studies, corresponding to 136/203 (67.0%) patients. All characteristics of the patients, including the frequency of the various phenotypes for each CYP, are summarized in Table S1. The combined genetic scores of the various CYP2C19 and CYP3A4/5 polymorphisms are presented in Table S2. Determinants of Voriconazole Trough Concentration The results of the univariate analysis are presented in Table 2. Weight, liver function, concomitant treatment by PPI, and the CYP3A5 phenotype were not associated with the VRC Cmin. Conversely, age, VRC daily dose, and CRP levels were significantly associated with a higher VRC Cmin, whereas oral VRC administration was significantly associated with a lower VRC Cmin. The phenotypes of CYP2C19 (RM/UM versus EM) and CYP3A4 were associated with the VRC Cmin, although statistical significance was not reached for CYP3A4. Similarly, the combined genetic score was associated with the VRC Cmin, with a lower VRC Cmin for a higher combined genetic score. Results of the multivariate analysis are presented in Table 3, which shows the results of three different linear mixed-effect models. All three models showed that higher age, VRC daily dose, and CRP levels were significantly and independently associated with a higher VRC Cmin. Model 1, based on the largest number of observations, showed the CYP2C19 phenotype to be significantly associated with variations of the VRC Cmin. Model 2, which individually considered each phenotype of CYP2C19 and 3A4, showed that the phenotypes of both CYPs significantly influenced the VRC Cmin. Model 3, which considered the combined genetic score, showed that an increase in the genetic score was significantly associated with a decrease in the VRC Cmin. 
An increase of 1 unit of the combined genetic score was associated with a 43% reduction in the VRC Cmin, whereas an increase in the CRP level of 10, 50, or 100 mg/L was associated with increases of 6, 35, and 82%, respectively. Impact of Inflammation Modulated by CYP2C19-Mediated Metabolism of VRC We tested the interaction between inflammation and pharmacogenetic markers on the VRC Cmin in the three multivariate models (Table 3). Significant interactions were found between CRP and the CYP2C19 phenotype in model 2 and between CRP and the genetic score in model 3. The evolution of the VRC Cmin according to CRP levels, stratified by CYP2C19 genotype, is shown for model 2 in Figure 2. The effect of inflammation was reduced for patients with a PM/IM phenotype, whereas its impact was not significantly different between EM and RM/UM patients. 
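The effect sizes reported above can be loosely sanity-checked with simple arithmetic. Assuming the percentage effects reflect a multiplicative (log-scale) relationship between CRP and the VRC Cmin, which is our assumption here rather than a statement about the published model, the per-mg/L multiplier implied by a 6% increase per 10 mg/L approximately reproduces the reported 35% and 82% increases for 50 and 100 mg/L, and scales the median Cmin of 1.8 mg/L to roughly the reported 3.3 mg/L for a 100 mg/L CRP rise.

```python
import math

# Implied per-mg/L multiplier if +10 mg/L CRP raises Cmin by ~6% (multiplicative assumption).
per_mg_l = 1.06 ** (1 / 10)

for delta_crp in (10, 50, 100):
    increase_pct = (per_mg_l ** delta_crp - 1) * 100
    print(f"CRP +{delta_crp} mg/L -> Cmin +{increase_pct:.0f}%")
# Prints roughly +6%, +34%, +79%; close to the reported 6%, 35%, 82%, which
# presumably come from the fitted coefficient itself rather than this back-calculation.

median_cmin = 1.8  # mg/L, median Cmin reported in the meta-analysis
print(f"1.8 mg/L with CRP +100 mg/L -> {median_cmin * per_mg_l ** 100:.1f} mg/L")  # ~3.2 mg/L
```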
Quality Assessment The assessment of study-specific quality scores from the NOS system is summarized in Table S3. For those studies for which a criterion did not apply, the indication "not applicable" (NA) was entered. The overall quality of the included studies was good (all the studies received a score of 7 or 8 stars). Discussion This meta-analysis, performed on a large number of adult patients and VRC Cmin determinations, shows that the VRC Cmin is influenced by the inflammatory status and the genotypes of both CYP2C19 and 3A4, in addition to the age of the patients and the dose of VRC. The positive and independent association between CRP levels and the VRC Cmin is in accordance with the results of numerous previous studies [24,[26][27][28] and can be explained by the phenomenon of inflammation-induced phenoconversion [32,33]. Indeed, the expression and activity of CYPs are down-regulated during an acute inflammatory episode, notably under the effects of pro-inflammatory cytokines such as interleukin-6, which leads to a reduction in CYP-mediated drug metabolism [32,34]. Such an inhibitory effect of inflammation has been demonstrated in vitro for CYP3A4 and CYP2C19 [32], the main enzymes involved in VRC metabolism in adults [35]. Increases of 10, 50, and 100 mg/L in CRP levels were associated with an increase in the VRC Cmin of 6, 35, and 82%, respectively. For example, an initial VRC Cmin of 1.8 mg/L (the median VRC Cmin in this meta-analysis) would increase to 3.3 mg/L for a 100-mg/L increase in the CRP level. This factor is of the same order of magnitude as that found in two European studies [23,36] but larger than that found in a Chinese study, which reported an increase in the VRC Cmin of 0.6 mg/L [37]. This difference can be explained by the different genotypic frequencies between these studies. Indeed, 15.8% of the Asian population are PMs for CYP2C19, versus 2.2% of the Caucasian population [38] and only 4.4% in this study. Concerning pharmacogenetic markers, the univariate analysis showed that the VRC Cmin tended to be associated with the genotypes of CYP2C19 and 3A4 but was not influenced by that of CYP3A5. Conversely, the combined genetic score, which integrates all these genotypes, was significantly associated with the VRC Cmin. Multivariate analysis demonstrated a significant impact of the CYP2C19 phenotype on the VRC Cmin, although none of the pairwise comparisons reached statistical significance except RM/UM versus EM. This result can be explained by the fact that 78% of the patients were presumably Caucasian, and the number of patients with the PM CYP2C19 phenotype was low (only 9/203 patients (4.4%)). Concerning CYP3A4, the trend towards an increased VRC Cmin for IM patients observed in the univariate analysis was confirmed in multivariate model 2, in which significance was reached. This finding is in accordance with those of previous studies that demonstrated the impact of genetic polymorphisms of CYP3A4 on VRC exposure [14,[19][20][21]. Conversely, we did not find any association between the VRC Cmin and the CYP3A5 genotype. This result is consistent with those of two previous studies performed in healthy European volunteers [14,39] but not with a Chinese study that highlighted a trend towards a higher frequency of the CYP3A5*1/*3 genotype in patients with a low VRC Cmin [21]. These discrepancies may be related to the different frequencies of the CYP3A5 genotype depending on the study population. Indeed, the frequency of CYP3A5 expression is relatively low in Europeans, at almost 15% (9.9% in this study), whereas 41% of the patients included in the study of He et al. expressed CYP3A5 [21]. Further research is needed to elucidate the impact of the CYP3A5 genotype on VRC exposure. Two previous studies included in this meta-analysis had suggested that the impact of inflammation on VRC exposure could be modulated by CYP genotypes [27,28]. 
We obtained a similar result in this meta-analysis, with a smaller effect of inflammation for patients with decreased metabolic capacity for CYP2C19 (IM and PM) in model 2 (significant interaction between inflammation and the CYP2C19 phenotype) than for those with normal (EM) or elevated metabolic capacity (RM and UM) (see Figure 2). A study conducted in the Chinese population [37] reported a smaller increase in the VRC Cmin in the presence of an inflammatory syndrome than did two studies conducted in European populations [23,36]. Such a finding is consistent with our meta-analysis, as a higher frequency of PMs for CYP2C19 was found in the Asian population than in the Caucasian population [38]. The fact that VRC exposure appears to be independently influenced by age, inflammatory status, and genetic polymorphisms of both CYP2C19 and 3A4 calls into question the relevance of VRC dose-adjustment strategies based solely on the CYP2C19 genotype [11][12][13]. Although these approaches are useful to reduce the risk of an insufficient VRC Cmin in prophylaxis [11,13], their efficiency could be improved by integrating additional determinants [14], particularly the CYP3A4 genotype and inflammatory status. In addition, our findings highlight the fact that interpretation of the VRC Cmin measured in routine care, and the resulting dose adjustment, should account for the inflammatory status of the patient [32]. This study is the first to analyze the respective impact of inflammation and pharmacogenetic markers on such a large number of patients and observations. Nonetheless, it had certain limitations. First, among the 46 authors contacted by email, 35 (76%) did not answer, resulting in a still relatively small sample size. In addition, the primary endpoint, namely the VRC Cmin, is an intermediate endpoint, and the consequences of variations in the VRC Cmin (due to genetic polymorphisms and inflammatory status) on treatment efficacy and/or adverse effects were not investigated. However, the concentration-effect relationship of VRC is well characterized for both efficacy and toxicity [6], suggesting that any variation in the VRC Cmin would directly influence the treatment outcome. Moreover, we assessed VRC exposure by the VRC Cmin, whereas the ideal parameter would have been the area under the curve of VRC, as recently proposed [14]. Finally, the included studies were heterogeneous in their methodology, with, for example, the absence of CYP3A4/5 genotypes for two of the six studies (representing 67 patients and 266 VRC Cmin determinations), and most of the patients were presumed to be Caucasian, resulting in a small number of PMs for CYP2C19. In conclusion, this meta-analysis demonstrates that VRC exposure is independently influenced by the dose of VRC, age, inflammatory status, and the genotypes of both CYP2C19 and 3A4 aggregated in a combined genetic score. These findings suggest that an a priori VRC dose-adjustment strategy should consider the CYP2C19 and CYP3A4 genotypes, as well as the patient's inflammatory status. More generally, in the era of predictive, preventive, and personalized medicine, inflammatory markers, already considered to stratify patients with regard to the risk of non-communicable diseases [40][41][42], should be further studied in pharmacokinetic studies. 
In light of the example of voriconazole, existing strategies for the personalized treatment of narrow-therapeutic-index drugs, most often based on one or a few pharmacogenomic or demographic parameters, could be improved by the integration of additional markers, such as inflammatory markers. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jcm10102089/s1, Table S1. Patient characteristics according to the studies included in the meta-analysis. Table S2. Calculated combined genetic scores (number of patients) according to CYP2C19 and CYP3A4/5 genotypes. Table S3. Quality assessment of included cohort studies. Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Institutional Review Board Statement: Ethical review and approval were waived for this study due to its design. Informed Consent Statement: The study was conducted according to the guidelines of the Declaration of Helsinki.
2021-05-28T05:21:21.399Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "e2173b93a35ad5cf4fbaa090abc4e219d52e9104", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/10/10/2089/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e2173b93a35ad5cf4fbaa090abc4e219d52e9104", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
9365641
pes2o/s2orc
v3-fos-license
Self-Reported Physical Health Associations of Traumatic Events in Medical and Dental Outpatients Abstract The purpose of this cross-sectional study was to understand the prevalence and severity of health-related sequelae of traumatic exposure in a nonpsychiatric, outpatient sample. Self-report surveys were completed by patients seeking outpatient medical (n = 123) and dental care (n = 125) at a large, urban academic medical center. Results suggested that trauma exposure was associated with a decrease in perceptions of overall health and an increase in pain interference at work. Contrary to prediction, a history of interpersonal trauma was associated with less physical and emotional interference with social activities. A history of trauma exposure was associated with an increase in time elapsed since last medical visit. Depression and anxiety did not mediate the relationship between trauma history and medical care. Based on these results, clinical and research implications in relation to the health effects of trauma are discussed. The results suggest that routine screening for traumatic events may be important, particularly when providers have long-term relationships with patients. INTRODUCTION Exposure to traumatic events, such as domestic violence, sexual assault, and child abuse, is highly prevalent in the general US population. [1][2][3] Yearly crime data suggest that exposure to community violence (eg, robbery, physical assault, and gun violence) is also common in certain U.S. neighborhoods. 4 Individuals vary in how they react and adjust to traumatic events. Some survivors experience impairments immediately after a traumatic event, and others exhibit delayed symptomatology. 5 Some individuals experience a level of posttraumatic resilience that is often associated with a flexible personality style and higher levels of social support. 6 For many survivors, trauma also has a large health burden. For example, survivors might cope by adopting maladaptive health behaviors such as overeating, smoking, substance use, and high-risk sexual behavior. 7,8 In the long term, trauma survivors may experience chronic levels of physiological reactivity, which may increase vulnerability to developing inflammatory or autoimmune disorders. 9 Overall, trauma exposure has been associated with subsequent arthritis/rheumatism, headaches, chronic pain, gastrointestinal and gynecological problems, and cardiac issues; exposure to multiple traumatic events results in an increased disease burden. 8, 10,11 Although negative coping behaviors contribute to health problems and high levels of overall healthcare utilization for illness, 12 trauma survivors may actually avoid seeking routine, preventive medical care, such as mammograms, cervical cancer screenings, and dental prophylaxis. [13][14][15][16] In addition to emotional distress, patients often experience specific physiological reactions when trauma memories are "retriggered." 17,18 Many healthcare visits involve the clinician being in close proximity to the patient and needing to touch the patient's body, which may be distressing for some survivors. Overall, research suggests that some trauma survivors show a paradox in engaging in medical care: increased overall usage, particularly for sick visits, and decreased use of routine, nonemergency care. 
Survivors who have high levels of depression, anxiety, and posttraumatic stress symptoms may be the most vulnerable because these symptoms contribute to avoidance of nonemergency medical care and an increase in medical complications over time. 13,19 TRAUMA-INFORMED CARE (TIC) INITIATIVES Recent efforts have been made to implement "trauma-informed care" in medical settings, where all services are assessed and potentially modified to include an understanding of how trauma impacts the life of an individual seeking services. 20 In addition to recognizing the prevalence of trauma, providers should understand that different types of traumatic events may have differential effects. Childhood sexual abuse, adult sexual assault, and family violence may influence an individual's ability to form trusting interpersonal relationships. [21][22][23] Because car accidents and natural disasters may cause serious physical injury, survivors may experience anxiety and depression, as well as long-term difficulties with pain. 24 Survivors of violence who live in poor and/or violent neighborhoods may have difficulty prioritizing their healthcare needs over the daily stressors of life. 25 Overall, healthcare professionals are very likely to treat patients who have experienced a wide range of traumatic events. The purpose of this study was to understand the prevalence and severity of sequelae of traumatic exposure in 2 outpatient dental and medical clinics in an urban academic healthcare setting. Study Design and Procedures A cross-sectional study design was used to establish the rates of traumatic events experienced by patients in a nonpsychiatric, outpatient sample at a large university-affiliated health center. Self-report surveys were completed by patients seeking outpatient medical (n = 123) and dental care (n = 125). This project was approved by the University of (deleted to ensure blind review) Institutional Review Board. Research assistants approached patients who were waiting for scheduled clinic appointments and offered them a $5.00 grocery store gift card to complete a brief self-report survey about "experiences in the healthcare system." Informed consent was obtained from all participants. During the process, participants were advised that participation was voluntary, participation would not influence their services at the University clinics, and no identifying information would be collected. Approximately 427 patients were approached in the waiting rooms of medical and dental outpatient clinics, prior to patients' visit with the clinician, over a 6-month time frame. Of those invited, 248 patients (58%) completed the survey. The most common reason for refusal was a lack of sufficient time to complete the survey prior to the scheduled appointment. Study Setting The University of (deleted to ensure blind review) Health Sciences campus serves patients who are largely uninsured or underinsured. Two hundred and forty-eight participants (69% female) completed the survey. Women were overrepresented in our sample. Overall, 65% of family medicine clinic patients and 56% of dental clinic patients are female. MEASURES Demographic Items Participants indicated their age (in years), sex, race/ethnicity, household income, highest level of education, and military status. 
Additionally, a single item measured medical service use (ie, "Before today, when was the last time you visited your doctor [not an ER visit]?") on a 4-point scale (1 = in the last month, 2 = in the last 6 months, 3 = in the last year, 4 = more than a year ago). Traumatic Life Events Questionnaire (TLEQ) The TLEQ measures lifetime prevalence of trauma exposure. 26 Participants indicated whether they had ever experienced any of 21 potentially traumatic events. These include natural disasters (flood, hurricane, and earthquake), motor vehicle accidents (which required medical attention or resulted in injury), other accidents (eg, plane crash, home fire, and chemical leak), living or working in a war zone, sudden/unexpected death of a loved one, life-threatening or permanently disabling accident or illness of a loved one, life-threatening illness, robbery with a weapon, physical assault by a stranger, witnessing a stranger attack someone else, being threatened with physical harm (by strangers, relatives, and partners), physical punishment that resulted in injury as a child, witnessing family violence as a child, experiencing intimate partner violence, sexual abuse before the age of 13, sexual abuse between the ages of 13 and 18, sexual abuse after the age of 18, stalking, miscarriage, and abortion. The TLEQ contains follow-up questions regarding each type of traumatic event. For example, the sexual abuse questions ask participants to indicate their relationship with the perpetrator (eg, stranger, friend, relative, intimate partner, etc.) and indicate if they were physically injured. The TLEQ has shown strong test-retest reliability and good content validity across various trauma populations. 26 The specific traumatic events were divided into 5 dichotomous categories scored as 0 = never experienced and 1 = ever experienced that trauma subtype: interpersonal trauma (IPT), noninterpersonal trauma (Non-IPT), combat, community violence, and illness-related. IPT included traumas perpetrated by individuals with whom the victim was acquainted (eg, family member, friend), such as sexual assault, childhood abuse, intimate partner violence, and stalking. Non-IPT involved natural disasters and serious accidents. Combat exposure referred to combat experiences that may have occurred in an active war zone or during warfare. Community violence exposure was characterized as exposure to a violent crime (with or without weapons use) that was perpetrated by a person outside of one's immediate family. 27 This included the murder of a loved one, robbery, physical and sexual assault by a stranger, and childhood abuse by a stranger. The illness-related traumas were the sudden death of a loved one, facing a life-threatening illness of self or a loved one, miscarriage, or abortion. Brief Symptom Inventory (BSI) The BSI is a 53-item questionnaire that assesses current psychological health across 9 symptom dimensions. 28 The BSI has demonstrated good test-retest reliability, high internal consistency, and good convergent validity with related measures. 29,30 Only the anxiety dimension was utilized in this study. Participants rated the severity of their anxiety symptoms over the last week on a scale from 0 (not at all) to 3 (extremely). A mean anxiety score was computed by summing the ratings and dividing by the number of items (Cronbach α = .91). Center for Epidemiologic Studies Depression Scale (CESD) The CESD is a 20-item measure of current depressive symptoms. 31 
Participants rated the severity of symptoms over the last week from 1 (rarely or never [<1 day]) to 4 (most or all the time [5-7 days]). The ratings were summed and divided by the number of items to compute a mean depression score, with higher overall scores reflecting greater severity (Cronbach α = .90). The CESD has shown good test-retest reliability, adequate internal consistency, and moderate construct validity. 31 Short Form-12 (SF-12) Health Survey The SF-12 is a brief, self-report measure of overall physical and mental functioning. 32 It has been shown to be reliable and valid in diverse clinical and community populations. 33 The present study used 3 items from the SF-12 to represent physical health impairments. The first item asked about the status of general health (1 = excellent, 2 = very good, 3 = good, 4 = fair, and 5 = poor). The second item assessed the extent to which pain interfered with work over the last 4 weeks on a 5-point scale (1 = not at all to 5 = extremely). The third item measured the extent to which physical and emotional problems interfered with social functioning on a 6-point scale (1 = all of the time to 6 = none of the time). Study Hypotheses Compared to those without a trauma history, individuals with a trauma history would report lower levels of overall health. Compared to participants without a trauma history, we expected trauma survivors to report a greater level of pain interference. Additionally, we expected Non-IPT survivors to report a greater level of pain interference compared to other types of trauma survivors. We expected trauma survivors to report that physical/emotional health issues were interfering with their social activities. Compared to Non-IPT survivors, we expected all the other types of trauma survivors to report a greater level of physical/emotional health issues that interfered with social activities. We hypothesized that compared to individuals with no trauma history, trauma survivors would have greater time since last nonemergency medical visit. We predicted that compared to other types of traumas, IPT would be most closely associated with longer time since previous medical visit. We also predicted that anxiety and depression would moderate the relationship between trauma history and time since last medical visit. RESULTS Data were collected from 248 participants (69% female). The average age of participants was 38.03 years (SD = 15.40). Twenty-seven percent self-identified as Black, 27% as White, 24% as Hispanic, 16% as Asian or Pacific Islander, 3% as Multiracial, and 3% as "other." In terms of annual household income, 12% earned less than 10,000 dollars, 22% earned between 10,001 and 20,000 dollars, 26% earned between 21,001 and 40,000 dollars, 23% earned between 40,001 and 60,000 dollars, and 17% earned over 60,000 dollars. The sample varied in terms of highest level of education completed. Twenty-three percent were high school graduates, 35% had completed some college, 26% had a college degree, and 16% had completed some graduate coursework. In regard to primary health insurance, 50% reported having private health insurance, 18% had Medicare, and 21% had Medicaid. All means and SDs were found to be plausible. Data were inspected for any out-of-range values or univariate outliers using IBM SPSS Statistics 21. No out-of-range values or outliers were found. Further, less than 5% of data were missing; therefore, analyses proceeded as planned using listwise deletion. 
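A minimal sketch of the data-screening step just described (out-of-range checks, missing-data rate, and listwise deletion). The file name, variable names, and value ranges are hypothetical stand-ins, and the original screening was performed in IBM SPSS Statistics 21 rather than Python.

```python
import pandas as pd

df = pd.read_csv("survey.csv")  # hypothetical file: one row per participant

# Out-of-range checks against each item's allowed response range (illustrative ranges).
ranges = {"general_health": (1, 5), "pain_interference": (1, 5), "social_interference": (1, 6)}
for col, (lo, hi) in ranges.items():
    n_bad = ((df[col] < lo) | (df[col] > hi)).sum()
    print(f"{col}: {n_bad} out-of-range values")

# Overall proportion of missing data, then listwise deletion for the regression analyses.
missing_rate = df.isna().mean().mean()
print(f"overall missing rate: {missing_rate:.1%}")
analysis_df = df.dropna()  # listwise deletion
```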
Table 1 shows the bivariate correlations among potential covariates and primary variables of interest. Covariates that were significantly related to primary variables of interest were controlled for in subsequent analyses (described below). Table 2 shows the prevalence of traumatic events in the total sample and is separated by sex. Natural disasters, car accidents, sudden death of a loved one, witnessing family violence, and experiencing interpersonal violence and sexual abuse were common in our sample. Only 6% of the sample did not report experiencing any traumatic events. Hypothesis 1 It was hypothesized that a history of trauma exposure (compared to no history of trauma exposure) would be predictive of lower reported perceptions of overall health. We categorized traumatic event exposure into dichotomous categories (0 = no exposure, 1 = exposure) and then used these dichotomous variables to assess whether trauma exposure predicts physical health of the participants. Perceptions of overall health were regressed onto history of trauma exposure. Income and race/ethnicity were included as covariates because they were negatively correlated with perceptions of overall health (see Table 1). Results supported this hypothesis, suggesting that history of trauma exposure was associated with a decrease in perceptions of overall health (b = 0.14, P < 0.05), holding ethnicity and income constant. Hypothesis 2 Pain interference was regressed onto history of trauma exposure to test whether a history of trauma exposure was associated with greater pain interference at work. Age was included as a covariate because it was positively associated with pain interference at work (see Table 1). As predicted, history of trauma exposure was related to an increase in pain interference at work (b = 0.14, P < 0.05), holding age constant. Furthermore, it was hypothesized that after controlling for other trauma types, participants exposed to Non-IPT would report greater levels of pain interference at work. Endorsement of Non-IPT exposure (b = 0.17, P < 0.01) was related to an increase in pain interference at work after controlling for age and other trauma types. Additionally, endorsement of IPT exposure was also a predictor of increase in pain interference at work (b = 0.16, P < 0.05). It was hypothesized that history of trauma exposure would be indicative of physical/emotional interference with social activities. Therefore, physical/emotional interference with social activities was regressed onto history of trauma exposure. Sex was included as a covariate because it was negatively associated with physical/emotional interference with social activities (r = −0.16, P < 0.05). History of trauma exposure trended toward significance as a predictor of physical/emotional interference with social activities (b = −0.12, P = 0.07) after controlling for sex. However, this association was in the opposite direction of what was predicted (ie, history of trauma exposure predicted less physical/emotional interference with social activities). It was further hypothesized that after controlling for Non-IPT exposure, participants who endorsed any other trauma type would report greater levels of physical/emotional interference with social activities. Contrary to expectation, only a history of IPT exposure was significantly associated with physical/emotional interference in social activities (b = −0.17, P < 0.05), and this association was in an unexpected direction. 
That is, a history of IPT exposure predicted less physical/emotional interference with social activities. Hypothesis 3 Participants with a history of trauma exposure (compared with those without a history of trauma exposure) were predicted to report longer time elapsed since their last medical visit. Sex and age were both negatively associated with time elapsed since last medical visit (r = −0.19, P < 0.001; r = −0.20, P < 0.001, respectively) and were, therefore, controlled for in the regression analysis. Results demonstrated that a history of trauma exposure was associated with an increase in time elapsed since last medical visit (b = −0.15, P < 0.05) after controlling for sex and age. Additionally, we predicted that after controlling for all other trauma types, a history of IPT would specifically predict time elapsed since last medical visit. This hypothesis was not supported (b = 0.04, P = 0.45). Depression and anxiety symptoms were hypothesized to moderate the relationship between history of trauma exposure and time elapsed since last medical visit. We tested this hypothesis using 2 separate hierarchical multiple regressions. Consistent with Aiken and West, 34 the moderators (ie, depression symptoms, anxiety symptoms) were mean centered. In the first hierarchical multiple regression, depression was examined as a moderator between history of trauma exposure and time elapsed since last medical visit. Covariates (ie, sex, age) were entered in the first step of the model as predictor variables and time elapsed since last medical visit served as the outcome variable. History of trauma exposure and symptoms of depression were entered as predictor variables in the second step of the model. In the third step of the model, the interaction term (ie, the product of the mean-centered depression symptoms variable and history of trauma exposure variable) was entered as a predictor variable. Results from this moderation analysis are presented in Table 3. Sex and age were both significant predictors of time elapsed since last medical visit. History of trauma exposure was also a significant predictor of time elapsed since last medical visit (b = −0.15, P < 0.05). Neither depression symptoms nor the interaction term emerged as significant predictors of time elapsed since last medical visit. The second hierarchical multiple regression model tested whether anxiety symptoms moderated the relationship between history of trauma exposure and time elapsed since last medical visit. Covariates (ie, sex, age) were entered in the first step of the model as predictor variables and time elapsed since last medical visit served as the outcome variable. History of trauma exposure and symptoms of anxiety (mean centered) were entered as predictor variables in the second step of the model. In the third step of the model, the interaction term (ie, the product of the mean-centered anxiety symptoms variable and history of trauma exposure variable) was entered as a predictor variable. Results from this moderation analysis are presented in Table 4. Similar to the first moderation analysis, sex, age, and history of trauma exposure were significant predictors of time elapsed since last medical visit. Neither anxiety symptoms nor the interaction term was a significant predictor of time elapsed since last medical visit. 
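A rough sketch of the hierarchical moderation analysis described above (mean-centered moderator per Aiken and West, interaction term entered in the final step). The column names are hypothetical, and the original analyses were run in SPSS; this is only an illustrative Python equivalent.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical analysis file

# Mean-center the moderator and build the trauma-history x depression interaction term.
df["depression_c"] = df["depression"] - df["depression"].mean()
df["trauma_x_dep"] = df["trauma_history"] * df["depression_c"]

# Step 1: covariates only; Step 2: add main effects; Step 3: add the interaction.
step1 = smf.ols("time_since_visit ~ sex + age", data=df).fit()
step2 = smf.ols("time_since_visit ~ sex + age + trauma_history + depression_c", data=df).fit()
step3 = smf.ols(
    "time_since_visit ~ sex + age + trauma_history + depression_c + trauma_x_dep", data=df
).fit()

for i, fit in enumerate((step1, step2, step3), start=1):
    print(f"step {i}: R^2 = {fit.rsquared:.3f}")
print(step3.summary())  # the trauma_x_dep coefficient tests the moderation hypothesis
```

The anxiety model follows the same pattern with the anxiety score substituted for depression.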
DISCUSSION Traumatic experiences were prevalent in this ethnically diverse sample of patients seeking medical and dental treatment at a large academic medical center. Overall, trauma history was associated with poorer self-reported health. Global self-report ratings of overall health are highly predictive of mortality, 35 underscoring the relationship between traumatic events and physical health symptoms, and the need for interventions to help traumatized individuals better manage stress. Ethnic minority status and lower household income were associated with poorer self-reported health, and had to be controlled for in our analyses. These groups of patients may have limited access to healthcare resources, which might exacerbate untreated trauma symptoms. Both IPT and Non-IPT were associated with self-reported pain interference at work. Although exposure to car accidents and natural disasters has obvious implications for bodily pain, the link between IPT and work-related pain interference may be less obvious to medical practitioners. Because IPT still involves bodily violation, it may be linked to experiences of pain in specific parts of the body. 36 Over time, experiences of pain may be triggered more generally, based on posttraumatic neurophysiological changes. 37 Another interesting finding was that a history of trauma exposure, particularly a history of IPT, appeared to increase self-reported social functioning. There is growing evidence that many people display psychological resilience after trauma. 6 Factors that encourage resilience include hardiness and a strong family and social support network. Perhaps in our nonpsychiatric sample, many trauma survivors attempted to engage in positive, resilient ways of coping, including seeking social support. Future studies should explore this link. As we expected, traumatic events negatively influenced time since last nonemergency medical visit. Thus, even though survivors reported poorer overall health, they do not go to the doctor as frequently. In our study, depression and anxiety did not mediate the relationship between trauma history and medical care. It may be that many trauma survivors feel that their life is generally "unpredictable" and "uncontrollable" and may distance themselves from the negative affect that accompanies an unpredictable environment. These individuals may try to avoid feelings of depression and anxiety, but the physiological consequence of trauma may still result in poorer self-reported health. 38 Future research should focus on measuring emotional avoidance and its potential role in physical health symptoms. It may also be that trauma history has a more direct relationship with lack of medical care. For example, it is possible that trauma survivors are experiencing greater current life stressors (eg, financial and housing difficulties), which may influence their engagement in medical care. Future work should also consider current life circumstances to more fully understand this relationship. From a clinical perspective, healthcare providers may rely on depression and anxiety as "red flags" to deduce if a patient may have a trauma history. Providers may also rely on self-reported depression and anxiety as the sole indicators that a patient may benefit from mental health referrals. The results of our study suggest that patients who do not present with anxiety or depression may still be experiencing trauma-related physical impairments. Consistent with the principles of TIC, clinicians should have an awareness of the prevalence of trauma and the varied ways patients present in medical care. 
For example, some patients may present as overtly emotionally distressed, and others may not. The results of this study suggest that routine trauma screening may be appropriate in primary care medical and dental settings, particularly when providers are aware of resources and referrals available to survivors and they have a trusting, long-term relationship with the patient. 20 This study had several limitations. The data were self-reported, so there may have been a bias toward underreporting trauma or other symptoms. However, levels of self-reported traumatic events appear to be comparable to those in other community-based samples. 39 In addition, the data are cross-sectional, so causality cannot be determined. We do not know if traumatic events cause a lack of routine medical care, or if a lack of routine medical care is associated with chaotic or stressful life circumstances (eg, poverty, housing instability) that may increase vulnerability to future victimization. Finally, although our sample was ethnically representative of the patients we serve, women were more likely to complete our survey. To protect patient confidentiality, we did not collect identifying information about patients who refused to take the survey (eg, age, sex). Although these limitations constrain the external validity of our findings, future studies can build on the foundation provided by the present study. CONCLUSIONS This study adds to existing knowledge about the prevalence of prior traumatic experiences and associated factors among patients seen in outpatient primary care medical and dental settings. The prior trauma experienced by these patients may not always be evident due to the lack of presenting signs and symptoms. Future work should focus on large, epidemiological surveys of men and women, surveying them about trauma history, current life stressors, mental health status, and self-reported health and healthcare usage. Further studies can include chart reviews to objectively assess healthcare utilization. Mixed-methods and qualitative research should include focus groups and in-depth interviews with this population to explore how various types of traumatic events may be related to self-care, stress, ways of coping, and health. Future training and research in the area of TIC is essential to guide the development of effective interventions to provide high-quality, patient-centered care to patients with prior histories of traumatic life experiences.
2016-05-12T22:15:10.714Z
2015-05-01T00:00:00.000
{ "year": 2015, "sha1": "3ece052b756d31ea590118aeaab087cf1f863e17", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/md.0000000000000734", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3ece052b756d31ea590118aeaab087cf1f863e17", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
2964277
pes2o/s2orc
v3-fos-license
Vitamin D Decreases Serum VEGF Correlating with Clinical Improvement in Vitamin D-Deficient Women with PCOS: A Randomized Placebo-Controlled Trial Vascular endothelial growth factor (VEGF) has been suggested to play a role in the pathophysiology of polycystic ovary syndrome (PCOS) and may contribute to increased risk of ovarian hyperstimulation syndrome (OHSS) in affected individuals. Vitamin D (VitD) supplementation improves multiple clinical parameters in VitD-deficient women with PCOS and decreases VEGF levels in several other pathologic conditions. Unveiling the basic mechanisms underlying the beneficial effects of vitamin D on PCOS may enhance our understanding of the pathophysiology of this syndrome. It may also suggest a new treatment for PCOS that can improve it through the same mechanism as vitamin D and can be given regardless of vitamin D levels. Therefore, we aimed to explore the effect of VitD supplementation on serum VEGF levels and assess whether changes in VEGF correlate with an improvement in characteristic clinical abnormalities of PCOS. This is a randomized placebo-controlled trial conducted between October 2013 and March 2015. Sixty-eight VitD-deficient women with PCOS were recruited. Women received either 50,000 IU of oral VitD3 or placebo once weekly for 8 weeks. There was a significant decrease in serum VEGF levels (1106.4 ± 36.5 to 965.3 ± 42.7 pg·mL−1; p < 0.001) in the VitD group. Previously reported findings of this trial demonstrated a significant decrease in the intermenstrual intervals, Ferriman-Gallwey hirsutism score, and triglycerides following VitD supplementation. Interestingly, ∆VEGF was positively correlated with ∆triglycerides (R² = 0.22; p = 0.02) following VitD supplementation. In conclusion, VitD replacement significantly decreases serum VEGF levels, correlating with a decrease in triglycerides in women with PCOS. This is a novel molecular explanation for the beneficial effects of VitD treatment. It also suggests the need to investigate a potential role of VitD treatment in reducing the incidence or severity of OHSS in VitD-deficient women with PCOS. Introduction Polycystic ovary syndrome (PCOS) is a common endocrinopathy affecting 6%-10% of reproductive-aged women [1,2]. It is characterized by menstrual dysfunction, subfertility, polycystic ovaries and hyperandrogenism [3]. PCOS is also associated with an increased risk of depression, anxiety, endometrial carcinoma, cardiovascular disease, and multiple metabolic disorders such as insulin resistance, type II diabetes mellitus, dyslipidemia, high blood pressure and fatty liver [2,3]. The pathogenesis of PCOS is not well understood, but accumulating evidence suggests that vascular endothelial growth factor (VEGF) dysregulation may play a role in its genesis [4]. VEGF is an approximately 46-kDa heparin-binding homodimeric protein that exists in six different isoforms: VEGF-A, VEGF-B, VEGF-C, VEGF-D, VEGF-E and placental growth factor [5][6][7]. It is a leading regulator of angiogenesis; indeed, its essential role has been demonstrated in physiological, developmental and pathological angiogenesis [8]. VEGF is a robust mitogen primarily for vascular endothelial cells [5]; it acts by binding to tyrosine kinase receptors, KDR (kinase domain region) and Flt-1 (fms-like-tyrosine kinase) receptors [8]. Ovaries of women with PCOS manifest upregulation of VEGF, associated with increased vascularity as measured by ultrasound Doppler blood flow and confirmed by histologic studies [9,10]. 
In addition, the hyperthecotic stroma of these ovaries overexpresses VEGF [11]. Furthermore, it has been shown that women with PCOS exhibit increased VEGF levels in serum and/or follicular fluid [12,13]. VEGF dysregulation in women with PCOS has also been correlated with an increased risk of ovarian hyperstimulation syndrome (OHSS) following follicular stimulation [14]. Vitamin D deficiency is more prevalent among women with PCOS when compared to controls [15]. The aforementioned deficiency has been associated with increased hirsutism score, insulin resistance and body mass index (BMI) in these women [16]. Additionally, vitamin D supplementation has been shown to improve blood pressure profiles and decrease insulin resistance, total testosterone and androstenedione levels in vitamin D-deficient women with PCOS [17,18]. However, the basic mechanisms underlying the beneficial effects of vitamin D in PCOS are still obscure. Resolving this mechanism may provide insight into the pathophysiology of this syndrome. It could also offer a new therapeutic option for women with PCOS who have normal vitamin D levels, such as a medication that improves PCOS through the same mechanism as vitamin D. Interestingly, VEGF has been widely implicated in the pathogenesis of diabetes; for example, a negative correlation has been described between serum VEGF and vitamin D levels in diabetic patients [19,20]. In addition, vitamin D administration has been shown to decrease VEGF production by human lumbar annulus cells and various human cancer cells [21,22]. Thus, it could be speculated that vitamin D may ameliorate PCOS symptoms by regulating VEGF. Taken together, vitamin D supplementation in vitamin D-deficient women with PCOS could reduce serum VEGF levels, thus improving PCOS characteristic clinical manifestations. Of note, this is an extension of our previous work in the same patient cohort showing that transforming growth factor-β1 bioavailability decreases following vitamin D treatment [23]. Study Subjects This study was a randomized, single-blind, placebo-controlled trial designed to assess the impact of vitamin D supplementation on serum VEGF levels and PCOS characteristic clinical manifestations in vitamin D-deficient women with PCOS [23]. All participants signed an informed consent form at the time of recruitment. The institutional review board (IRB) of Maimonides Medical Center approved the study. Ninety-three reproductive-aged women (18-38 years) diagnosed with PCOS according to the Rotterdam criteria who presented to the Maimonides Women's Health Center for a well-woman visit between October 2013 and March 2015 were screened for vitamin D deficiency [24]. Women were considered vitamin D-deficient when their serum 25-hydroxyvitamin D (25OH-D) levels were less than 20 ng·mL−1. We excluded women who were: (1) pregnant, postpartum or breastfeeding; (2) taking metformin, vitamin D supplements or any hormonal therapy. Interventions and Blood Collection A total of 68 women with PCOS were diagnosed with vitamin D deficiency. Participants were randomly allocated using a ratio of 2/1 (vitamin D/placebo) into the following two arms: (1) 50,000 IU of vitamin D3 once weekly for 8 weeks as per the Endocrine Society guidelines [25] and (2) placebo once weekly for 8 weeks. The placebo capsule looked similar to the vitamin D capsule but contained only lactose monohydrate powder (Gallipot Inc., Saint Paul, MN, USA). All participants were contacted once weekly and reminded to take their pill. 
Fasting blood samples were collected by venipuncture in both arms before starting and within two weeks of completing the treatment. Blood samples were allowed to clot for 30 min at room temperature before centrifugation at 1200 rpm for 10 min. Serum was stored in aliquots at −80 °C until assayed. Clinical Parameters The clinical parameters related to PCOS were assessed before and two months after the completion of treatment. These parameters included Ferriman-Gallwey score (FGS) (hirsutism score), acne status, blood pressure (BP), and intermenstrual intervals. Statistical Analysis Data were tested for normality. All values were expressed as mean ± standard error of the mean (SEM). A paired Student's t-test was used to compare the clinical parameters and serum levels evaluated before and after completing the treatment. Correlations between changes in serum VEGF and changes in disease clinical parameters were analyzed using Pearson's test and linear regression. All data analyses were performed with STATA statistical software version 14 (StataCorp LP, College Station, TX, USA). p < 0.05 was considered to be statistically significant. Demographics and Changes in Serum 25OH-D Levels As previously described, 53 participants completed the study: 35 in the vitamin D group and 18 in the placebo group [23]. BMI was comparable between the vitamin D and placebo groups (30 ± 1 and 28 ± 1.6 kg·m−2, respectively, p = 0.33). Women in both groups had comparable demographic characteristics such as age, skin color, ethnicity, smoking, daily milk consumption, and history of infertility (p ≥ 0.2). There was a significant increase in serum 25OH-D level reaching the normal range following vitamin D supplementation (16.3 ± 0.9 to 43.2 ± 2.4 ng·mL−1; p < 0.01) while it did not significantly change following placebo (17 ± 1.8 to 17.4 ± 1.9 ng·mL−1; p = 0.85) [23]. Changes in VEGF Following Vitamin D Supplementation There was a significant decrease in serum VEGF levels (1106.4 ± 36.5 to 965.3 ± 42.7 pg·mL−1; p < 0.001) in the vitamin D group, but not in the control group (893.1 ± 90.2 to 866 ± 70.8 pg·mL−1; p = 0.83) (Figure 1). Correlation and linear regression analyses were used to evaluate the relationship between the change in serum VEGF and changes in clinical and/or biochemical parameters in women with PCOS. The decrease in serum VEGF levels was positively correlated with the decrease in triglycerides (R² = 0.22; p = 0.02) following vitamin D supplementation (Figure 2). The decrease in VEGF was not correlated with the decrease in FGS (p = 0.25), intermenstrual intervals (p = 0.7), or any other PCOS clinical or biochemical parameters. 
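A compact sketch of the pre/post comparison and the Δ-Δ correlation described in the Statistical Analysis section (a paired t-test on VEGF before and after supplementation, then a Pearson correlation of ΔVEGF against Δtriglycerides). The file and column names are hypothetical, and the original analyses were performed in STATA 14 rather than Python.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("vitd_pcos.csv")  # hypothetical: one row per participant in the vitamin D arm

# Paired comparison of serum VEGF before vs. after 8 weeks of supplementation.
t, p = stats.ttest_rel(df["vegf_pre"], df["vegf_post"])
print(f"paired t-test: t = {t:.2f}, p = {p:.3g}")

# Correlation between the change in VEGF and the change in triglycerides.
d_vegf = df["vegf_post"] - df["vegf_pre"]
d_tg = df["tg_post"] - df["tg_pre"]
r, p = stats.pearsonr(d_vegf, d_tg)
print(f"Pearson r = {r:.2f} (R^2 = {r**2:.2f}), p = {p:.3g}")
```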
Discussion The current study examined the effect of vitamin D supplementation on serum VEGF levels and its correlation with the changes in PCOS clinical and biochemical manifestations in vitamin D-deficient women with PCOS. Our data show that vitamin D supplementation decreases serum VEGF levels and this change positively correlates with the decrease in triglycerides. Our findings, showing a significant decrease in VEGF following vitamin D treatment, are consistent with multiple previous studies [19,21,22,27,28]. In fact, Shao et al. have recently shown that serum 25OH-D3 negatively correlates with VEGF in diabetic patients; such correlation suggests that the protective effects of vitamin D in terms of decreasing proteinuria and delaying the progression of diabetic kidney disease may be mediated through its suppression of abnormal angiogenesis, inflammation, and vascular endothelial dysfunction [19]. Moreover, Ren et al. have confirmed that 1,25(OH)2-D3 downregulates the expression of VEGF in the retinal tissues of diabetic rats, thereby protecting them against diabetic retinopathy [28]. Similarly, Yildirim et al. have demonstrated that 1,25(OH)2-D3 regresses endometriotic implants in rat models by impeding the expression of VEGF in these implants, thus inhibiting inflammation and neovascularization [27]. Likewise, Ben-Shoshan et al. have shown that 1,25(OH)2-D3 inhibits VEGF expression in various human cells (breast, colon, and prostate) under normoxic and hypoxic conditions [21]. Gruber et al. have also shown that 1,25(OH)2-D3 decreases the production of VEGF in human lumbar annulus cells [22]. There is compelling evidence suggesting an important role of VEGF in the pathophysiology of PCOS [4,[11][12][13][14]29]. VEGF, the prototypical member of the angiogenic factors, may be implicated in the increased ovarian mass supported by excessive neovascularization in stroma and theca of PCOS ovaries. Serum levels of VEGF have been reported to be elevated in women with PCOS compared with normal women [12,29]. VEGF levels are also elevated while its circulating receptor Flt-1 levels are decreased in the follicular fluid of women with PCOS undergoing controlled ovarian hyperstimulation compared with controls, which may explain their increased risk of ovarian hyperstimulation syndrome [13,14,30,31]. Furthermore, PCOS ovaries overexpress VEGF mRNA particularly in hyperthecotic stroma cells and some follicular granulosa cells [11]. In addition, endocrine gland-VEGF, which is an endothelial cell mitogen with selectivity for endothelium of steroidogenic glands, has been shown to be overexpressed in the theca interna and stroma of PCOS ovaries [32]. Our data showing that the decrease in VEGF after vitamin D treatment is correlated with a decrease in triglycerides are in line with the literature supporting the role of VEGF in the pathogenesis of PCOS. Vitamin D treatment has been shown to improve the characteristic clinical manifestations of PCOS in vitamin D-deficient women [17,18,23,33]. Selimoglu et al. have shown that the administration of a single dose of 300,000 IU of vitamin D3 was associated with a significant decrease in insulin resistance [18]. Pal et al. have also shown that the daily administration of 8533 IU of vitamin D and 530 mg of elemental calcium for 3 months was associated with a significant reduction in blood pressure parameters and total testosterone levels [17]. Furthermore, vitamin D3 supplementation with 50,000 IU once weekly for 8 weeks improved hirsutism and acne and decreased triglycerides and intermenstrual intervals in women with PCOS [23]. The fact that vitamin D also decreases VEGF, correlating with an improvement in triglyceride levels, supports the theory that the beneficial effects of vitamin D may be partly exerted through decreasing VEGF and subsequently inhibiting abnormal ovarian angiogenesis. Additionally, laparoscopic ovarian drilling in women with PCOS has been suggested to exert its effects via a decrease in VEGF and the associated abnormal ovarian vasculature [34]. 
However, in order to gain further insight into the mechanism underlying the beneficial effects of vitamin D in PCOS, in-depth molecular studies on vitamin D's effects on human ovarian cells in culture as well as PCOS animal models are warranted. The main limitation of this trial was its failure to adjust for the potential impact of seasonal variation on vitamin D levels. Seasonal changes and the skin's exposure to sunlight can influence the skin's production of vitamin D3 [25]. In conclusion, we have demonstrated that vitamin D supplementation in vitamin D-deficient women with PCOS significantly decreases serum VEGF levels, correlating with a significant decrease in serum triglycerides. These data suggest a possible molecular mechanism by which vitamin D mitigates PCOS symptoms. Our findings support the role of VEGF in the pathophysiology of PCOS. They also underscore the need to investigate a potential role of vitamin D treatment in the incidence or severity of ovarian hyperstimulation syndrome in women with PCOS undergoing follicular stimulation.
2017-03-31T08:35:36.427Z
2017-03-28T00:00:00.000
{ "year": 2017, "sha1": "86fe5998f7f881dce05b95f1786e89670fe54177", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/9/4/334/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "86fe5998f7f881dce05b95f1786e89670fe54177", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226371524
pes2o/s2orc
v3-fos-license
Canine cutaneous neoplasms in the metropolitan region of Goiânia, Goiás state, Brazil

ABSTRACT.- The present study aimed to describe the occurrence and epidemiological features of skin neoplasms diagnosed in dogs in the metropolitan region of Goiânia, Goiás state, Brazil. Diagnoses from dog biopsies from 2011 to 2016 provided by a private veterinary pathology laboratory were analyzed. The main diagnoses were mast cell tumor, hemangiosarcoma, squamous cell carcinoma, malignant melanoma, and hemangioma. The highest frequency of neoplasms was found in female dogs, dogs aged >8 years, and purebred dogs, particularly American Pit Bull Terriers and Poodles. The most common sites affected by the neoplasms were the limbs and the head. Using multiple correspondence analysis, groups of neoplasms were found to be associated with different epidemiological features, and the size of the neoplasms was associated with their biological behavior. The results of this study describe breed, sex, age, and anatomical predispositions and verify the importance of different types of skin neoplasms in dogs in the region being studied.

INTRODUCTION
Understanding the most common lesions affecting the animals in a particular region is an important tool in diagnostic investigation. The prevalence of canine cutaneous neoplasms has been reported in several countries. However, these studies showed differences in the frequency of the different types of neoplasms (Bostock 1986, Rothwell et al. 1987, Dobson et al. 2002, Kaldrymidou et al. 2002, Pakhrin et al. 2007, Graf et al. 2018). These differences may be explained by the interference of environmental factors (Souza 2005) and by the dog breeds present in the respective regions. The aim of the present study was to describe the occurrence and epidemiological features of skin neoplasms diagnosed in dogs in the metropolitan region of Goiânia, Goiás state, Brazil.

MATERIALS AND METHODS
The present study was conducted at "Laboratório de Histologia e Patologia Animal" of "Instituto Federal Goiano", Campus Urutaí.
The authors analyzed the reports from dog biopsy samples between 2011 and 2016 provided by a private veterinary pathology laboratory located in Goiânia, Goiás state. Information related to the histological diagnosis, breed, age, sex, size, and anatomical location of the neoplasms was obtained. For this study, primary cases of mammary neoplasms were excluded. Neoplasms were grouped according to cell origin into epithelial neoplasms, mesenchymal neoplasms, round cell tumors, and melanocytic neoplasms. According to the anatomical locations, neoplasms were categorized into those of head, neck, thorax, abdomen, limbs, perineum, scrotum, tail, and multiple locations (Fernandes et al. 2015). When more than one histological classification of cutaneous neoplasms was identified, each classification was considered separately. Age groups of the animals were <1 year, 1-8 years, and >8 years (Souza et al. 2006). Descriptive statistical analyses were used to evaluate the data. Additionally, multiple correspondence analysis was performed using R software (R Core Team, 2019). The following associations were tested: i) groups of neoplasms with anatomical locations, ii) groups of neoplasms with age and sex (males <1 year, males 1-8 years, males >8 years, females <1 year, females 1-8 years, and females >8 years), and iii) size (<1cm, 1-2cm, 3-4cm, and >5cm) with biological behavior (benign and malignant). Results were presented in two-dimensional perception maps, and Fisher's exact test was applied to the contingency tables to verify significant associations (p<0.05).

RESULTS

Dataset
From 2011 to 2016, 4336 canine tissue samples were received. Of these, 2138 (49.3%) were from the skin, and 1266 (59.2%) of the skin samples were diagnosed with cutaneous neoplasms. Fifty-nine (4.9%) of the 1200 reports analyzed in this study indicated that the animal had at least two histologically different cutaneous neoplasms.

Distribution of neoplasms according to anatomical location
Only 889 (70.2%) reports contained data about the anatomical location of the skin neoplasms. Among these, 22.2% (n=198) were localized to the limbs, 22.2% (n=198) to the head, 19.6% (n=175) to the perineum, 13.3% (n=119) to the abdomen, 5.3% (n=48) to the thorax, 5.0% (n=45) to the neck, 3.9% (n=35) to the scrotum, and 1.3% (n=12) to the tail. Neoplastic lesions in multiple locations were found in 6.6% (n=59) of the cases. The three most frequent neoplasms in each location are shown in Table 3.

Multiple correspondence analysis
Groups of neoplasms showed a significant association (p<0.01) with anatomical locations, age, and sex. Head, perineum, and tail were associated with epithelial neoplasms; the abdomen with mesenchymal and melanocytic neoplasms; and the limbs, thorax, scrotum, and multiple locations with round cell tumors. Irrespective of sex, dogs aged <1 year were more likely to develop round cell tumors and dogs aged >8 years were more likely to develop mesenchymal and melanocytic neoplasms. Dogs aged 1-8 years and the neck location showed a random relationship with the groups of neoplasms. Among the neoplasms, 56.1% (n=711) were malignant and 33.5% (n=425) were benign; non-specified neoplasms and epitheliomas were excluded. The size of the neoplasms was mentioned in 19.8% (n=251) of reports. A significant association was observed between size and biological behavior (p<0.01): neoplasms <1cm were more strongly associated with benign behavior and those >5cm with malignant behavior, while neoplasms of 1-2cm and 3-4cm were associated with both types of biological behavior.
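To make the association testing described in the methods above concrete, the short R sketch below runs Fisher's exact test on a size-by-behavior contingency table of the kind analyzed here. The counts are invented for illustration only, and the commented FactoMineR call is one possible way to produce a multiple correspondence analysis; neither reproduces the authors' actual data or script.

```r
# Hypothetical contingency table: tumor size class vs. biological behavior.
# Counts are illustrative only, not the study's data.
size_behavior <- matrix(
  c(40, 12,   # <1cm:  benign, malignant
    35, 30,   # 1-2cm: benign, malignant
    20, 38,   # 3-4cm: benign, malignant
     8, 68),  # >5cm:  benign, malignant
  ncol = 2, byrow = TRUE,
  dimnames = list(size = c("<1cm", "1-2cm", "3-4cm", ">5cm"),
                  behavior = c("benign", "malignant"))
)

# Fisher's exact test of the size x behavior association (significance at p < 0.05)
fisher.test(size_behavior)

# Multiple correspondence analysis on a categorical case-level data frame
# (hypothetical), giving the two-dimensional perception map described above.
# library(FactoMineR)
# mca_fit <- MCA(cases_df[, c("group", "location", "age_sex")], graph = TRUE)
```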
DISCUSSION
In the present study, we analyzed 1266 skin neoplasms in dogs in the metropolitan region of Goiânia, Goiás state, Brazil. The data were derived from a single pathology laboratory in the region. Although the prevalence of lesions diagnosed in other laboratories may be similar to that found here, we assume that the results presented are underestimates, since not every cutaneous tumor in dogs is sent for histological analysis. About 9.8% of the dogs in this study had more than one skin neoplasm, irrespective of whether they were of the same histological type. Similar results were found in other studies when the occurrence of non-neoplastic tumors was also considered (Souza et al. 2006, Machado et al. 2018). These data highlight the importance of clinical veterinarians and veterinary surgeons sending samples of all skin tumors for histological analysis, even when they may be of the same histological type. Skin tissue samples comprised almost 50% of all canine samples during the period under evaluation. Previous studies have shown that skin lesions constitute the highest number of pathological diagnoses in dogs (Meirelles et al. 2010, Silva et al. 2011, Graf et al. 2018, Machado et al. 2018). Neoplastic lesions can range from 49.9% to 75.8% of all skin lesions (Souza et al. 2006, Machado et al. 2018), a finding similar to the results of the present study. The high prevalence of skin neoplasms in dogs may be related to factors specific to the species, such as genetic predisposition, and to factors related to owners, such as the ease of observation of the lesions (Goldschmidt & Goldschmidt 2017). Moreover, the skin has a high rate of cell regeneration (Murphy 2006), is formed from numerous structural components (Bastos et al. 2017), and is directly exposed to oncogenic conditions (Martinez et al. 2006). Epithelial neoplasms were the most common neoplasms in dogs in the studied region. Although round cell tumors are known to be of mesenchymal origin (Hendrick 2017), our criteria for classifying the neoplasm groups according to cell origin were defined to assist in the clinical diagnostic routine. In studies using a similar division of groups, round cell tumors (Graf et al. 2018) and mesenchymal neoplasms presented high frequencies of 33.0% and 41.0%, respectively. Regarding the relative frequency of the types of neoplasms, our results are partially similar to those of previous studies. Mast cell tumor is described as the main cutaneous neoplasm in dogs (Bostock 1986, Mukaratirwa et al. 2005, Graf et al. 2018) and has a multifactorial etiology (Welle et al. 2008). The high occurrence of mast cell tumor in this study may be associated with the number of samples evaluated from dogs of predisposed breeds such as Boxers and Labrador Retrievers (Dobson et al. 2002). Squamous cell carcinoma, hemangioma, and hemangiosarcoma are among the most common canine cutaneous neoplasms in other Brazilian states (Andrade et al. 2012, Fernandes et al. 2015). However, they have a low prevalence in other countries (Bostock 1986, Pakhrin et al. 2007, Graf et al. 2018). The frequency of these neoplasms in dogs has a direct association with geographical location, as they are associated with prolonged sun exposure and with breeds having little skin pigmentation and short hair (Hargis et al. 1992, Goldschmidt & Goldschmidt 2017). Due to a low degree of differentiation, 7.9% of all evaluated neoplasms were diagnosed as non-specified neoplasia.
In these cases, immunohistochemistry is indicated for the identification of the cell origin, a test little used due to the high cost and low availability in the laboratories. However, in some situations, the histological aspects and the biological behavior of the neoplasm may be sufficient for the clinical veterinarian to determine the treatment for the animal (Meirelles et al. 2010). The evaluation of the distribution of neoplasms according to breeds, age groups, anatomical locations, and sex confirmed the previously reported predispositions such as sebaceous adenoma in Cocker Spaniels and in the head, hepatoid neoplasms in males (Goldschmidt & Goldschmidt 2017), mast cell tumor in Boxers and Labrador Retrievers (Kiupel 2017), histiocytoma in young animals, and lipoma in females (Hendrick 2017). However, due to the limited knowledge about the dog population in the studied region, there is a possibility of environmental bias in the samples. In the present study, groups of neoplasms showed an association with different epidemiological features. It is suggested that these results were observed due to the main histological diagnoses found in the analyzed variables such as mast cell tumor in the limbs, thorax and scrotum, lymphoma in multiple locations, and histiocytoma in dogs aged <1 year. It is noteworthy that some results may vary due to the large number of neoplasms with different behaviors in each group. Frequent association between the size of the cutaneous neoplasms in dogs and their biological behavior has been previously reported . However, the growth rate of neoplasms may depend on a number of factors such as blood supply, unknown influence, and hormonal stimulation (Stricker & Kumar 2010). Therefore, the size of the neoplasm should not be used as a criterion while deciding to send the skin tumors for histological analysis. Receiving examination requests with incomplete information is part of the routine for numerous veterinary pathology laboratories (Meirelles et al. 2010). This fact made it impossible to include other data such as the relationship between castration and the occurrence of neoplasms in the present study. It is worth remembering that epidemiological data, clinical history, and macroscopic characteristics of the lesion may help pathological diagnosis in numerous situations. CONCLUSIONS Cutaneous neoplasms constituted a large part of diagnoses in dogs in the metropolitan region of Goiânia, Goiás state. Mast cell tumor, hemangiosarcoma, and squamous cell carcinoma were the most common neoplasms. In addition to confirming the previously reported predispositions, our results revealed that the groups of neoplasms showed association with different epidemiological features and the size of the neoplasms showed association with the biological behavior. The present study may encourage new studies in future to improve animal welfare.
2020-10-29T09:02:26.200Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "f5b8726850eb00d7648345da20bd394049e80553", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/pvb/a/LYS7cgCNz8QJDBfqGhRS7ZN/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "55b00fb33ae8cabf018970cccc76178ef6b961bb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Geography" ] }
55512626
pes2o/s2orc
v3-fos-license
The role of elevated atmospheric CO2 and increased fire in Arctic amplification of temperature during the Early to mid-Pliocene

The mid-Pliocene is a valuable time interval for understanding the mechanisms that determine equilibrium climate at current CO2 concentrations. One intriguing, but not fully understood, feature of the early to mid-Pliocene climate is the amplified arctic temperature response. Current models underestimate the degree of warming in the Pliocene Arctic, and validation of proposed feedbacks is limited by scarce terrestrial records of climate and environment, as well as discrepancies in current CO2 proxy reconstructions. Here we reconstruct the CO2 and summer temperature from a re-dated 3.9 +1.5/-0.5 Ma sub-fossil fen-peat deposit on west-central Ellesmere Island, Canada, and investigate fire as a potential feedback to Arctic amplification of warming during the mid-Pliocene. Average CO2 was determined using isotope ratios of mosses to be 440 ± 50 ppm. The estimate for average mean summer temperature is 15.4 ± 0.8°C using specific bacterial membrane lipids, i.e. branched glycerol dialkyl glycerol tetraethers. Macrocharcoal was present in all samples from this Pliocene section, with notably higher charcoal concentration in the upper part of the sequence. This change in charcoal was synchronous with a change in vegetation that saw fire-promoting taxa increase in abundance. Paleovegetation reconstructions are consistent with warm summer temperatures, relatively low summer precipitation, and an incidence of fire comparable to fire-adapted boreal forests of North America, or potentially central Siberia. To our knowledge, this study represents the northernmost evidence of fire during the Pliocene and highlights the important role of forest fire in the ecology and climatic processes of the Pliocene High Arctic. The results provide evidence that terrestrial fossil localities in the Pliocene High Arctic were probably formed during warm intervals that coincided with relatively high CO2 concentrations that supported productive biotic communities. This study indicates that interactions between paleovegetation and paleoclimate were mediated by fire in the High Arctic during the Pliocene, even though CO2 concentrations were only ~30 ppm higher than modern.
Introduction Current rates of warming in the Arctic are almost double the rate of global warming.Since 1850, global land surface temperatures have increased by approximately 1.0°C, whereas arctic surface land temperatures have increased by 2.0°C (Jones and Moberg, 2003;Pagani et al., 2010).Such arctic amplification of temperatures has also occurred during other warm climate anomalies in Earth's past.Paleoclimate records from the Arctic indicate that the change in arctic summer temperatures during past global warm periods was 3-4 times larger than global temperature change (Miller et al., 2010).While the latest ensemble of earth system models (ESMs) provide fairly accurate predictions of the modern amplification of arctic temperatures hitherto observed (Marshall et al., 2014), they often under-predict the amplification of arctic temperatures during past warm intervals in Earth's history, including the Eocene (33.9-56 Ma;Huber, 2008;Shellito et al., 2009), and the Pliocene (2.6-5.3Ma; Dowsett et al., 2012;Salzmann et al., 2013) epochs.These differences suggest that either the models are not simulating the full array of feedback mechanisms properly for past climates, or that the full array of fast and slow feedback mechanisms have not fully engaged for the modern Arctic.If the later, the Arctic region has yet to reach the full amplification potential demonstrated in the past. The Pliocene is an intriguing climatic interval that may offer important insights into climate feedbacks.It captures the transition from the preceding Miocene to the Pleistocene when orbital regulators of climate transitioned from the 41,000 year obliquity cycle to the 100,000 year eccentricity cycle, respectively.Atmospheric CO2 values likewise varied (Royer et al., 2007) decreasing from values comparable to modern (Haywood et al., 2016;Pagani et al., 2010;Stap et al., 2016), to lower levels (Raymo et al., 2006); a state transition that may revert in the future under high CO2.Of additional importance, continental configurations were similar to present (Dowsett et al., 2016).While global mean annual temperatures (MATs) during the Pliocene were only ~ 3°C warmer than present day (Fig. 1), arctic land surface MATs may have been as much as 15 to 20°C warmer (Ballantyne et al., 2010;Csank et al., 2011a;Csank et al., 2011b;Fletcher et al., 2017).Further, arctic sea surface temperatures may have been as much as 10 to 15°C warmer than modern (Robinson, 2009), and sea-levels were approximately 25m higher than present (Dowsett et al., 2016).As such, the terrestrial environment of the Arctic was significantly different, with tree line ecosystems at much higher latitudes nearly eliminating the tundra biome (Salzmann et al., 2008). 
Several mechanisms have been proposed as drivers of arctic amplification, including vastly reduce sea-ice extent (Ballantyne et al., 2013), cloud and atmospheric water vapor effects (e.g.Feng et al., 2016;Swann et al., 2010), vegetation controls on albedo (Otto-Bliesner and Upchurch Jr, 1997), and increased meridional heat transport by the oceans (Dowsett et al., 1992) though it is now considered to be of lesser influence (Hwang et al., 2011).We propose that fire in arctic ecosystems may also be an important proximal mechanism for amplifying arctic surface temperatures during the Pliocene.Deposition of modern black carbon from industrial emissions caused a decrease in albedo of ice that may have accelerated ice melt over the 20th century (McConnell et al., 2007).However, in natural systems fire has complex impacts on local radiative budgets.It influences thermal regimes directly through changes in albedo, both due to the impact of vegetation change on albedo (Chapin III et al., 2005) and production of black carbon.The net radiative impact of forest fires in the modern boreal forest is thought to be a slight cooling due to enhanced albedo when canopy cover is lost, compensating for black carbon effects (Randerson et al., 2006).Fire also influences thermal regimes indirectly through altered vegetation interactions with the cryosphere (Brown et al., 2015;Fisher et al., 2016), and atmosphere (Bonan, 2008).Feng et al. (2016) found that the interaction between aerosols produced by forest fire and clouds may contribute to the reduced seasonality of the Pliocene High Arctic under lower aerosol, preindustrial boundary conditions.Given the sensitivity of arctic surface temperatures to sea-ice extent during the Pliocene (Ballantyne et al., 2013), the effect of industrial black carbon on ice melting (McConnell et al., 2007), and the disagreement between model simulations and observations of surface temperatures at high latitudes during the Pliocene (Dowsett et al., 2012), the role of fire in amplifying temperature in past warm intervals deserves investigation. Although it is generally thought that atmospheric CO2 concentrations of ~ 400 ppm provided the dominant global radiative forcing during the mid-Pliocene, CO2 proxies over the Pliocene do not necessarily agree (Fig. 1).Overall, reconstructions of Pliocene CO2 range between 190 and 440 ppm (Martinez-Boti et al., 2015;Seki et al., 2010).For example, Boron-based Pliocene CO2 reconstructions tend to be lower than (258 ± 35 ppm;mean ± s.d.;Tripati et al., 2009), the alkenone-based isotopic proxy average 357 ± 45 ppm (Pagani et al., 2010;Seki et al., 2010), whereas other estimates suggest values between 330-400 ppm (Seki et al., 2010).While CO2 estimates from stomata and paleosols tend to be less precise, they are within the range of boron and alkenone derived estimates (Royer, 2006).Thus there is no clear consensus on CO2 concentrations in Earth's atmosphere during the Pliocene.Dating uncertainties are an additional confounding factor, further complicating site to site comparisons.This suggests an additional hypothesis to explain why modern Arctic amplification and models agree, but the Pliocene amplification estimates and models disagree; that CO2 levels were higher than currently used for model boundary conditions during the deposition of the proxies used for past temperature estimation at sites with high dating uncertainties. 
Although direct effects may be small (Feng et al., 2017), reconstructing the CO2 from the same deposits from which paleoclimate and paleoecological proxies are derived, may help reconcile previous estimates and contribute to constraining climate sensitivities during the Pliocene. To advance understanding of arctic amplification during past warm intervals in Earth's history such as the Pliocene this investigation targets an exceptionally well-preserved arctic sedimentary sequence to simultaneously reconstruct atmospheric CO2, summer temperature, vegetation and fire disturbance history from a single site over its deposition. Site description To investigate the environment and climate of the Pliocene Arctic we focused on a fossil site, Beaver pond (BP), located at 78° 33′ N (Fig. 2) on Ellesmere Island.The stratigraphic section located at ~380 meters above sea level (MASL) today includes unconsolidated bedded sands and gravels, and rich organic layers including a thick fossil rich peat layer, up to 2.4 m thick, with sticks gnawed by an extinct beaver (Dipoides spp.).The assemblage of fossil plants and animals at BP has been studied extensively to gain insight into the past climate and ecology of the Canadian High Arctic (Ballantyne et al., 2006;Csank et al., 2011a;Csank et al., 2011b;Fletcher et al., 2017;Mitchell et al., 2016;Rybczynski et al., 2013;Tedford and Harington, 2003;Wang et al., 2017).Previous paleoenvironmental evidence suggests the main peat unit is a rich fen deposit with a neutral to alkaline pH, associated with open water (Mitchell et al., 2016), likely a lake edge fen or shallow lake fen, within a larchdominated forest-tundra environment (Matthews and Fyles, 2000), not a low pH peat-bog.While the larch species identified at the site, Larix groenlandia, is extinct (Matthews and Fyles, 2000), many other plant remains are Pliocene examples of taxa that are extant (Fletcher et al., 2017). The fen-peat unit examined in this study was sampled in 2006 and 2010.The unit sampled spanned the 1 m remaining of Unit III as per Mitchell et al. (2016).The main sequence examined across the methods used in this study includes material above (Unit IV) and below (Unit II) Unit III, with a total sampled profile of 1.65 m.Unit III has been estimated to represent ~20 000 years of deposition based on modern northern fen growth rates (Mitchell et al., 2016).The atmospheric CO2 estimates from this locality were based on 22 sample layers from the 2006 field campaign, and the charcoal was based on 31, while the temperature estimates from specific bacterial membrane lipids were taken from 22 of the sample layers collected in 2006 and an additional 12 samples collected in 2010.The same samples from the 2006 season were analyzed for each of CO2, mean summer temperature and char count where contents of the sample allowed.Pollen was tabulated in 10 samples of these 2006, located at different stratigraphic depths. Geochronology While direct dating of the peat was not possible, we were able to establish a burial age for fluvial sediments deposited approximately 4-5 m above and 30 m to the southwest of the peat.We used a method based on the ratio of isotopes produced in quartz by secondary cosmic rays.The cosmogenic nuclide burial dating approach measures the ratio of cosmogenic 26 Al (t½ = 0.71 Ma) and 10 Be (t½ = 1.38 Ma) in quartz sand grains that were exposed on hillslopes and alluvium prior to final deposition at BP. 
Once the quartz grains are completely shielded from cosmic rays, the ratio of the pair will predictably decrease because 26 Al has double the radiodecay rate of 10 Be.In 2008, four of the medium to coarse grained quartz samples were collected from a vertical profile of planar crossbedded fluvial sands between 8.7 and 10.4 m below the overlying till surface.The samples were 5 cm thick, separated by an average of 62 cm, and should closely date the peat (the sandy braided stream beds represent on the order of ~104 years from the top of the peat to the highest sample).Quartz concentrates were extracted from the arkosic sediment using Frantz magnetic separation, heavy liquids, and differential leaching with HF in ultrasonic baths.When sample aliquots reached aluminum concentrations <100 ppm (ICP-OES) as a proxy of feldspar abundance, the quartz concentrate was subjected to a series of HF digestion and rinsing steps to ensure that more than 30% of the quartz had been dissolved to remove meteoric 10 Be.Approximately 200.00 mg of Be extracted from a Homestake Gold Mine beryl-based carrier was added to 150.00 g of each quartz concentrate (no Al carrier was needed for these samples).Such large quartz masses were digested because of the uncertainty in the abundance of the faster decaying isotope.Following repeated perchloric-acid dry-downs to remove unreacted HF, pH-controlled precipitation, column chemistry ion chromatography to extract the Be and Al ions, precipitation in ultrapure ammonia gas, and calcination at temperatures above 1000°C in a Bunsen flame for three minutes, oxides were mixed with equal amounts of niobium and silver by volume.These were packed into stainless steel targets for measurement at Lawrence Livermore National Laboratory's accelerator mass spectrometer (AMS).Uncertainty estimates for 26 Al/ 10 Be were calculated as 1σ by combining AMS precision with geochemistry errors in quadrature.For a complete detailed description of TCN methods see Rybczynski et al. (2013).The ages provided here are updated from Rybczynski et al. ( 2013) by using more recent production rate information and considering the potential for increasing exposure to deeply penetrating muons during the natural post-burial exhumation at BP. 
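As a simple illustration of the burial-dating principle described above, the sketch below computes an apparent burial age from a measured 26Al/10Be ratio using the half-lives quoted in the text, ignoring post-burial production by muons (so it corresponds to a minimum age). The initial production ratio and the measured ratio are placeholder values for illustration, not the reported data.

```r
# Minimum burial age from the decay of the 26Al/10Be ratio.
# Half-lives as quoted in the text (Ma); decay constants lambda = ln(2)/t_half.
t_half_al26 <- 0.71
t_half_be10 <- 1.38
lambda_al26 <- log(2) / t_half_al26
lambda_be10 <- log(2) / t_half_be10

# Placeholder ratios for illustration only.
ratio_initial  <- 6.8   # assumed pre-burial 26Al/10Be production ratio
ratio_measured <- 3.0   # hypothetical measured ratio in the buried quartz

# After complete shielding, R(t) = R0 * exp(-(lambda_al26 - lambda_be10) * t),
# so the apparent (minimum) burial age in Ma is:
burial_age_ma <- log(ratio_initial / ratio_measured) / (lambda_al26 - lambda_be10)
burial_age_ma
```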
Atmospheric CO2 Reconstruction
In order to reconstruct atmospheric CO2 concentrations during the Pliocene, we derived a method based on the different sensitivity of isotopic discrimination of plant groups to their environment (Farquhar et al., 1989; Fletcher et al., 2008; White et al., 1994). Specifically, we used measurements of stable carbon isotopic discrimination in C3 vegetation to approximate the carbon isotopic signature of the atmosphere, and measurements of carbon isotopic discrimination in bryophytes to estimate the partial pressure of atmospheric CO2, which was then converted to atmospheric CO2 concentration. According to theory (Farquhar et al., 1989), plants discriminate (Δ13C) against the heavier isotope in atmospheric CO2, such that:

Δ13C = a + (b − a) × (pi/pa)    (1)

where a is the fractionation of atmospheric CO2 due to diffusion (~4.4 ‰) and b is the fractionation of atmospheric CO2 due to carboxylation by the enzyme rubisco (~27 ‰). This physical and chemical discrimination is then modulated by stomatal control of the partial pressure of intercellular CO2 (pi) with respect to the partial pressure of atmospheric CO2 (pa). Therefore isotopic discrimination in C3 plants (Δ13CC3) is largely a function of stomatal conductance, whereas in bryophytes that lack stomata (Δ13Cbryo) it is largely a function of the partial pressure of atmospheric CO2 (i.e. pa). While other environmental factors, such as humidity, temperature, light availability, and microclimate, also play important roles in isotopic discrimination in bryophytes (Fletcher et al., 2008; Ménot and Burns, 2001; Royles et al., 2014; Skrzypek et al., 2007; Waite and Sack, 2011; White et al., 1994), the first-order control on discrimination is the partial pressure of atmospheric CO2 (Fletcher et al., 2008; White et al., 1994). Because atmospheric CO2 is relatively well mixed in the troposphere, its mean annual concentration does not vary significantly with location. However, because total atmospheric pressure decreases with elevation, the partial pressure of all atmospheric gases, including atmospheric CO2, must also decrease with atmospheric height (h) according to the following exponential function:

pa(h) = pa(i) × exp(−h/H)    (2)

such that the partial pressure of atmospheric CO2 at any given height in the atmosphere (pa(h)) can be calculated from the initial (surface) partial pressure of atmospheric CO2 (pa(i)) and a reference scale height (H = 7600 m), at which atmospheric pressure falls to 1/e (≈0.37) of its surface value (Bonan, 2015). Therefore, assuming that carbon isotopic discrimination in bryophytes varies in response to the atmospheric partial pressure of atmospheric CO2 (pi/pa → pa), Eq. (2) can be substituted into Eq. (1), such that the natural logarithm of Δ13Cbryo varies as a function of the partial pressure of atmospheric CO2. Furthermore, if the assumptions of this empirical relationship are valid and time invariant, then this empirical relationship can in theory be used to predict the partial pressure of atmospheric CO2 based on carbon isotopic measurements of bryophytes.
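The two relations in Eqs. (1) and (2) can be evaluated numerically; the sketch below computes the partial pressure of CO2 at a given elevation and the corresponding C3 discrimination for an assumed pi/pa, using the constants given in the text. The sample elevation, surface pCO2, and pi/pa value are illustrative assumptions, not measurements from this study.

```r
# Eq. (2): partial pressure of CO2 at height h (m), given a surface value,
# with the scale height H = 7600 m quoted in the text.
pa_at_height <- function(pa_surface, h, H = 7600) {
  pa_surface * exp(-h / H)
}

# Eq. (1): Farquhar-type discrimination for a C3 plant, with diffusion and
# carboxylation fractionations of about 4.4 and 27 permil, respectively.
delta13C_discrimination <- function(pi_over_pa, a = 4.4, b = 27) {
  a + (b - a) * pi_over_pa
}

# Illustration: surface pCO2 of 40 Pa (~400 ppm at ~100 kPa), a site at 380 m
# elevation, and an assumed pi/pa of 0.51 (the modern buckbean value reported
# later in the text, which yields a discrimination near 16 permil).
pa_site <- pa_at_height(40, h = 380)
delta13C_discrimination(0.51)
```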
To test this prediction, we compiled data from four studies investigating carbon isotopic variability of different bryophytes, primarily mosses, along elevational transects at different locations. Based on the elevations, locations, and years in which these samples were collected, the atmospheric partial pressure of atmospheric CO2 was estimated from ERA-Interim reanalysis data of total atmospheric pressure (Dee et al., 2011) in conjunction with globally averaged atmospheric CO2 concentrations (GlobalView-CO2, 2013). For our analysis we only included measurements of carbon isotopic variability in non-vascular mosses, and all isotopic values were normalized to cellulose based on the empirical relationship reported by Ménot and Burns (2001). Carbon isotopic discrimination values for all plant material were calculated as:

Δ13C = (δ13Catm − δ13Cplant) / (1 + δ13Cplant/1000)    (3)

where δ13Cplant represents the carbon isotopic composition of plant cellulose and δ13Catm represents the mean annual carbon isotopic composition of atmospheric CO2 in the year when the samples were collected (GlobalView-CO2, 2013), or, in the case of sub-fossil mosses, when the samples were growing. In order to derive estimates of atmospheric CO2 concentrations during the Pliocene, we first had to estimate the δ13C of atmospheric CO2 during the Pliocene to solve for the Δ13C of mosses (Eq. 3). This was accomplished by simultaneous measurements of δ13C in the cellulose of the sub-fossil plant buckbean (Menyanthes trifoliata L.) that was also found at the BP site. We also measured δ13C of modern buckbean to constrain our estimates of pi/pa. For constraining our reconstructions of Pliocene CO2, carbon isotopic measurements were made on sub-fossil mosses (Scorpidium scorpioides (Hedw.) Limpr.). All plant and moss material was rinsed in deionized water and dried prior to cellulose extraction according to Leavitt and Danzer (1993). All carbon isotopic measurements were performed at the University of Arizona's environmental isotope laboratory.

Paleotemperature Reconstruction
Paleotemperature estimates were determined based on the distribution of fossilized, sedimentary membrane lipids known as branched glycerol dialkyl glycerol tetraethers (brGDGTs) that are well preserved in peat bogs, soils, and lakes (Powers et al., 2004; Weijers et al., 2007c). These unique lipids are thought to be synthesized by a wide array of Acidobacteria within the soil (Sinninghe Damsté et al., 2014; Sinninghe Damsté et al., 2011). Previously, it has been established that the degree of methyl branching (expressed in the methylation index of branched tetraethers; MBT) is correlated with mean annual air temperature (MAT), and the relative amount of cyclopentane moieties (cyclization index of branched tetraethers; CBT) has been shown to correlate with both soil pH and mean annual air temperature (Weijers et al., 2007b). Because of the relationship of the distribution of these fossilized membrane lipids with these environmental parameters, they have been used for paleoclimate applications in different environments including coastal marine sediments (Bendle et al., 2010; Weijers et al., 2007a), peats (Ballantyne et al., 2010; Naafs et al., 2017), paleosols (Peterse et al., 2011; Zech et al., 2012), and lacustrine sediments (Loomis et al., 2012; Niemann et al., 2012; Pearson et al., 2011; Zink et al., 2010).
Improved separation methods (Hopmans et al., 2016) have recently led to the separation and quantification of the 5-and 6methyl brGDGT isomers that used to be treated as one since the 6-methyl isomers were co-eluting with the 5-methyl isomers (De Jonge et al., 2013).This has led to the definition of new indices and improved MAT calibrations based on the global soil (De Jonge et al., 2014), peat (Naafs et al., 2017), and African lake (Russell et al., 2018) datasets. Sediment samples were freeze-dried and then ground and homogenized with a mortar and pestle.Next, using the Dionex TM accelerated solvent extraction (ASE), 0.5-1.0g of sediment was extracted with the solvent mixture of dichloromethane (DCM):methanol (9:1, v/v) at a temperature of 100°C and a pressure of 1500 psi (5 min each) with 60% flush and purge 60 s. The Caliper Turbovap®LV was utilized to concentrate the collected extract, which was then transferred using DCM and dried over anhydrous Na2SO4 before being concentrated again under a gentle stream of N2 gas.To quantify the amount of GDGTs, 1 µg of an internal standard (C46 GDGT; Huguet et al., 2006) was added to the total lipid extract.Then, the total lipid extract was separated into three fractions using hexane:DCM (9:1, v:v) for the apolar fraction, hexane:DCM (1:1, v:v) for the ketone fraction and DCM:MeOH (1:1, v:v) for the polar fraction, using a column composed of Al2O3, which was activated for 2 h at 150°C.The polar fraction, which contained the GDGTs, was concentrated under a steady stream of N2 gas before being then re-dissolved in hexane:isopropoanol (99:1, v:v) at a concentration of 10 mg ml -1 and subsequently passed through a 0.45 µm PTFE filter.Finally, the polar fractions were analyzed for GDGTs on a high performance liquid chromatographyatmospheric pressure positive ion chemical ionizationmass spectrometry (UHPLC-APCI-MS) using the method described by (Hopmans et al., 2016).The polar fractions of some samples were re-run on the UHPLC-APCI-MS multiple times and in those cases the fractional abundances of the brGDGTs were averaged and those values were used in the transfer functions.The overall estimate error of 2.03°C was determined by using the transfer function error (Ter = 2.0°C) and reproducibility error (Tre = 0.32°C), calculated as the average of the standard deviations from the duplicates. Mean summer air temperature (MST) was determined using the distributions of aquatically produced brGDGTs in the calibration developed by Pearson et al. (2011).When this calibration is used the fractional abundances of IIa and IIa′ must be summed because these two isomers co-eluted under the chromatographic conditions used by Pearson et al. (2011): The square brackets denote the fractional abundance of the brGDGT within the bracket relative to the total brGDGTs. Mean annual air temperature (MAT) and surface water pH were also calculated using a novel calibration created using sediments from East African lakes analysed with the novel chromatography method and based upon MBT′5Me (Russell et al., 2018). 
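To make the index-based calibration step concrete, the sketch below computes the MBT′5Me index from fractional abundances of the 5-methyl brGDGTs (following the definition introduced with the improved separation methods cited above) and applies a generic linear calibration of the form MAT = a + b × MBT′5Me. The fractional abundances and the calibration coefficients shown are placeholders for illustration; the actual coefficients come from the published calibration datasets cited in the text.

```r
# MBT'5Me index from fractional abundances of 5-methyl brGDGTs:
# tetramethylated (Ia, Ib, Ic) over tetra- plus 5-methyl penta-/hexamethylated.
mbt5me <- function(fa) {
  with(fa, (Ia + Ib + Ic) / (Ia + Ib + Ic + IIa + IIb + IIc + IIIa))
}

# Placeholder fractional abundances for a single hypothetical sample.
fa_example <- list(Ia = 0.15, Ib = 0.05, Ic = 0.02,
                   IIa = 0.25, IIb = 0.08, IIc = 0.03, IIIa = 0.30)

# Generic linear temperature transfer function, MAT = a + b * MBT'5Me.
# Coefficients here are placeholders, not the published calibration values.
mat_from_mbt <- function(index, a = -1.0, b = 30.0) a + b * index

idx <- mbt5me(fa_example)
mat_from_mbt(idx)
```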
Vegetation and Fire Reconstruction For charcoal, a total of thirty 2 cm 3 samples were taken at 5 cm intervals from depths from 300 and 301.45 MASL at the BP site, with an additional 2cm -3 sample collected at 301.65 MASL.All samples were deflocculated using sodium hexametaphosphate and passed through 500, 250 and 125 μm nested mesh sieves.The residual sample caught on each sieve was then collected in a gridded petri dish and examined using a stereomicroscope at 20-40X magnification to obtain charcoal concentration (fragments cm -3 ).Charcoal area (mm 2 cm -3 ) was measured for each sample using specialized imaging software from Scion Corporation.For a detailed description of methods see Brown and Power (2013). Vegetation was reconstructed using pollen and spores (herein pollen) at selected elevations at an upper and lower elevation and that corresponded with changes in charcoal.Samples were processed using standard approaches (Moore et al., 1991), whereby 1cm 3 sediment subsamples were treated with 5% KOH to remove humic acids and break up the samples.Carbonates were dissolved using 10% HCl, whereas silicates and organics were removed by HF and acetolysis treatment, respectively. Pollen slides were made by homogenizing 35 µl of residue, measured using a single-channel pipette, with 15 µl of melted In addition to tabulating pollen and charcoal, a list of taxa derived from Beaver Pond was previously compiled in Fletcher et al. (2017).Extant species from this list were selected and their modern observations extracted from The Global Biodiversity Information Facility (GBIF.org,2017).Observation data was grouped by 5° latitude 5° longitude grids cells, and the shared species count calculated using R (R Core Team, 2016).Modern fire frequency was mapped using the MODIS 6 Active Fire Product.The fire pixel detection count per day, within the same 5° latitude 5° longitude grids cells was counted over the ten years 2006-2015, and standardized by area of the cell.The modern climate maps were generated using data from WorldClim 1.4 (Hijmans et al., 2005).The values for the bioclimatic variables mean temperature of the warmest quarter (equivalent to mean summer air temperature; MST) and precipitation of the warmest quarter (summer precipitation) were also averaged by grid cell.The shared species count, climate values, and fire day detections were mapped to the northern polar stereographic projection in ArcMap 10.1. 
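A minimal R sketch of the grid-cell aggregation described above follows: occurrence records of extant taxa are binned into 5° latitude by 5° longitude cells and the number of distinct shared species per cell is counted, in the spirit of the analysis run in R (R Core Team, 2016). The small occurrence table is invented for illustration; in the study this step used records downloaded from GBIF.

```r
# Hypothetical occurrence records (species, decimal latitude/longitude).
occ <- data.frame(
  species = c("Larix laricina", "Betula nana", "Betula nana", "Alnus incana"),
  lat     = c(62.1, 67.4, 63.0, 61.8),
  lon     = c(-145.3, -150.2, -146.9, -149.5)
)

# Assign each record to a 5-degree by 5-degree grid cell.
occ$cell <- paste(floor(occ$lat / 5) * 5, floor(occ$lon / 5) * 5, sep = "_")

# Shared species count per cell: number of distinct taxa observed in each cell.
shared_count <- tapply(occ$species, occ$cell, function(x) length(unique(x)))
shared_count
```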
Results

Geochronology
The burial dating results with 26Al/10Be in quartz sand at 10 m below the modern surface provide four individual ages. From shallowest to deepest, the burial ages are 3.6 +1.5/-0.5 Ma, 3.9 +3.7/-0.5 Ma, 4.1 +5.8/-0.4 Ma, and 4.0 +1.5/-0.4 Ma (Table S3), with an unweighted mean age of 3.9 Ma. The convoluted probability distribution function yields a maximum probability age of 4.5 Ma. The optimized ages for the top and bottom samples were 3.6 and 4.0 Ma, in agreement with the individual most probable ages. Optimized ages could not be computed for the two middle samples owing to the large uncertainties in the 26Al/27Al measurement, which caused the uncertainties to extend beyond the maximum saturation burial ages (ca. 8 Ma). Despite the apparent upward younging of the burial ages, the 1σ uncertainties overlap, rendering the samples indistinguishable. Given the asymmetry in the probability distribution functions, and the inability to convolve all samples, the unweighted mean age is 3.9 Ma, with an uncertainty of +1.5/-0.5 Ma as indicated by the two samples with unsaturated limits. The age of the Beaver Pond peat is stratigraphically younger; however, considering time for lateral channel migration and aggradation on the contemporaneous braid plain, the peat is likely older by 10^4 to 10^5 years, i.e. within the uncertainty of the mean burial date.

Atmospheric CO2 Reconstruction
As expected, carbon isotopic discrimination in mosses shows a positive relationship with the partial pressure of atmospheric CO2 (Fig. 3) and, as predicted from theory, the natural logarithm of carbon isotopic discrimination in mosses (Δ13Cmoss) is responsive to pa. Consistent with the exponential relationship between elevation and atmospheric pressure (Eq. 2), the relationship is nonlinear, with a greater change in pa, and thus a greater change in Δ13Cmoss, at lower elevations. Despite fitting our model to Ln(Δ13Cmoss), considerable residual scatter remains (RMSE = 1.097 ‰), suggesting that other non-linear processes, and not just pa, may be affecting δ13Cmoss variability with elevation. While there does appear to be a global relationship between pa and the Δ13C of mosses, there are notable differences among sites. Moss Δ13C values tended to be generally lower in the Swiss Alps (mean = 17.4 ‰) and higher in Hawaii (mean = 20.6 ‰), and the slope of the relationship between pa and Δ13C appears to vary across sites, with the Andes having the smallest slope and Poland a much greater slope. While these subtle differences are probably due to local climate effects, it appears that pa is the primary physical mechanism explaining the previously reported relationship between the Δ13C of mosses and elevation across all sites (Ménot and Burns, 2001; Royles et al., 2014; Skrzypek et al., 2007; Waite and Sack, 2011). We also evaluated model performance using a global standard atmospheric sea-level pressure of 101.325 kPa, or site-specific atmospheric pressure estimates from ERA-Interim reanalysis data. We found that the model using site-specific atmospheric pressure estimates performed better at predicting Δ13Cmoss (RMSE = 1.096 ‰) than the model using global standard atmospheric sea-level pressure (RMSE = 1.216 ‰). Thus the optimal model characterizing the observed modern relationship between Δ13Cmoss and pa was a second-order polynomial (Eq. 9). This polynomial was solved numerically to derive estimates of pCO2 during the Pliocene based on sub-fossil moss samples collected at the BP site. Based on our analysis of cellulose extracted from four different Menyanthes L.
(buckbean) plants growing at different locations in the modern boreal forest, we found the Δ13C of buckbean to be fairly constant at 16 ± 0.4 ‰, yielding an estimate of pi/pa in modern buckbean of 0.51. Applying this modern value of pi/pa to our δ13C measurements from sub-fossil buckbean, we obtained estimates of δ13Catm during the Pliocene of -6.23 ± 0.9 ‰. Using our empirical transfer function (Eq. 9) in combination with these estimates of δ13Catm allowed us to approximate atmospheric CO2 concentrations over the Pliocene interval captured at the BP site (Fig. 4). We estimated a mean atmospheric CO2 concentration over this interval of 440 ± 50 ppm, with considerable variability between a minimum atmospheric CO2 concentration of 270 ppm and a maximum of 470 ppm. Overall, this proxy approach had a prediction uncertainty of less than 10% of the estimate (1σ = 35 ppm).

Provenance of branched GDGTs
Previously, brGDGT-derived MAT estimates (-0.6 ± 5.0 °C) from BP sediments were developed using the older chromatography methods that did not separate the 5- and 6-methyl brGDGTs, and a soil calibration (Ballantyne et al., 2010). In marine and lacustrine sediments, bacterial brGDGTs were thought to originate predominantly from continental soil erosion, arriving in the sediments through terrestrial runoff; however, a number of more recent studies have indicated that aquatically produced brGDGTs could be affecting the distribution of the sedimentary brGDGTs and thus the temperature estimates based upon them (Warden et al., 2016; Zell et al., 2013; Zhu et al., 2011). Since the discovery that sedimentary brGDGTs can have varying sources, different calibrations have been developed depending on the origin of the brGDGTs, i.e. a soil calibration (De Jonge et al., 2014), a peat calibration (Naafs et al., 2017) and aquatic calibrations (e.g. Foster et al., 2016; Pearson et al., 2011; Russell et al., 2018). Therefore, several studies have recommended that the potential sources of the sedimentary brGDGTs should be investigated before attempting to use brGDGTs for paleoclimate applications (De Jonge et al., 2015; Warden et al., 2016; Yang et al., 2013; Zell et al., 2013). In this study, we examine the distribution of brGDGTs in an attempt to determine their origin and consequently the most appropriate calibration to utilize in order to reconstruct temperatures from the BP sediments. Branched GDGTs IIIa and IIIa′ on average had the highest fractional abundance of the brGDGTs detected in the BP sediments (see Fig. S1 for structures; Table S1). A previous study established that, when plotted in a ternary diagram of the fractional abundances of the tetra-, penta- and hexamethylated brGDGTs, soils lie within a distinct area (Sinninghe Damsté, 2016). To assess whether the brGDGTs in the BP deposit were predominantly derived from soils, we compared the fractional abundances of the tetra-, penta- and hexamethylated brGDGTs in the BP sediments to those from modern datasets in a ternary diagram (Fig. 6). Since the contribution of brGDGTs from either peat or aquatic production could affect the use of brGDGTs for paleoclimate application, in addition to comparing the samples to the global soil dataset (De Jonge et al., 2014), peat and lacustrine sediment samples were added to the ternary plot to help elucidate the provenance of brGDGTs in the BP sediments.
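The transfer-function inversion described at the start of this section can be done with a one-dimensional root finder; the sketch below inverts a hypothetical second-order polynomial relating Ln(Δ13Cmoss) to pa and converts the recovered partial pressure to a mixing ratio using the summertime surface pressure quoted for the site's modern elevation. The polynomial coefficients and the measured discrimination value are placeholders, not the fitted values from this study.

```r
# Hypothetical transfer function: Ln(Delta13C_moss) = c0 + c1*pa + c2*pa^2, pa in Pa.
# Coefficients are placeholders for illustration, not the fitted model (Eq. 9).
predict_ln_delta <- function(pa, c0 = 2.60, c1 = 0.012, c2 = -5e-5) {
  c0 + c1 * pa + c2 * pa^2
}

# Invert numerically: find the pa for which the predicted Ln(Delta) matches a
# hypothetical measured moss discrimination of 18 permil.
measured_delta <- 18
pa_solution <- uniroot(function(pa) predict_ln_delta(pa) - log(measured_delta),
                       interval = c(5, 80))$root

# Convert partial pressure (Pa) to a mixing ratio (ppm) using the ~88.5 kPa
# summertime total atmospheric pressure reported for the site's modern elevation.
co2_ppm <- pa_solution / 88500 * 1e6
c(pa_Pa = pa_solution, CO2_ppm = co2_ppm)
```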
According to Sinninghe Damsté (2016), it is imperative to only compare samples in a ternary diagram like this where all of the datasets were analyzed with the novel methods that separate the 5-and 6-methyl brGDGTs since the improved separation can result in an increased abundance of hexamethylated brGDGTs.Recently, samples from East African lake sediments were analyzed using these new methods (Russell et al., 2018) and so these samples were included in the ternary plot for comparison (Fig. 6).Although the lakes from the East African dataset are all from a tropical area, they vary widely in altitude and, thus, in MAT.We separated them into three categories by MAT (lakes >20°C, lakes between 10-20°C and lakes<10°C).By comparing all the samples in the ternary plot, it was evident that the BP samples plotted closest to the lacustrine sediment samples from regions in East Africa with a MAT <10°C, suggesting that the provenance of the majority of the brGDGTs from the BP sediments was not soil or peat but lacustrine aquatic production. The average estimated surface water pH for the BP sediments (8.6±0.2) calculated using eq.( 8), is within the 6 -9 range typical of lakes and rivers (Mattson, 1999).This value is near the upper limit of rich fens characterized by the presence of S. A predominant origin from lake aquatic production is in keeping with previous interpretation of the paleoenvironment of the BP site, which was at least at times covered by water as evidenced by fresh water diatoms, fish remains and gnawed beaver sticks in the sediment (Mitchell et al., 2016). Aquatic Temperature Transfer Function Since there is evidence that the majority of the brGDGTs in the BP sediments are aquatically produced, an aquatic transfer function was used for reconstructing temperature.When we apply the African lake calibration (Eq.7), the resulting estimated MAT for BP is 7.1 ± 1.0 °C.This value is high compared to other previously published estimates from varying proxies, which have estimated MAT in this region to be in the range of -5.5 to 0.8°C, (Ballantyne et al., 2010;Ballantyne et al., 2006;Csank et al., 2011a;Csank et al., 2011b;Fletcher et al., 2017).A concern when applying this calibration is that it is based on lakes from an equatorial region that does not experience substantial seasonality, whereas, the Pliocene Arctic BP site did experience substantial seasonality (Fletcher et al., 2017).Biological production (including brGDGT production) in BP was likely skewed towards summer and, therefore, summer temperature has a larger influence on the reconstructed MAT.Unfortunately, no global lake calibration set using individually quantified 5-and 6-methyl brGDGTs is yet available.Therefore, to calculate mean summer air temperatures (MST, Eq. 6) we applied the aquatic transfer function developed by Pearson et al. (2011) by combining the individual fractional abundances of the 5-and 6-methyl brGDGTs.The Pearson et al. (2011) calibration was based on a global suite of lake sediments including samples from the Arctic, thus covering a greater range of seasonal variability.The resulting average estimated mean summer temperature was 15.4 ± 0.8 °C, with temperatures ranging between 14.1 and 17.4 °C (Fig. 4).This is in good agreement with recent estimates based on Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE; Fletcher et al., 2017) that concluded that MSTs at BP during the Pliocene were approximately 13 to 15°C. 
Vegetation and Fire Reconstruction
All sediment samples from BP contained charcoal (Fig. 4), indicating the consistent prevalence of biomass burning in the High Arctic during this time period. However, counts were variable throughout the section, with the middle and lower sections (18 fragments cm-3) containing less charcoal than the upper section (710 fragments cm-3). Overall, samples from BP contained on average 100.0 ± 165 fragments cm-3 (mean ± 1σ), with charcoal area averaging 12.3 ± 20.2 mm2 cm-3. The variability of charcoal within any given sample was relatively low, with a 1σ in charcoal area of approximately 2 mm2 cm-3. According to the GBIF-based mapping exercise, the paleofloral assemblage at BP most closely resembles modern-day vegetation found in northern North America, particularly on the eastern margin (e.g. New Hampshire, New Brunswick and Nova Scotia) and the western margin (Alaska, Washington, British Columbia, and Alberta; Fig. 7a), and central Fennoscandia. Of these areas, the western coast of northern North America and the eastern coast of southern Sweden have the most similarity to the reconstructed BP climate in terms of MST (Fig. 7b) and summer precipitation (Fig. 7c). While high counts of active fire days are common in the western part of the North American boreal forest, they are not as common in the eastern part (Fig. 7d), likely due to differences in the precipitation regime. Fire counts were also low in Fennoscandia, likely due to historically severe fire suppression (Brown and Giesecke, 2014; Niklasson and Granström, 2004). Therefore, based on our reconstruction of the climate and ecology of the BP site, our results suggest that BP most closely resembled a boreal-type forest ecosystem shaped by fire, similar to those of Washington, British Columbia, the Northwest Territories, Yukon and Alaska (but see Sect. 4.3).

Discussion

Geochronology
The plant and animal fossil assemblages observed at BP suggest a depositional age between 3 and 5 Ma (Matthews Jr and Ovenden, 1990; Tedford and Harington, 2003). This biostratigraphic age was corroborated by an amino-acid racemization age (>2.4 ± 0.5 Ma) and a Sr-correlation age (2.8-5.1 Ma) on shells (Brigham-Grette and Carter, 1992) in biostratigraphically correlated sediments on Meighen Island, situated 375 km to the west-north-west. The previously calculated burial age of 3.4 Ma for the BP site is a minimum age because no post-depositional production of 26Al or 10Be by muons was assumed. If the samples are considered to have been buried at only the current depth (ca. 10 m, see supplemental data), then the ages plot to the left and outside of the burial field, indicating that the burial depth was significantly deeper for most of the post-depositional history. The revised cosmogenic nuclide burial age is 3.9 +1.5/-0.5 Ma. It is the best interpretation of the burial age data based on improved production rate systematics (e.g. Lifton et al., 2014) and more reasonable estimates of erosion rate and ice cover since the mid-Pliocene (see Table S4). As the stratigraphic position of the cosmogenic samples is very close to the BP peat layers, we interpret the age to represent the approximate time that the peat was deposited.
Pliocene atmospheric CO2 levels We have derived a transfer function that allows us to predict the partial pressure of atmospheric CO2 in Earths' past based on carbon isotopic measurements in byrophytes.However, many of the studies included in our transfer function identify other mechanisms that may also influence carbon isotopic discrimination in bryophytes.Because these other mechanisms may violate the assumptions of applying this transfer function to the past or contribute error to our reconstructions of atmospheric CO2 concentrations during the Pliocene, we discuss these mechanisms below. It has been suggested that in the absence of stomatal regulation, that surface water may control the gradient in partial pressure (i.e.pi / pa ) in bryophytes (White et al., 1994), due to the greater resistance to diffusion of CO2 in water than in the atmosphere. For instance, Ménot and Burns (2001) found that most mosses growing along an elevational transect in Switzerland experienced discrimination with elevation in response to decreased partial pressure, except one species Sphagnum cuspidatum Ehrh. ex Hoffm., which grows almost exclusively in wet hollows.In a study of Hawaiian bryophytes Waite and Sack (2011) found consistent slopes of less isotopic discrimination with elevation in all species, however, species growing on young substrate showed significantly less isotopic discrimination.The most likely explanation is that lack of canopy cover on the older substrates lead to greater photosynthetic rates, which lead to reduced pi.Lastly, decreased discrimination of mosses growing along an elevational transect in Poland (Skrzypek et al., 2007), was found to be highly correlated with temperature. Although temperature is the primary factor driving most metabolic reactions, it does not provide a physical mechanism explaining the relationship between elevation and isotopic discrimination in mosses.Skrzypek et al. (2007) found slightly different relationships between elevation and carbon isotopic discrimination in mosses growing on the windward versus leeward side of their elevational transects suggesting that changes in lapse rate may also play a factor.Collectively, these studies suggest that microclimatic factors may explain differences in isotopic discrimination of mosses within and among different sites possibly contributing to different intercepts for sites reported in Fig. 3, and that dry vs. moist lapse rates may also play a role in regulating the different slopes among sites.In fact, the greatest elevational range reported among sites was for the elevational transect in the Andes (320 to 3100 m), but this site did not experience the widest range in Δ 13 Cmoss.This tropical transect had a very moist lapse rate resulting in the least change in atmospheric temperature and humidity with elevation.Nonetheless, by projecting these data as a function of partial pressure we provide a physical mechanism to explain variations in moss carbon isotopic values globally and we help reconcile the previously reported empirical relationships, such as elevation, temperature, and over-story, all of which tend to be covariates of decreasing partial pressure with elevation.While differences in microclimate and lapse rate are clearly important factors in regulating Δ 13 Cmoss, these factors contribute to the global error in our model for predicting pa and ultimately to uncertainties in our estimates of atmospheric CO2 concentrations during the Pliocene. 
Our reconstructions of CO2 concentration for this mid-Pliocene interval are within the range of previously reported CO2 estimates, tending to agree with alkenone estimates from Pagani et al. (2010). This suggests that CO2 concentrations during this warm Pliocene interval were above 400 ppm. In fact, our mean Pliocene value (440 ± 50 ppm) is not statistically different from the alkenone-based estimates (357 ± 47 ppm) previously reported by Pagani et al. (2010). Generally, our estimates showed sustained atmospheric CO2 of approximately 450 ppm, with only four anomalously low values (Fig. 4). These estimates could represent an actual reduction in atmospheric CO2, or they might be artefacts of sampling or analysis. It should be noted that poor preservation and a possible shift in dominant moss species to Drepanocladus spp. were evident in samples corresponding to two of these anomalously low CO2 estimates. While one of these samples contained only 0.17 mg C and had a δ13C value of -20.9 ‰, the other contained 0.88 mg C and had a δ13C value of -25.0 ‰. Thus it is conceivable that the sample corresponding to the atmospheric CO2 estimate of 270 ppm might be approaching our minimum detection limit and should be verified in subsequent studies. Overall, these CO2 estimates are consistent with recent estimates derived from both alkenones and from boron isotopes (Martinez-Boti et al., 2015; Seki et al., 2010).
It should also be noted that changes in growth rate due to phosphorus availability and biases in shell size are known to contribute uncertainty to alkenone-derived CO2 concentration estimates (Seki et al., 2010). Similar assumptions may affect boron-derived estimates of CO2 concentrations. For instance, a recent update on the global boron cycle estimates the mean residence time of boron to be ~1.5 Ma and suggests that boron isotopes may not be sensitive to ocean pH on timescales less than 1 Ma (Schlesinger and Vengosh, 2016). This may help explain the apparent lack of variability in boron isotope and boron/calcium based CO2 estimates during the Pliocene (Hönisch et al., 2009; Tripati et al., 2009); however, boron isotopes do seem to reproduce the CO2 variability measured in ice cores over the Pleistocene (Hönisch et al., 2009). Our atmospheric CO2 estimates are in the range of previous estimates, although the values seem slightly high and are variable, suggesting that these estimates probably represent atmospheric conditions during a Pliocene warm interval and not necessarily an integrated average over the entire Pliocene.
There are numerous assumptions, with known uncertainties, in our CO2 reconstruction approach. First of all, our empirically based approach requires some estimate of the isotopic ratio of atmospheric CO2 during this time, which we derive from C3 vegetation (Fletcher et al., 2008; White et al., 1994). Here we estimate the isotopic composition of the atmosphere over the Pliocene to be δ13C = -6.23 ± 0.9 ‰, which is within the range of values recorded over glacial-interglacial intervals in ice cores (δ13C = -6.2 to -7.0 ‰; Bauska et al., 2016) and consistent with estimates derived from carbon isotope measurements of foraminifera (Ravelo et al., 2004). If we instead assume that the isotopic composition of atmospheric CO2 was -8.2 ‰ during the Pliocene, similar to today, due to greater transfer of lighter carbon from the terrestrial reservoir to the atmospheric reservoir, that would result in reduced Δ13Cmoss and would decrease our mean estimate of atmospheric CO2 to approximately 420 ppm. This adjustment to our original estimate of the δ13C of atmospheric CO2 would bring our atmospheric CO2 estimate more in line with previous reconstructions, but it is still within the range of error of our original estimate.
Another critical assumption of our approach is that the total pressure of the atmosphere has not changed at the BP site since the Pliocene, either through increased partial pressure of constituent gases or, more likely, through changes in elevation due to dynamic isostasy. The current elevation of the site is approximately 380 MASL, with a summertime total atmospheric pressure of approximately 88.5 kPa. If we assume that the site was at 0 m during the Pliocene, that would increase the total summertime atmospheric pressure to 93.9 kPa and would decrease our Pliocene CO2 estimates to about 420 ppm. However, the small magnitude of this effect suggests that our assumptions regarding elevation at the site probably have a negligible impact on our estimates of Pliocene atmospheric CO2 concentrations, especially given the uncertainty of the proxy approach. Therefore, the assumptions underlying our approach to estimating past CO2 may be leading to estimates that are biased slightly high relative to previous estimates. When these assumptions are considered, our estimates still suggest atmospheric CO2 concentrations exceeding 400 ppm during this Pliocene warm interval.
Fire, vegetation, climate
The vegetation reconstruction indicates that open Larix-Betula parkland persisted in the basal (300.3-300.4 MASL) parts of the sequence. Groundcover was additionally dominated by shrub birch, ericaceous heath and ferns. While the regional climate may have been somewhat dry, the record suggests that, locally, a moist fen environment dominated by Cyperaceae existed near the sampling location. Shrubs including Alnus and Salix likely occupied the wetland margins.
The corresponding relatively low concentration of charcoal may reflect lower severity fires or higher sedimentation rates. If the former, it is posited that a surface fire regime existed. This premise is supported by the fire ecology characteristics of the dominant vegetation. Larix does not support crown fires due to leaf moisture content (de Groot et al., 2013) and self-pruning (Kobayashi et al., 2007). The persistence and success of larch in modern-day Siberia appears to be driven by its high growth rate (Jacquelyn et al., 2017), tolerance of frequent surface fire due to thick lower bark (Kobayashi et al., 2007), and tolerance of spring drought due to its deciduous habit (Berg and Chapin III, 1994). Arboreal Betula are very intolerant of fire and easily girdled. However, they are quick to resprout and are often found in areas with short fire return intervals. Like Larix, arboreal Betula have high moisture content in their foliage and are not prone to crown fires. Betula nana L., an extant dwarf birch, is a fire endurer that resprouts from underground rhizomes or roots (Racine et al., 1987), thus regenerating quickly following lower severity fires (de Groot et al., 1997). The vegetation and fire regime characteristics are similar further up the sequence at 301.15-301.25 MASL, with the exception that ferns increased in abundance while heath decreased.
In the upper part of the sequence, where charcoal was abundant, the Larix-Betula parkland was replaced by a mixed boreal forest assemblage with a fern understory. Canopy cover was more closed compared to the preceding intervals. The forest was dominated by Larix and Picea, with lesser amounts of Pinus. While Betula remained part of the forest, it decreased in abundance, possibly due to increased competition with the conifers. Based on exploratory CRACLE analyses of climate preferences using GBIF occurrence data (GBIF.org, 2018a, b, c, d) of the dominant taxa (Larix-Betula vs. Larix-Picea-Pinus), the expansion of conifers could indicate slightly warmer summers (MST ~15.8 ºC vs. 17.1 ºC). This result is in contrast to the stable MST estimated by bacterial tetraethers, although within reported error, and the small change is certainly within the climate distributions of both communities. The analyses also suggest that slightly drier conditions may have prevailed during the three wettest months (249-285 mm vs.
192-219 mm). While the interaction between climate, vegetation and fire is complex, the aforementioned changes in climate could have directly altered both the vegetation and fire regime, which in turn further promoted fire-adapted taxa. In addition to regional climatic factors, community change at the site may have been further influenced by local hydrological conditions, such as channel migration, pond infilling and ecosystem engineering by beaver (Castor spp.).
The high charcoal content suggests that fire was an important disturbance mechanism, although it could also reflect a slow sedimentation rate. If the former, it is likely that frequent, mixed severity fires persisted. While Larix is associated with surface fire, Picea and Pinus are adapted to higher intensity crown fires. A crown fire regime may have established as conifers expanded, altering fuel loads and flammability. For example, black spruce, which was previously tentatively identified at BP (Fletcher et al., 2017), sheds highly flammable needles and its lower branches can act as fuel ladders facilitating crown fires (Kasischke et al., 2008). While it has thin bark and shallow roots maladapted to survive fire (Auclair, 1985; Brown, 2008; Kasischke et al., 2008), it releases large numbers of seeds from semi-serotinous cones, leading to rapid re-establishment (Côté et al., 2003). The documentation of Onagraceae pollen at the top of the sequence could potentially reflect post-fire succession. For example, the species Epilobium angustifolium L. is an early-seral colonizer of disturbed (i.e. burnt) sites, pollinated by insects.
It is possible that the Larix-Betula parkland dominated intervals correspond to the peat and sand stratigraphic Units II and III described by Mitchell et al. (2016), whereas the mixed boreal forest in the upper part of the sequence is contemporaneous with Unit IV, described as peat and peaty sand, coarsening upwards. While it is clear that the vegetation and fire regimes changed through time at this Arctic site, CO2 and temperatures appear more stable, or at least to have no apparent trend. Thus, it is suggested that the fire regime at BP was primarily regulated by regional climate and vegetation, and perhaps additionally by changing local hydrological conditions. Regarding climate, MST remained high enough throughout the sequence to allow for fire disturbance, and the pollen suggests that temperatures may have marginally increased in the upper part of the sequence. Alternatively, other climate variables, such as the precipitation regime, or local hydrological change may have initiated the change in community. Up-sequence changes in vegetation undoubtedly influenced fine fuel loads and flammability. Indeed, the fire ecological characteristics of the vegetation are consistent with a regional surface fire regime yielding to a crown fire regime. Betula and Alnus, which occurred earlier in the depositional sequence, are favored by beaver as forage (Busher, 1996; Haarberg and Rosell, 2006; Jenkins, 1979). Moreover, the presence of sticks cut by beaver in Unit III reveals that beavers were indeed at the site, moistening the local land surface. The lack of beaver-cut sticks and changes in sediment in Unit IV may indicate that the beavers abandoned the site, possibly in response to changes in vegetation (i.e. increased conifers and decreased Betula) limiting preferred forage, or due to lateral channel migration, as evidenced by the coarsening upward sequence described by Mitchell et al.
(2016). As a result, the local land surface may have become somewhat drier, contemporaneous with the change towards Larix-Picea-Pinus forest and a mixed severity fire regime. Matthews and Fyles (2000) similarly indicated that the Pliocene BP environment was characterized by an open larch-dominated forest-tundra environment, sharing most species in common with those now found in three regions, including central Alaska to Washington in western North America, the region centered around the Canadian/US border in eastern North America, as well as Fennoscandia in Europe (Fig. 7a). Wildfire is a key driver of ecological processes in modern boreal forests (Flannigan et al., 2009; Ryan, 2002), and although historically rare, is becoming more frequent in the tundra in recent years (Mack et al., 2011). The modern increase in fire frequency is likely a consequence of atmospheric CO2 driven climate warming and feedbacks such as reduced sea ice extent (Hu et al., 2010), because the probability of fire is highest where temperature and moisture are conducive to growth and drying of fuels followed by conditions that favor ignition (Whitman et al., 2015). Young et al. (2017) confirmed the importance of summer warmth and moisture availability patterns in predicting fire across Alaska, highlighting a July temperature of ~13.5 °C as a key threshold for fire across Alaska.
The abundance of charcoal at BP demonstrates that climatic conditions were conducive for ignition and that sufficient biomass available for combustion existed across the landscape. Mean summer temperatures at BP likely exceeded the ~13.5 °C threshold (Young et al., 2017) that drastically increases the chance of wildfire, as demonstrated here from brGDGT-derived temperatures and corroborated by previous studies with a seasonal component (Csank et al., 2011b; Fletcher et al., 2017). An increase in atmospheric convection has been simulated in response to diminished sea ice during warmer intervals (Abbot and Tziperman, 2008), but that study did not confirm whether the increase in atmospheric convection was sufficient to cause lightning ignitions. An alternative ignition source for combustion of biomass on Ellesmere Island during the Pliocene is coal seam fires, which have been documented to be burning at this time (Estrada et al., 2009). However, given the interaction of summer warmth and ignition by lightning within the same climate range as posited for BP, we consider lightning the most likely source of ignition for Pliocene fires in the High Arctic.
Fire return intervals cannot be calculated from the BP charcoal counts due to the absence of a satisfactory age-depth model and discontinuous sampling. As strong interactions are observed between fire regime and ecosystem assemblage in the boreal forest (Brown and Giesecke, 2014; Kasischke and Turetsky, 2006), and in response to climate, comparison with modern fire regimes for areas with shared species compositions and climates may inform a potential range of mean fire return interval (MFRI). The modern area with the most species in common with BP is central northern Alaska (Fig. 7A). The area over which shared species were calculated is largely tundra, but includes the ecotone between tundra and boreal forest. Other zones that share many species with BP are continuous with Alaska down the western coast of North America to the region around the border of Canada and the United States, the eastern coast of North America in the region around the border of Canada and the United States (~50°N), and central Fennoscandia. Of these zones, the MST of Alaskan tundra sites (6-9°C) is less similar to BP (15.4°C) than that of the ~50°N western and eastern coastal North American sites and central Fennoscandia (12-18°C, Fig. 7B). The eastern coast of North America has higher rainfall during the summer (>270 mm) than the west coast and Alaska (Fig.
7C), which correlates to the timing of western fires. The low summer precipitation for much of the west (<200 mm) is consistent with previously published summer precipitation estimates for BP (~190 mm). As a result, the fire regime of the west coast at ~50°N may be a better analogue for BP than the east coast of North America. In central Fennoscandia there is also a west vs. east coastal variation in summer precipitation, with the western, Nordic part of the region experiencing higher summer precipitation (252 to >288 mm) than the more similar eastern, Swedish part of the region (~198 mm). Comparison to modern fire detection data (Fig. 7D) suggests that the two regions most climatically similar to BP, ~50°N western North America and central Sweden, have radically different fire regimes. This is likely caused by historical fire suppression in Sweden, which limits the utility of very modern data for comparison in this study (Brown and Giesecke, 2014; Niklasson and Granström, 2004). To understand the fire regimes as shaped by climate and species composition rather than human impacts, we considered both the modern and recent Holocene reconstructions for these regions (Table 1). This shows that (a) within any region, variation arises from the complex spatial patterning of fire across landscapes, and (b) the regions most similar to BP (~50°N western North American and eastern Fennoscandian reconstructions for the recent Holocene) have shorter fire return intervals than the cooler Alaskan tundra or the wetter-summer ~50°N eastern North American coast.
While the number of shared species for Siberia appears low, the number of observations in the modern biodiversity database used is likewise low, perhaps causatively so. Given the similar climate to BP on the Central Siberian Plateau and some key aspects of the floras in Siberia, such as the dominance of larch, we considered the fire regime of the larch forests of Siberia. Kharuk et al. (2016; 2011) studied MFRIs across Siberia, from 64°N to 71°N, the northern limit of larch stands. They found an average MFRI across that range of 110 years, with MFRI increasing from 80 years in the southern latitudes to ~300 years in the north (Table 1). Based on similarity of the climate variables, the more southerly MFRIs (~80 years) may be a better analogue. Key differences between boreal fires in North America compared to Russia are a higher fire frequency with more burned area in Russia, but a much lower incidence of crown fire, and a difference in the timing of disturbance, with spring fires prevailing in Russia compared to mid-summer fires in western Canada (de Groot et al., 2013; Rogers et al., 2015).
Critically, the charcoal record reveals that biomass burning could have been a potential feedback mechanism amplifying or dampening warming during the Pliocene, due to its prevalence through time and its complex direct impacts on the surface radiative budget and direct and indirect effects on the top-of-the-atmosphere radiative budget (Feng et al., 2016). Further investigation is warranted to better characterize the fire regime to improve the accuracy of fire simulations in earth system models of Pliocene climate.
CONCLUSION
The record of high CO2 supports the hypothesis that Pliocene Arctic terrestrial fossil localities probably represent periods of higher warmth that supported higher productivity. The novel temperature estimates presented here suggest that summer temperatures were considerably warmer during the Pliocene (~15.4°C) compared to modern day Eureka, Canada (~4.1°C; Fig.
2). This highlights the increasing influence of Arctic amplification of temperatures as CO2 exceeds modern levels. Our reconstruction of the paleovegetation and ecology of this unique site on Ellesmere Island suggests an assemblage similar to forests of the western margins of North America and eastern Fennoscandia. The evidence of recurrent fire and concurrent changes in taxonomic composition suggests that fire played an active role in Pliocene forests, shaping the environment and influencing the climate of the Arctic during the Pliocene. The importance of fire in the modern boreal forest suggests that fire may have had direct and indirect impacts on Earth's radiative budget at high latitudes during the Pliocene, although the net
Figure 3. Sensitivity of carbon isotopic discrimination to the partial pressure of atmospheric CO2 in mosses from two different elevational transects. Moss carbon isotope data collected from an elevational transect in the Swiss Alps (Ménot and Burns, 2001)
Figure 4. Reconstruction of atmospheric CO2, mean summer temperature, and fire for the Canadian High Arctic during the Pliocene, from the 2006 series unless noted. Atmospheric CO2 concentrations estimated from carbon isotopic measurements of mosses and plants (red; ±2σ). Mean summer temperature reconstructed from a brGDGT-based proxy (blue; ±2σ).
2018-12-05T16:12:35.341Z
2018-06-12T00:00:00.000
{ "year": 2018, "sha1": "72f37d4410a7ded674b05e233daea5559165f50b", "oa_license": "CCBY", "oa_url": "https://cp.copernicus.org/articles/15/1063/2019/cp-15-1063-2019.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "72f37d4410a7ded674b05e233daea5559165f50b", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Environmental Science" ] }
252412113
pes2o/s2orc
v3-fos-license
Single charge control of localized excitons in heterostructures with ferroelectric thin films and two-dimensional transition metal dichalcogenides
Single charge control of localized excitons (LXs) in two-dimensional transition metal dichalcogenides (TMDCs) is crucial for potential applications in quantum information processing and storage. However, the traditional electrostatic doping method of applying metallic gates onto TMDCs may cause inhomogeneous charge distribution, optical quenching, and energy loss. Here, by locally controlling the ferroelectric polarization of the ferroelectric thin film BiFeO3 (BFO) with a scanning probe, we can deterministically manipulate the doping type of monolayer WSe2 to achieve p-type and n-type doping. This non-volatile approach can maintain the doping type and hold the localized excitonic charges for a long time without an applied voltage. Our work demonstrates that the ferroelectric polarization of BFO can control the charges of LXs effectively. Neutral and charged LXs have been observed in different ferroelectric polarization regions, as confirmed by magneto-optical measurements. A high degree of circular polarization, about 90%, of the photon emission from these quantum emitters has been achieved in high magnetic fields. Controlling the single charge of LXs in a non-volatile way shows great potential for deterministic photon emission with desired charge states for photonic long-term memory.
Introduction
High-quality quantum light sources are highly desired to implement quantum key distribution and linear optical quantum computation for information processing. 1-5 Solid-state quantum emitters have attracted particular attention because of their addressability and integrability. Many solid-state single-photon sources are based on confined low-dimensional materials, including nitrogen vacancies in diamond and semiconductor quantum dots. [6][7][8][9] Recently, nonclassical quantum emission has been observed in semiconducting transition metal dichalcogenides (TMDCs), ascribed to LXs introduced by point defects or electronic perturbations in the monolayer TMDCs. 6,[10][11][12][13][14][15] By intentionally introducing local strains, these quantum emitters can be site-controlled and arrayed. [16][17][18] Quantum emitters in tungsten-based TMDC systems have been investigated in many studies. 6,12,19 For example, in WSe2, quantum emitters have been demonstrated from intervalley defect excitons associated with the hybridization of dark excitons by point-like strain perturbations or vacancies. [20][21][22] Since atomically thin layers of TMDCs can be stacked vertically using a simple mechanical exfoliation method, quantum emitters can be integrated into a range of functional heterostructure devices, which makes electrical charge control possible. 23,24 Recently, LXs have been studied in vertically assembled heterostructures using gated electrostatic doping. 25,26 The electric field has been used to tune the band energy of quantum emitters via the Stark effect, and to control the fine structure splitting (FSS) by modulating the exciton wave function. 27,28 Normally, neutral LXs at zero magnetic field possess FSS with orthogonal linear polarizations caused by electron-hole exchange interactions. In contrast, charged LXs do not have FSS, because the exchange interaction vanishes in the presence of excess electrons or holes. 29,30 Therefore, deterministic control of the charge state of LXs is essential to realize the desired exciton recombination.
Typically, the charge density of TMDCs can be controlled by gate-bias tuning. 29,31,32 In this way, a transverse p-n junction has been demonstrated by controlling the gate voltage. However, a metal gate would result in an inhomogeneous charge distribution, and direct contact between the metal and the monolayer TMDC would also lead to optical quenching. When the gate voltage is removed, the charge density cannot be maintained. To overcome these issues, integrating ferroelectric films (FE) with TMDCs provides a strategy for controlling the charge of LXs, since the spontaneous polarization of an FE can be reversed with external stimuli and the ferroelectric polarization can be retained for a long time. [33][34][35] In addition, the TMDC/FE heterostructure largely eliminates interface problems introduced by electrode contacts. 36,37 By using a scanning probe technique, ferroelectric domains with good retention can be formed beneath the TMDCs in a non-volatile manner, allowing the design of devices not limited by physical source, drain and gate electrodes. 36 The integrated heterostructures could also provide a way for the seamless integration of data processing and storage. 38
In this work, a 10% Zn-doped BFO ferroelectric thin film and a monolayer WSe2 were used to build a gate-free heterostructure, WSe2/BFO, with which the carrier doping of WSe2 can be controlled by the ferroelectric polarization of BFO. We demonstrate that the carrier doping of WSe2 in the downward polarization (Pdown) region is p-type, while in the upward polarization (Pup) region it is n-type at room temperature. At low temperature without magnetic field, the photoluminescence (PL) peaks of LXs are mostly doublets at the domain wall, while the singlets are mainly located in the Pdown and Pup regions. The lack of FSS is a feature of charged LXs, 29,30 while neutral LXs are more likely to be produced at the domain wall. The presence of charged and neutral LXs is further confirmed by magneto-optical measurements. A high degree of circular polarization of the PL, about 90%, from the quantum emitters is observed under a high magnetic field. Using the electrical polarization of BFO to control the single charges of LXs provides a method for achieving quantum emitters with desired charge states in a non-volatile way.
Scanning probe microscopy
A schematic plot of a WSe2/BFO heterostructure is shown in Fig. 1(a), and the fabrication process is summarized in the Methods section. The BFO film was grown on 20 nm strontium ruthenate (SRO) buffered (001) strontium titanate (STO) substrates. Mechanical exfoliation and dry transfer were used to create the van der Waals interface of WSe2/BFO. Optical microscopic images of the monolayer WSe2 before and after the transfer are shown in the Supplementary Information (Figure S1). The atomic force microscopy (AFM) image is shown in Fig. 1(b). The orange line in Fig. 1(e) shows that the thickness of WSe2 is about 1 nm, indicating a monolayer, which is also consistent with Raman and PL results at room temperature (as presented in Supplementary Figure S2). The ferroelectric polarization of BFO can be reversed by applying a voltage. 34 The piezoelectric force microscopy (PFM) image of the WSe2/BFO heterostructure is presented in Fig. 1(c). The Zn-doped BFO film exhibits distinct downward (Pdown) self-poling in the white box region and a well-defined polarization-voltage (P-V) loop with a large remanent polarization of ~60 μC cm-2 (as shown in Fig. 1(d)).
In order to obtain an upward polarization (Pup) region, we applied a voltage of -15 V to the metal probe to reverse the downward polarization of the BFO film. As shown by the green line in Fig. 1(e), the out-of-plane PFM phase of BFO differs by about 180° between the Pup and Pdown regions, confirming that the polarization direction can be effectively switched by the external electric field. The upper surface of the BFO is rich in positive charges in the Pup region, and the opposite is true in the Pdown region. The junction between Pdown and Pup is the domain wall, labelled Pdw. The domain wall is at the boundary of positive and negative charges, so it is possible to form a neutral environment there. The geometry of the monolayer WSe2 in the PFM image matches that of the AFM image, indicating that the monolayer WSe2 was not damaged during the voltage-induced polarization reversal.
Photoluminescence spectra at zero magnetic field
To elucidate the role of the ferroelectric nature of BFO on the charge density of monolayer WSe2, temperature-dependent PL measurements have been performed from 35 K to 273 K, as shown in Fig. 2(a). Because of the large binding energy and direct band gap of monolayer TMDCs, direct band recombination with different types of carrier doping can be resolved by PL spectroscopy even at room temperature. 39,40 Different from the intrinsic exciton (Xin), the localized exciton is quickly quenched as the temperature rises. The emission at 1.73 eV at 35 K corresponds to the neutral exciton X0. The energy difference between the neutral and charged excitons allows us to determine the carrier doping type of the device. In the Pup region, two different trion states of the negatively charged exciton (Xin-) are located at 1.688 eV and 1.696 eV at 35 K, which correspond to the triplet trion (XinT-) and the singlet trion (XinS-). In the Pdown region, by contrast, the positively charged exciton Xin+ is a singlet state. [41][42][43][44] The emission of the negatively charged intrinsic exciton Xin- in the Pup region is red-shifted by about 10 meV relative to Xin+ in the Pdown region at low temperature. The PL from the Pup region also undergoes a red shift of about 15 meV at 273 K, which is consistent with previous reports on PL with metal electrostatic gating. [41][42][43][44] This demonstrates that the direct
Magneto photoluminescence spectra measurements
To further investigate the magnetic optical properties of LXs, magneto-PL measurements were performed in the three regions Pdown, Pup, and Pdw. Fig. 3 shows the magnetic response of the labeled PL peaks of LXs in Fig. 2(b-d). The typical X-shaped dispersion can be observed in all three regions. The FSSs at zero magnetic field of the doublets in all three regions are approximately 700-800 μeV, which are consistent with previously reported values. 52,53 The emission of the LXs at zero magnetic field is linearly polarized. This is because the valley and spin degrees of freedom of monolayer WSe2 are locked; the eigenstates at zero magnetic field are therefore a linear superposition of the left-hand circularly polarized excitonic state |K˃ and the right-hand circularly polarized excitonic state |K'˃, with equal weights. 54 As the magnetic field increases, the degeneracy is gradually lifted.
|ψL˃ = NL [ δ|K˃ − (Δ + gμBB)|K'˃ ]   (1)
|ψU˃ = NU [ δ|K˃ + (Δ − gμBB)|K'˃ ]   (2)
EL = −Δ/2,  EU = +Δ/2   (3), (4)
where L refers to the low energy peak, U denotes the high energy peak, μB is the Bohr magneton, g represents the Landé factor, δ denotes the FSS at zero field, B is the applied magnetic field, and NL and NU are magnetic field dependent normalization constants. The energy splitting (Δ) between the high energy peak and the low energy peak as a function of applied magnetic field is shown in Fig. 3(d-f). For doublets, we extract the exciton g-factor by using the expression
Δ(B) = √(δ² + (gμBB)²)   (5)
In contrast to the behavior of the doublets at low magnetic fields, in which the energy splitting increases quadratically with the magnetic field, the energy splitting of the singlets in Pdown and Pup is a linear function of magnetic field due to the absence of zero field FSS. The g-factors of singlets (peak d2, d3
Unlike the PL peaks of LXs in Fig. 3 with distinct Zeeman splitting, we also found several PL peaks whose high energy branch component is strongly suppressed over the entire magnetic field range. As displayed in Fig. 4(a-c), the PL peak shifts to lower energy as the magnetic field increases, while the high energy branch component is unresolved. Fig. 4(d-f) illustrate the variation of the energy with increasing magnetic field. For peak dw4 in the Pdw region, the variation of energy with magnetic field is parabolic, whereas those of peak d5 in Pdown and peak u4 in Pup are straight lines. The solid line represents the fitting by eqn (5), from which we can extract the g-factor and the zero field FSS. The g-factors of peak d5, peak dw4, and peak u4 are 8.76, 10.60, and 7.65, respectively. The zero field FSS values cannot be extracted for peak d5 and peak u4 in the Pdown and Pup regions. The lack of zero field FSS is the hallmark of charged LXs. This matches well with the results we discussed earlier, namely that charged LXs were generated in the Pdown and Pup regions via the charge control of WSe2 by the ferroelectric polarization of BFO. On the other hand, for peak dw4 in the Pdw region, a zero field FSS of 497.23 μeV can be obtained, which is a little smaller than the reported value for WSe2. 10,51 The absence of the high energy branch in the magnetic spectrum can be explained in two ways. One is thermal relaxation, through which carriers preferentially occupy the lower energy levels. 6,26,55,56 Second, strong asymmetry might leave the high energy branch with a small optical oscillator strength. 10,54 In our experiment, the strong asymmetry of the confining potential might be induced by the unevenness of the substrate or wrinkling during the transfer process.
Polarization-resolved magneto photoluminescence spectra measurements
According to eqn (1)-(4), the circular polarization degree of LXs increases with increasing magnetic field. Fig. 5(a and b) depict the circularly polarized PL spectra of peak u4 under linearly polarized excitation. The PL intensity of the σ− emission is significantly higher than that of σ+ at positive magnetic field, while the σ+ emission is much stronger at negative magnetic field. Fig. 5(c) illustrates the degree of circular polarization extracted from the integrated intensities of the PL peaks. The degree of circular polarization is defined as ρ = (Iσ+ − Iσ−)/(Iσ+ + Iσ−), where Iσ+ and Iσ− are the integrated intensities of the σ+ and σ− polarized components.
Device fabrication
To fabricate the WSe2/BFO heterojunction, the Bi1.3Fe0.9Zn0.1O3 films were first grown on ∼20 nm strontium ruthenate (SRO)-buffered (001) strontium titanate (STO) substrates with a laser molecular beam epitaxy system. The WSe2 monolayer films on scotch tape were prepared by the mechanical exfoliation method.
The monolayer was then transferred onto the BFO film using PDMS as a transfer medium, resulting in a heterostructure. AFM and PFM characterization was done with a commercial AFM system (Asylum Research, MFP-3D). Ti/Pt coated silicon probes with a tip radius of about 28 nm were used for collecting and recording the PFM images.
PL measurement
The samples were placed on a three-dimensional piezoelectric stage in a superconducting magnet cryostat, which can provide a vertical magnetic field of ±9 T and a low temperature of 4.2 K. All PL measurements were performed using a confocal microscope system. An objective lens with a large numerical aperture of 0.82 was used to excite the sample and collect the PL light. The sample was excited with a continuous-wave 532 nm laser. The PL signal was coupled into a single-mode fiber and acquired through a grating spectrometer with a liquid nitrogen-cooled charge-coupled device.
Conclusion
We
Conflicts of interest
The authors declare no competing interests.
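As a minimal, self-contained sketch of the Zeeman-fit procedure used in the magneto-PL analysis above (extracting a g-factor and zero-field FSS from the splitting Δ(B) = √(δ² + (gμBB)²)), the code below fits that expression to a set of placeholder splittings; the field values and splittings are hypothetical stand-ins rather than the measured data, and scipy's curve_fit is assumed as the fitting tool.

import numpy as np
from scipy.optimize import curve_fit

MU_B = 57.88  # Bohr magneton in micro-eV per tesla

def splitting(B, g, delta):
    """Zeeman splitting of an exciton doublet with zero-field FSS delta (micro-eV)."""
    return np.sqrt(delta**2 + (g * MU_B * B)**2)

# Hypothetical peak splittings (micro-eV) at several fields (T); placeholders only.
B = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 9.0])
dE = np.array([750.0, 930.0, 1330.0, 2325.0, 3385.0, 4465.0, 5005.0])

popt, _ = curve_fit(splitting, B, dE, p0=[8.0, 700.0])
g_fit, delta_fit = popt
print(f"g-factor ~ {g_fit:.2f}, zero-field FSS ~ {delta_fit:.0f} micro-eV")

# For a charged (singlet) line, delta -> 0 and the splitting is simply linear in B,
# so |g| follows directly from the slope of dE versus B.

For a singlet line the same fit degenerates to a straight line through the origin, so in that limit the slope alone fixes |g|, matching the linear field dependence reported for the Pdown and Pup peaks.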
2022-09-22T15:07:24.493Z
2022-09-30T00:00:00.000
{ "year": 2022, "sha1": "5fba6cfbe05414d1c229ef3e11be4f4ff741f846", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "22fa98f3f8ced51568065c5c8e8797b9de71dbc4", "s2fieldsofstudy": [ "Physics", "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
55710444
pes2o/s2orc
v3-fos-license
Self-Strengthening Movement of Late Qing China: an Intermediate Reform Doomed to Failure
Despite a strong economy, including the world's highest gross GDP and a self-sufficient feudal economic system, the late Qing Empire fell behind the world trend with its isolationist trade policies. As the Western world caught up technologically, economically, and politically, the former biggest economy suffered consecutive losses in wars. In order to preserve the feudal regime, an initiative of reform, termed the Self-Strengthening Movement, was grandly carried out. However, without true support from the supreme power on the one hand, and without the support of the populace on the other, the Movement was an intermediate reform in an attempt to preserve the royal system and forestall its continued decline. In policy, the reforms envisioned Western-style modernization without adjusting the political order, yet the entrenched conservatism of the Qing Imperial Court proved to be the decisive hindering factor in the failure of the Movement.
However, neither gross GDP nor GDP ranking in the world could provide a comprehensive reflection of the level of economic development or the international standing of a state. As the Western world caught up technologically, economically, and politically, the ancient world order that China had basked in for millennia was slowly collapsing, replaced by a new, Western-led imperialist order. The industrial revolution in Europe yielded great technological advances for the West, and resulted in more effective systems of communication, transportation, and war-making, and a greater thirst for resources. As Western nations and westernizing nations, including Japan, began to pursue more zealously the resources and material riches of East Asia, China found itself at the center of colonial desire. The West, eager to enter the lucrative Chinese market, petitioned China for an opportunity to engage in mutual trade. However, as China denied the West this opportunity, tensions built up, culminating in the Opium Wars, where the technological disparity between China and the West was unmistakably exposed. Since 1840, the former biggest economy had suffered consecutive losses in wars. What the Qing Empire gradually lost was not only its territory but also its international status. In any case, late Qing China was by no means a strong entity, but a weak, semi-colonial country that frequently ceded territory and paid indemnities.
Slowly, as the façade of Chinese superiority began to crumble away, China found itself out of touch with the modern world. It no longer knew how to interact with the outside world, nor did it comprehend its position within it.
This new position of China in the world elicited a concerned response from the Qing court. Certain court factions denied the existence of Chinese backwardness, instead resorting to traditionalism and conservative Chinese thought as methods of preserving Qing control over China. However, other court factions saw reform as essential to maintaining China's control over its own affairs. Liberal-minded court factions unanimously agreed that reform was crucial in preventing the collapse of the Qing polity; the nature of this reform, however, was widely disputed. Some saw reform as a mere introduction of Western technologies and their application to the Chinese military in the form of guns, artillery, steel naval ships, and modern infantries to strengthen China's military force and fend off foreign imperialist interests. Others envisioned a grander plan of introducing Western industrial practices and instituting many Western material characteristics, such as railways, postage systems, or roads, in Chinese society. A few radicals even pushed for modern westernization of the education system away from traditional Confucian studies and an abandonment of the 2000-year-old absolute monarchical system in favour of a constitutional monarchy. However, dissent, reactionary forces, and the lack of an organized initiative prevented the grand plans of reform from ever becoming reality. The failure of this initiative, termed the Self-Strengthening Movement and the Hundred Days' Reform, ultimately dictated the future of the Qing as a state and the fate of China as a nation.
Historiography
A variety of historical narratives exist critically analyzing the Self-Strengthening Movement and its effects. Certain perspectives regard disunity amongst the reformers and fierce conservative opposition as the main factors hampering the success of the movement. Others take into account traditional Chinese statecraft and philosophies when analyzing the demise of the reforms. In a lesser-known epistemological analysis of the movement, doubt has been cast on the very nature of the movement and on the influence of this view on later appraisals of the movement and its success. Still other theories see the Qing as doomed to fail and the Self-Strengthening Movement as merely another manifestation of its corruption and institutional inefficiencies.
Scholars such as Li Chien Nung, Samuel Chu, and Benjamin Elman are representative of those advocating historiographical theories which lambast conservative opposition, disunited reform policies, and institutional corruption in causing both the failure of the Self-Strengthening Movement and the demise of the Qing. According to Chu (1965), for instance, "China suffered from a lack of unified leadership working toward reform and modernization…[whereas] the vast majority of the ruling official-gentry class was conservative in outlook and regarded innovations as possible threats to the basis upon which its privileged position in Chinese society was founded." Additionally, they champion the view that the disorderly, regionalist, and factional structure of the Qing court and administration, as well as corrupt officials, made it hard to channel resources for both reform and military efforts against foreign aggressors. Further, Elman (2004) states that "lack of leadership, vested interests, (and) lack of funding contributed to the inadequacies of the late Qing state." Finally, they hold that the indecisive and indeterminate plans and policy of the court rendered the Qing state and China vulnerable to foreign military advances. Li, Chu, and Elman believe that a lack of firm leadership in both reform activities and the normal functions of state was the most characteristic feature of this period of Chinese history.
Other academics, such as Michael Gasster (1972) and Kwang-Ching Liu, however, have claimed that the failure of the Self-Strengthening Movement was due to intrinsic flaws in the philosophy of the movement. They do not downplay the flaws of the Qing polity, nor do they reject the lack of consistency in the reform movement as a factor. However, they see the reforms as a defense mechanism, a method for preserving the Chinese world order that had existed for over two millennia against the new encroaching imperialism of the West. Gasster notes, "All that they did [Western education, technology, diplomacy, etc.], however, they considered means of defense. Each step had to be justified on the grounds that it would help to keep the foreigners out; at the same time, each experiment had to be guaranteed not to impinge on the essentials of Chinese life." Yet the reforms, if too radical, would never have gained any momentum from the Qing establishment.
Luke S. K. Kwong (1984) argues that the Self-Strengthening Movement itself never actually failed. Kwong believes that since the Movement was appraised as a military movement, later generations have seen the reforms as a failure due to the military losses near the end of the Dynasty. However, if the reforms are seen merely as an adaptive strategy to reform the nation, it can be argued that ideas and technologies from the West were imported and spread throughout China through trade, the various academies set up by the reformists, and the students sent abroad to study Western academic subjects. Thus, a flawed interpretation of the movement by academics created the illusion of the failure of the Self-Strengthening Movement.
From the historiographical analysis, two main groups emerge: those focusing on the political struggles of the Qing imperial administration and those focusing on the philosophy behind the reforms. Both narratives are based on an underlying question: was the Qing destined to be vanquished by a new world order set up by imperialist-minded Western states, or did the Qing's internal issues prove to be conducive to its demise?
The Nature of the Reform Movement
By the mid-19th century, China began to see its millennia-old illusions of superiority slowly erode away. Its world order, consisting of the tribute system, the Mandate of Heaven, and the isolationist trade policies, began to seem irrelevant and obsolete. As the West began to exert its force and influence over the Qing Empire, scholars and officials within the Empire would see a need to emulate the technologies, organizational hierarchies, and cultural traditions of the Occident in an attempt to prevent China from becoming subjugated by the will of Western imperialism. This process of endeavoured reform and emulation became known as the "Self-Strengthening Movement".
In terms of scope, the reforms envisioned Western-style modernization in a variety of fields: military and industrial technology, intellectual and academic thought, reorganization of the military organizational and diplomatic systems, economic restructuring, and more. Reforms as radical as adjusting the political order were also proposed. However, in reality, the reforms were mainly limited to material matters, such as the improvement of weaponry and transportation infrastructure, with few instances of actual intellectual and institutional reform. It was thought that modern materials such as steamships, guns, and cannons would help provide the Qing with the physical force necessary to repel foreign troops. The Qing establishment believed there was no need for any alterations to be made to social and academic institutions; in fact, once any reform activity began to threaten a traditional Chinese practice or philosophy, that activity was promptly terminated, treated as an attempt gone wrong.
Yet perhaps the scope of the reform movement may not have been as limited as previously thought. Given the massive conservative opposition to the reforms, it may have been good judgement and prudence that restricted reform activities to simply technological affairs. Considering the strong repudiation of Western-style schools and education by the court in the 1860s and 1870s, pursuing more wide-ranging reforms in education, politics, the military, and the economy with a comparable zeal may have simply been foolish. Despite the Qing government's choice to limit the scope of its reforms, the fact that such reforms were even realized in the first place implies that China had begun to take its first steps towards modernizing and adapting to the new world order. Thus, the efforts of reformers such as Li Hung-chang, Tseng Kuo-fan, and Tso Tsung-t'ang in initiating reform activities such as the Fuchow arsenal, the Fukien Shipyard, and the Peiyang Navy, rather than being in vain, may have helped sow the seeds for Western-style modernization in the Republican period after the demise of the Qing in 1911.
A quick analysis of events lends some comparative insight into the efforts and achievements of the Chinese Self-Strengthening Movement. The Movement itself was a haphazard attempt to preserve the Qing Empire and forestall its continued decline. Through this, China could preserve its traditions and institutions by fending off Western encroachment into its territories, politics, and economics. Its sole purpose, for most scholars and officials, was to protect and shield the old systems of tribute, isolationism, and Sino-centrism in a rapidly evolving and changing world dictated not on China's terms but by the Occidental powers (Gasster, 1972).
The Self-Strengthening Movement thus neglected to consider the possibility that China had to change with the world, rather than the world with China.
Ethnic Dynamics and Effects on Governing Ideologies
The Qing dynasty was one of two dynasties in Chinese imperial history ruled by a non-Han Chinese people. The Aisin-Gioro clan, the imperial family of the Qing, and its Manchu nobility constituted a minority within the Chinese population. Consequently, in order to justify its rule over a large Han majority population, the Qing adhered to strict Confucian policies and methods of governance as a measure of legitimacy, a method aimed at gaining support from the indigenous Han. This tactic proved successful in co-opting the Han gentry and contributing to a continuation of Chinese-style institutions and ways of thought.
After the Manchu entered Central China and established the Qing regime, most of traditional Chinese culture was adopted by the Aisin-Gioro clan as a basis for its administration. The contributions of the Manchu were especially manifest in safeguarding "the big unification" that all previous dynasties had maintained and in promoting such an identity among all ethnic groups, including the Han. While advocating harmony between Han and Manchu, early Qing emperors, including Emperor Kangxi, Emperor Yongzheng and Emperor Qianlong, showed initiative in absorbing traditional Chinese culture and effectively resolved the resistance of the Han people towards the Qing regime. The legal status of the Manchu ruling class was finally recognized by the Han intellectuals. Gradually, the Han showed great obedience and submitted themselves to the Manchu ruling class.
Traditional Chinese conservative thought, embodied in Confucian principles, values Chinese ways of life over foreign ones. It regards the customs of foreign peoples as inherently inferior and commonly ascribes to them the moniker of being "barbaric." This ethnocentric view of the world and of China's relation to it manifested itself in many Chinese imperial traditions and institutions, such as the tribute system and the imperial examination system, traditions in which the Han took great pride. The Manchu dynasty used such pride to its advantage, and in propagating a sense of Chinese greatness and superiority in tradition, established its legitimacy as a protector of China's heritage and supremacy. However, as the West rose to prominence and began to make its mark on the world, this mindset would make it problematic and inherently illogical for China to adapt to a world dominated by Western imperialism.
Reasons for Opposition
The Self-Strengthening Movement itself recognized the weaknesses of China's technological and intellectual capital. As evidenced in a memorial to the throne from Li Hung-chang in 1872, reformists believed that technological disparities between China and the West were a major vulnerability in China's dealings with the West. Reformists saw value not only in bringing Western-style military hardware to China, but also in pursuing educational reform and Western studies. As evidenced by a letter from the Chinese Minister to England and France, Kuo Sung-tao, to Li Hung-chang, an adoption of foreign studies was crucial to China's attempt at Westernizing. Yet the deeply entrenched conservative atmosphere of the Manchu court made such thought sacrilege, a betrayal of traditional Chinese philosophies and technologies. Conservative court officials even believed that Western learning would produce within Chinese scholars an affinity for the West, and result in disastrous effects for the Qing. In a memorial to the throne submitted by Wo-jen, Grand Secretary and head of the Hanlin Academy (Imperial Institute), Western studies and pedagogues were seen as useless and ultimately debilitating to Qing power.
Effects of Conservatism on the Self-Strengthening Movement
The Self-Strengthening Movement was never an official policy or directive of the Qing court; rather, it was a loose collection of activities and objectives pursued by reform-minded officials. Without a unified movement under which reformists could rally or central court backing to fortify the image of the Movement, the reformists and the Self-Strengthening Movement were vulnerable to critique and a barrage of opposition from conservative court officials. The Movement's supporters were outnumbered and overcome by its opponents, and such a situation would hinder the Movement's momentum.
Xenophobia coupled with restrictions on Western learning also contributed to a misunderstanding of, and subsequently an ignorance of, the West by court officials. Court suspicion of reforms and reformists was a key result of such a mindset, one that would persuade the court to close down arsenals, schools, and factories and dissuade the populace from participating in activities associated with the West.
A conservative body of scholar-officials and their support from the Imperial court would prove to be the main hindrance to the success of the Self-Strengthening Movement. Their writings, arguments, and actions would ultimately undermine the activities of reformists during the period from 1863 to 1895.
A Collection of Dispersed Regional Activities - Lack of a Unified Vision
When one mentions the 1863-1895 reform movement, names such as Li Hung-chang, Tseng Kuo-fan, and Tso Tsung-t'ang emerge as leaders of the Self-Strengthening Movement. Indeed, such individuals were prominent proponents of reform activities and did produce results in their respective efforts. However, while they may have conversed with each other about the Movement and even collaborated on reform activities on certain occasions, they and other reformists never created an official reform policy complete with a list of guidelines and goals that could be applied to the whole state, nor were they ever unified in their attempts at reform. Rather, their endeavours were individual and were never part of any grand vision for a reconstructed China.
The absence of a unified and coordinated reform policy was suddenly revealed to the Qing court and all its officials when China's reforms were put to the test in the First Sino-Japanese War. The Peiyang fleet, a product of China's self-strengthening era through the mid to late 1800s and the flagship fleet of the Chinese Navy, was sent to confront the newly Westernized Japanese Imperial Navy in battle. Yet a lack of cooperation and cohesion within the Qing in dealing with the war effort led to China's defeat and the loss of the entirety of the Peiyang fleet, which was much more powerful than its Japanese rival in both size and equipment yet much inferior in terms of administration, through either destruction or surrender to the Japanese forces.
Dispersed reform-minded undertakings could not produce the wide-sweeping and broad changes to the Qing system that reformists had intended; they could only create results which were transient in nature with superficial effects. Thus, the Qing gradually saw its efforts come to nothing and its vision of a strong and powerful China slowly disintegrate into national humiliation as wars were lost and humiliating treaties were signed.
Corruption
Corruption was perhaps the most characteristic attribute of the waning days of the Qing Dynasty. Many members of the ruling scholar-gentry class often failed to carry out their duties properly, frequently avoided reporting truthfully to the court about regional issues, and evaded contributing financially to national war efforts and policy initiatives. Officials and eunuchs bought and sold positions and promotions, accepted bribes, and frequently pocketed sums of money intended for public projects. In many cases, they prevented the allocation of public funds for genuine utilitarian purposes, believing such expenses to be profligate.
Corruption on the part of officials, eunuchs, and the imperial family would leave the Qing financially debilitated and unable to appropriate funds for public affairs. As capital became increasingly concentrated in private hands, the Qing state would suffer from a lack of adequate finances in administering its daily duties and interacting with the West.
Cixi and the Overthrow of Patriarchy
At all times and all over the world, patriarchy has held sway in human society, especially in the political arena, where male politicians make up an overwhelming majority. Women, by contrast, have produced only a handful of great politicians. It is important to note, however, that patriarchy was not necessarily equal to absolute dominion over women at every political level. In China, the power of morality was never inferior to that of patriarchy. There is no lack of precedents of women playing a dominant part in the patriarchal society. Although women were subject to wifely submission and virtue, they were owed filial piety by their offspring under the ethical codes, and had enormous authority over their sons. Filial piety and loyalty are key values in Chinese traditional morality, and they resulted in sons' absolute obedience to their mothers. Therefore, latent challenges from women could even overthrow patriarchy. From empress dowagers to ordinary women, women tried to seize power and establish themselves at court or at home through their moral authority, exploiting loopholes in the patriarchal structure. In palace politics, such ambition manifested itself best in the autocracy of empresses or empress dowagers.
From the succession of Emperor Tongzhi in 1862 to the demise of the Qing Empire in 1911, power at the Qing court was held in female hands. During the twilight and eventful years of the late Qing Dynasty, Manchu women, from Empress Dowager Cixi to Empress Dowager Longyu, seized authority and exerted full control of political and military affairs, making the emperor a mere figurehead. The "ancestral rule" of the Qing Dynasty that women were to have no access to politics became nothing but a dead letter.

After the death of the Eastern Empress Dowager Ci'an, the imperial court, supposedly under the tutelage of the Kuang-Hsü Emperor, gradually shifted into the hands of the Western Empress Dowager Cixi. From then on, court policy and authority would be subject to the whims and fancies of Cixi and her entourage of eunuchs. The court was transformed from an institution of scholar-officials and the Emperor who competently administered the empire into a den of corruption that fuelled the personal and often conflicting objectives of both Cixi and the eunuchs.

A culture of corruption and misconduct rocked the Qing state to the core in its twilight years. Officials were no longer devoted to proper management of the Empire, bribery and corruption were widespread, and the monopolization of power by Cixi and the eunuchs spelt out a further deterioration of the political climate at the hands of self-interest.

Conclusion

The Self-Strengthening Movement in the late Qing Dynasty, born of Chinese ethnocentrism, would ultimately be marred by such thought itself. Its goals of bringing Western technology, industry, and education to China would be seen as unnecessary and harmful to traditional Chinese society and culture. Its activities would be terminated at once if traditional Chinese practice were threatened. Its proponents would be denounced as deluded and misled. Conservatism and traditionalism in the Qing were caused in large part by the instability and numerical disadvantage of the Manchu court vis-à-vis the Han majority. The primary cause of the reform's failure therefore lay in its intermediate position: it lacked firm support from the supreme power structure while contending with a strong traditional Chinese culture.

Meanwhile, no powerful individual would come to champion the Movement in its undertakings. No grand councillor, grand secretary, eunuch, imperial prince, or member of the royal family, not even the Emperor himself, would show a strong commitment to the ideals and goals envisioned by the Self-Strengthening Movement. The Self-Strengthening Movement was thus limited to purely regional activities, none able to bring about substantial long-term benefits for the Qing.

Corruption and dysfunction among scholar-officials, institutional disorder, and the monopolization of power in the hands of the Empress Dowager Cixi and the eunuchs would also spell trouble for the reform. The Empire was prevented from functioning in a utilitarian manner. Instead, it was bogged down by self-interest and greed, its administrative tasks neglected, resulting in the haphazard and disjointed implementation of reform activities.
The Self-Strengthening Movement and its supporters envisioned a reborn China, a China that could interact with the West on its own terms. Yet rather than envisioning a China that would adapt to a new world order as one among many sovereign states, the reformists envisioned a China that would return to its former position in the world as a hegemon, with all other foreign entities mere vassals and "barbarians". Through the emulation of the West and the adoption of Western ways, the Qing hoped to one day take up this role again. Such a hope never manifested itself. The Qing Empire would succumb to foreign conflicts, internal rebellions, famines, Han nationalism, and foreign spheres of interest infringing on its sovereignty. Whether or not the Self-Strengthening Movement could ever have saved the Qing as an entity, the Qing's inability to adapt and assimilate itself into a new world order was both a cause of the Dynasty's demise and a result of its conservative mindset. The Self-Strengthening Movement was certainly a victim of such a mindset.

After the collapse of the Qing in 1911, China would transition through many shifts in political and economic philosophy in an attempt to become a stronger state, from Confucianism to Republicanism and from Capitalism to Communism. Nearly a hundred years after the end of the Self-Strengthening Movement, China would begin to rise again as a major economic and political power. Such would be the result of another radical change in mindset, although this time it would stem from China's willingness to set aside its communist dogma and adopt market-style economic policies. While China's past and culture are still deeply embedded within its national consciousness, its society and technological knowledge have transformed and facilitated its smooth integration into the global age.
Two-Stage Clustering of Human Preferences for Action Prediction in Assembly Tasks To effectively assist human workers in assembly tasks a robot must proactively offer support by inferring their preferences in sequencing the task actions. Previous work has focused on learning the dominant preferences of human workers for simple tasks largely based on their intended goal. However, people may have preferences at different resolutions: they may share the same high-level preference for the order of the sub-tasks but differ in the sequence of individual actions. We propose a two-stage approach for learning and inferring the preferences of human operators based on the sequence of sub-tasks and actions. We conduct an IKEA assembly study and demonstrate how our approach is able to learn the dominant preferences in a complex task. We show that our approach improves the prediction of human actions through cross-validation. Lastly, we show that our two-stage approach improves the efficiency of task execution in an online experiment, and demonstrate its applicability in a real-world robot-assisted IKEA assembly. I. INTRODUCTION There are many assembly, service, repair, installation, and construction applications where many different workers may need to perform the same task. For example, consider the task of replacing a bearing on a machine tool located at the shop floor. Even though these tasks have fixed guidelines, there is some variability in the way each worker performs a task because of individual preferences. Robots can help in improving the efficiency of such tasks by adapting to the individualized preferences of human workers and proactively supporting them in their task. While the space of possible preferences can be very large, previous work has shown that people can be grouped to a few "dominant" preferences: In human-agent teams, users cluster to a set of "reasonable" behaviors, where people in the same cluster have similar beliefs [1]. Similar groupings exist in game playing [2]- [5] and education [6]- [9]. What makes the problem particularly challenging in complex tasks, is that these dominant preferences exist in different resolutions. In an IKEA assembly study that we use as a proof-of-concept throughout the paper, we observed that some participants preferred to assemble all shelves in a row, while others alternated between assembling the shelves and assembling the boards (Fig. 1). Moreover, within the first group, participants also differed in how they connected the shelves to the boards: some connected all shelves to boards on just one side first (as in Fig. 1(a)), while others preferred to connect each shelf to boards on both sides. There are two scenarios where learning the dominant preferences can help. First, a new worker may be asked to perform the same task that we have collected demonstrations for. A robotic assistant would need to infer the preferences of the new worker at different resolutions by associating them with previously learned dominant preferences, and proactively assist them by anticipating their next action. Second, a worker that demonstrated task A may be assigned to work on a slightly different task B, e.g., assemble an IKEA bookcase of different shape. While the task has changed, e.g., the number and type of shelves, some action sequences may be shared between A and B, e.g., connecting shelves to the boards. The robot should be able to use its knowledge about the worker's preference when assisting them in task B. 
Both these scenarios require learning preferences at different levels of abstraction. We propose abstracting action sequences to sequences of events, which are transferrable units shared among different tasks and workers. A highlevel preference is captured by a sequence of events, while a lower-level preference captures how each event is executed. Using insights from clickstream analysis [10], we propose clustering users on two levels, over events and over actions within each event, as opposed to clustering based on the sequences of individual actions as in previous work [11]. We show the applicability of our method on the first scenario, where a new worker executes the same task that we have demonstrations for. If the task has n dominant preferences at all levels, then we need to observe n × m workers. n is a number usually between three to five [11], while larger values of m allow for more robust inference, since there may be variability in the exact sequence of actions of the workers with the same preference. We conduct a user study where 20 users assemble an IKEA bookcase, and we show that we can learn the high and low-level preferences of the users, which enables accurate prediction for a new user performing the same task. Through an online assembly experiment we show that assisting users this way improves task efficiency. We finally show the applicability of the system in a real-world robot-assisted IKEA assembly demo. II. RELATED WORK Modeling human preferences. User preferences for collaborative tasks are often measured through surveys [12], [13]. Preferences can then be modelled as a feed-forward neural network that maps task metrics to the survey responses of users [14]. Human preferences for robot actions can also be measured from EEG signals [15], or from a short window of human arm motion [16]- [18] or gaze pattern [19]. Past work has also focused on incorporating user preferences in task assignment and scheduling, where the user preferences are included as a constraint [20], [21] or in the objective function [22] of the scheduling problem. For task planning, like in our approach, preference is considered as the subset of action sequences from the set of multiple sequences that solve the same task [23]. Related work includes learning user preferences during an assembly task from demonstrations [24], [25], via interactive reinforcement learning [23], [26] or active reward learning [27]- [29], where previous demonstrations can be used as priors [30], [31]. Human feedback can also be used to directly modify the policy instead of the reward function [32]. Clustering dominant preferences. While each user can have a different preference, our goal is to cluster the users to a small set of dominant preferences. Such groups of similar users, also called personas, can be built manually from questionnaires [33] or through collaborative filtering [34], [35]. Other related work includes identifying different driver styles [36]- [38] using features from vehicle trajectories, human motion prototypes [39] for robot navigation and human preference stereotypes for human-robot interaction [40]. Most relevant to ours is prior work in identifying dominant user preferences from sequences of user actions in a surface refinishing task [11]. Users with similar action transition matrices were clustered using a hard Expectation Maximization (EM) algorithm to obtain the dominant clusters. 
However, the task was simple, and thus one-stage clustering of the transition matrices was able to capture the user preferences, which were largely encoded in the final position of the robot.

Clustering sequences. The problem of grouping users based on their preferred sequence of doing tasks is similar to the problem of clickstream analysis [10], [41]-[43]. A clickstream is a sequence of timestamped events generated by user actions (clicking or typing) on a webpage and hence is comparable to a sequence of actions. Prior work clusters clickstreams of multiple users based on their longest common sub-sequence [41] or frequency of sub-sequences [43]. To cluster users at different resolutions, prior work uses Levenshtein distance to form macro and micro preference clusters [10]. Recent work uses a similarity graph [42] where similarity between users is measured by comparing the sub-sequences of their clickstreams. Hierarchical clustering is used to partition the similarity graph into high-level clusters, which are then further partitioned based on features that were unused for the high-level clustering. We bring these insights from clickstream clustering to the robotics problem of clustering the action sequences of users in assembly tasks.

III. METHODOLOGY

The proposed method consists of two phases (see Fig. 2): (1) an offline training phase, which takes as input a set of user demonstrations of the entire assembly task and learns the dominant preference clusters at different resolutions, and (2) an online execution phase, where we estimate the probability of a new user belonging to one of the clusters based on their observed actions, and predict the next robot action.

Based on our observation that users prefer to perform actions that require the same parts in a row, we first convert each user demonstration into a sequence of such events. Thus, each event in a demonstration requires a specific set of parts to be supplied by the robot, i.e., a specific set of secondary actions (non-critical actions like supplying parts). The high-level preference of each user is thus the order in which they perform the events. Further, for each event, the users may have a different low-level preference for the order in which the set of parts should be supplied, i.e., the order in which the secondary actions must be performed. We learn the high- and low-level preferences in the offline phase by clustering users based on their sequence of events and sequence of secondary actions, respectively. Accordingly, in the online execution phase we first infer the high-level preference of a new user and then infer the low-level preference to determine the next secondary action to execute.

IV. OFFLINE TRAINING PHASE

We assume a set of demonstrated action sequences X, with one x ∈ X per user. Similar to prior work [44], we distinguish the actions A in the demonstrated sequences into two types: primary actions (a^p ∈ A^P), which are the task actions that must be performed by the user, and secondary actions (a^s ∈ A^S), which are the supporting actions that can be delegated to the robot. For instance, a primary action is connecting a shelf, while a secondary action is bringing the shelf to the user. Therefore, A = A^P ∪ A^S. In the training phase, each user demonstration x is some sequence of primary and secondary actions. For example, the demonstration [a^s_1, a^s_2, a^p_1, a^p_2, . . . , a^s_M, a^p_N] has M secondary and N primary actions.
We wish to model the online execution as a turn-taking model where, at each time step t, a set of secondary actions s_t is performed, followed by a primary action a^p_t. We choose this model because, at each time step t, we want the robot to predict the next primary action (a^p_{t+1}) of the user and proactively perform the set of secondary actions (s_{t+1}) that comes before it. Thus, we group consecutive secondary actions into a set of secondary actions s and insert a NOOP (no-operation) action between consecutive primary actions to obtain x = [s_1, a^p_1, s_2, a^p_2, . . . , s_N, a^p_N], where, in this example, s_1 = [a^s_1, a^s_2] is the set of secondary actions that must be executed before the primary action a^p_1, while s_2 = [NOOP] means that no other secondary action is required to be executed before a^p_2.

A. Converting User Demonstrations to Event Sequences

We first convert each user demonstration to a sequence of events. An event e is defined as a run of consecutive primary actions that require the same set of secondary actions. Thus, an event from time step t_a to t_b is the tuple e_{t_a:t_b} = (p, s), where p are the consecutive primary actions and s is the set of preceding secondary actions they share. Two events are equal if they share the same set of secondary actions {s_{t_a}, . . . , s_{t_b}}.

2) Clustering Secondary Action Sequences: To learn the low-level preferences for each event e, we cluster participants based on the sequence of secondary actions of the event to determine the dominant low-level clusters z_l ∈ Z^e_L. We use secondary actions since, from a robot's perspective, primary actions that require the same assistive response (i.e., preceding secondary actions) can be considered identical.

Fig. 3: (a) e_boards is the 1st event for user A; (b) e_boards is the 2nd event for users B and C.

3) Intuition: Consider the following example with two events e_boards and e_shelves (see Fig. 3), and three users with event sequences A: [e_boards, e_shelves], B: [e_shelves, e_boards], C: [e_shelves, e_boards]. User A prefers to assemble the boards first and then connect shelves to the boards, while users B and C prefer to connect the shelves before assembling the boards. The high-level clustering assigns users B and C to the same cluster, different from A. Here, the set of secondary actions needed to execute e_shelves for users B and C is different from that for user A. B and C need both the shelf and the board to be supplied, as it is their first event, while A only needs the shelves, as the boards were previously supplied for e_boards. Now assume that B performs e_shelves by connecting each shelf to the boards on only one side as in Fig. 3(b), while C connects each shelf to boards on both sides (not shown). In this case, even though B and C share the same set of secondary actions for event e_shelves, the sequence of the actions would be different. The robot will perform one NOOP after supplying a shelf to B as it waits for B to connect the shelf to the two boards on one side; whereas for C, the robot will perform three NOOPs after supplying the shelf as C connects that shelf to the two boards on both sides. Therefore, the low-level clustering will place B and C in different low-level clusters. In summary, the first-stage clustering enables learning the set of secondary actions required, while the second stage specifies the order in which these actions should be performed. On the other hand, one-stage clustering as in prior work [11] would assign A, B, and C to three different clusters, losing the information that B and C share the same set of secondary actions!
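To make the conversion concrete, the sketch below shows one possible implementation of the turn-taking grouping and the event extraction described above. It assumes a demonstration is given as a list of ("primary" | "secondary", label) pairs; the function names, the NOOP handling, and the event-merging rule are illustrative assumptions based on one reading of Sec. IV-A, not the authors' code.

```python
from typing import List, Tuple

NOOP = "NOOP"

def group_turns(demo: List[Tuple[str, str]]) -> List[Tuple[List[str], str]]:
    """Group a raw demonstration of (kind, label) pairs into turn-taking steps
    (s_t, a^p_t): consecutive secondary actions are collected into one set s_t,
    and NOOP marks steps whose primary action needed no new secondary action."""
    steps, pending = [], []
    for kind, label in demo:
        if kind == "secondary":
            pending.append(label)
        else:                            # a primary action closes the current step
            steps.append((pending or [NOOP], label))
            pending = []
    return steps

def to_events(steps: List[Tuple[List[str], str]]):
    """Merge consecutive steps into events e = (p, s): a step extends the current
    event if it needs no new parts (NOOP) or only part types already associated
    with the event; otherwise a new event starts."""
    events = []                          # each entry: [primary_actions, part_types]
    for s_t, a_p in steps:
        parts = {x for x in s_t if x != NOOP}
        if events and (not parts or parts <= events[-1][1]):
            events[-1][0].append(a_p)
        else:
            events.append([[a_p], set(parts)])
    return [(p, sorted(s)) for p, s in events]

# Hypothetical example: both shelf connections fall into a single event because
# they rely on the same supplied part type.
demo = [("secondary", "bring shelf"), ("primary", "connect shelf 1"),
        ("primary", "connect shelf 2"),
        ("secondary", "bring board"), ("primary", "connect board 1")]
print(to_events(group_turns(demo)))
```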
C. Clustering Method

For each stage, we apply hierarchical clustering [45], [46] using as distance metric the Levenshtein distance [47] (d_lev) used in clickstream analysis [10], [48], with a custom modification of the Levenshtein distance (d_mod). We consider the operators add and delete as in the Levenshtein distance, and instead of the substitute operator we use the operation shift_1. delete(i) removes the i-th element of a sequence. add(i) inserts an element into the i-th position of a sequence, given that it is empty. shift_1(i) shifts the i-th element to the neighbouring i+1 or i−1 position in the sequence, given that it is empty. Each operation has a cost of 1. The shift_1 operation allows us to consider sequences that only have two elements in swapped positions as closer than sequences that have two completely different elements in those positions. The number of clusters depends on a distance threshold. We generate clusters for increasing distance thresholds, and select the optimal distance based on the variance ratio criterion (VRC) [49] (also called the Calinski-Harabasz score), which is a common metric for distance-based clustering [50]. The VRC is the ratio of the between-cluster dispersion to the within-cluster dispersion. Therefore, for a high VRC the clusters are well separated from each other and the samples within each cluster are dense.

V. ONLINE EXECUTION PHASE

In the online execution phase we infer the high- and low-level preferences of new users as they are executing the task. At each time step t, as the user performs a primary action, the robot predicts the next secondary action. If the robot's prediction is incorrect, the user performs the desired secondary action according to their preference.

1) Inferring high-level preference: At each time step t, we observe the primary action of a new user and append it to the actions observed so far. We then convert the current sequence of actions x_{1:t} of the new user to a sequence of events x^e_{1:t} in the same way as in the offline phase. We use Bayesian inference to predict the high-level preference by computing the probability of observing the event sequence x^e_{1:t} under each high-level cluster z_h, estimated from the demonstrations assigned to z_h and normalised by the total number of users in z_h. We then determine the high-level preference as z*_h = arg max_{z_h ∈ Z_H} p(z_h | x^e_{1:t}). If there are two high-level clusters with the same maximum probability, we select one randomly.

2) Inferring low-level preference: Once we infer the high-level preference z*_h of the new user, we identify the most likely event sequence x^{e*} in that cluster. We assume that the new user follows that sequence x^{e*} to index the event ongoing at time step t+1, i.e., e_{t+1} (in a slight abuse of notation). Given the sequence of secondary actions s_{e_{t+1}} performed so far within the event e_{t+1}, we use Bayesian inference to infer the low-level preference z_l ∈ Z^{e_{t+1}}_L. We select the most likely low-level preference z*_l, identically to the high-level preference case. The robot can then perform the most likely secondary action s_{t+1} in z*_l to proactively assist the user. If the user accepts s_{t+1}, we append it to x_{1:t}. If the user rejects s_{t+1} and performs a different secondary action s′_{t+1} instead, we append s′_{t+1} to x_{1:t}.
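As a rough illustration of the online inference just described, the following sketch maintains a posterior over clusters from a partially observed sequence and proposes the next element of the MAP cluster's most common training sequence. The dictionary-of-sequences representation, the prefix-consistency likelihood, and the function names are simplifying assumptions rather than the authors' estimator; the same routine could be run once over event sequences (high level) and once over the secondary-action sequences of the current event (low level).

```python
from collections import Counter

def infer_cluster(observed, clusters):
    """Posterior over preference clusters given a partially observed sequence.
    observed : list of events (or of secondary actions) seen so far.
    clusters : dict mapping cluster id -> list of full training sequences.
    The likelihood of a cluster is taken as the fraction of its training
    sequences consistent with (i.e. starting with) the observation, and the
    prior as the fraction of all users assigned to the cluster. This is a
    simplified stand-in for the update in Sec. V."""
    def consistent(seq):
        return list(seq[:len(observed)]) == list(observed)

    total = sum(len(seqs) for seqs in clusters.values())
    post = {}
    for z, seqs in clusters.items():
        prior = len(seqs) / total
        likelihood = sum(consistent(s) for s in seqs) / len(seqs)
        post[z] = prior * likelihood
    norm = sum(post.values()) or 1.0            # guard against an all-zero posterior
    return {z: p / norm for z, p in post.items()}

def predict_next(observed, clusters):
    """MAP cluster, then the next element of its most common training sequence,
    used as the robot's prediction (ties are broken arbitrarily here)."""
    post = infer_cluster(observed, clusters)
    z_star = max(post, key=post.get)
    seq_star, _ = Counter(tuple(s) for s in clusters[z_star]).most_common(1)[0]
    return seq_star[len(observed)] if len(observed) < len(seq_star) else None
```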
VI. USER STUDY

We wish to show that the proposed method can effectively identify the dominant preferences of users in an IKEA bookcase assembly task, and use the found preferences to accurately predict the next secondary action of a new user.

A. Study Setup

We conducted a user study where subjects assembled an IKEA bookcase in a laboratory setting shown in Fig. 4 (left). We divided the space into a storage area, where all the parts were placed, and a workcell, where the user assembled the bookcase. We recruited 20 subjects, out of which 18 (M = 11, F = 7) successfully completed the study. We provided each subject with a labelled image of the bookshelf (Fig. 4, right) and demonstrated how the connections are made. Participants then practiced the connections for five minutes. We did not provide any instructions regarding the order of the assembly. We informed participants that they had to assemble the shelf as fast as they could, and asked them to plan their preferred sequence beforehand, to ensure that their plan was well thought out. The study was approved by the Institutional Review Board (IRB) of the University of Southern California. Participants were compensated with 10 USD and the task lasted about one hour.

Measurements: We recorded a video of the users assembling the shelf. We annotated all participants' actions in the video using the video annotation tool ELAN [51]. The annotation was done by two independent annotators who followed a common annotation guide.

B. Analysis of User Preferences

We considered bringing any part from the storage to the workcell as a secondary action and all connections in the assembly as primary actions. The bookcase had 4 types of parts: long boards, short boards, connectors, and shelves (17 parts in total), and 32 different connections. Thus, each user demonstration was a sequence of N = 32 time steps.

Event Sequences. We first visualize the sequence of events for each user (shown in Fig. 5). Users [0, 1, 4, 5, 11, 16] had the same event sequence: short and long board connections (shown in grey), connector and board connections (green), and shelf and board connections (yellow). Similarly, other groups of users like [12, 13, 15] and [3, 9, 10, 14, 17] also had the same event sequences.

Fig. 5: Event sequences. 'boards' refers to an event of connecting long and short boards, 'con' refers to an event of connecting assembled boards using connectors, and 'shelves' refers to an event of connecting shelves.

The assembly task is fairly complex; the 32 primary actions can be ordered in more than 24! ways. However, most users preferred to perform similar actions in a row: 14 users performed all long and short board connections in a row, and 7 users connected all connectors and all shelves in a row.

Dominant clusters. To find the high-level preferences, we cluster the event sequences using the modified Levenshtein distance metric (Sec. IV-C), which results in the hierarchy shown in Fig. 6(a). We partition the users at a distance threshold of d_mod = 4 (shown as a dotted line) based on the VRC [49] to obtain three dominant high-level clusters (shown in grey rectangles). Therefore, we see that users cluster into a small set of dominant preferences despite not being provided any instructions regarding the order of assembly. Users had different preferences for the same event as well. For example, within the event of performing all shelf connections in a row, shown in yellow in Fig. 5, clustering the sequences of secondary actions of all users results in the hierarchy shown in Fig. 6(b). Partitioning at the optimal distance threshold d_mod = 0 gives us two dominant low-level clusters (shown in grey rectangles). Users 0, 4, and 16 prefer to connect each shelf to boards on one side before connecting on the other side, whereas users 5, 1, and 2 prefer to connect each shelf to boards on both sides at a time.
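To show how such dendrograms could be produced from the distance of Sec. IV-C, here is a minimal sketch of a pairwise-distance plus hierarchical-clustering pipeline. Two caveats: d_mod below replaces the paper's shift_1 operator with a unit-cost adjacent transposition, and average linkage is assumed because the paper does not state its linkage criterion; the VRC-based threshold sweep is only indicated in the comments.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def d_mod(a, b):
    """Edit distance with unit-cost add, delete and adjacent transposition and no
    substitution. The transposition stands in for the paper's shift_1 operator,
    so sequences with swapped neighbours stay closer than sequences with
    unrelated elements; the authors' exact costs may differ."""
    n, m = len(a), len(b)
    d = [[n + m] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,                   # delete
                          d[i][j - 1] + 1)                   # add
            if a[i - 1] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 1][j - 1])      # match
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # adjacent swap
    return d[n][m]

def cluster_sequences(seqs, threshold):
    """Average-linkage hierarchical clustering on the pairwise d_mod matrix,
    cut at a distance threshold (in the paper, the threshold is chosen by
    sweeping values and keeping the one with the highest VRC)."""
    n = len(seqs)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = d_mod(seqs[i], seqs[j])
    Z = linkage(squareform(dist), method="average")
    return fcluster(Z, t=threshold, criterion="distance")
```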
C. Evaluation & Results

We want to show that our two-stage clustering approach is better at predicting the next secondary action of the robot for a new user, as compared to a one-stage approach. We make the following hypotheses:

H1. Clustering users based on just their sequence of events improves the accuracy of predicting the next secondary action, compared to clustering based on the sequences of individual primary actions.

H2. The two-stage clustering of users improves the accuracy of predicting the next secondary action, compared to clustering based on just the sequences of events.

Experiment details. Using the demonstrated action sequences in the user study, we performed leave-one-out cross-validation, where we removed one participant and generated the preference clusters from the rest of the demonstrations. At each time step, the system was provided the corresponding primary action of the removed participant. We then inferred the high- and low-level preference of the participant based on the history of actions provided so far (Sec. V) and selected the secondary action that must be performed before the next primary action of the user. At the end, we compare the cross-validation accuracy of predicting the next secondary action at each time step, averaged over 100 trials for each new user.

1) Importance of High-level Clusters: We compare the prediction results from clustering based on the sequence of events (without second-stage clustering) to clustering based on the sequence of primary actions. While clustering based on event sequences leads to three dominant clusters, clustering based on primary action sequences generates only two dominant clusters for a distance threshold of 63 (based on the VRC), resulting in high variability in the sequences in each cluster. A two-tailed paired t-test showed a statistically significant difference (t(17) = −3.232, p = 0.004) in prediction accuracy averaged over all timesteps and trials, between the event-based method (M = 0.796, SE = 0.041) and the baseline (M = 0.693, SE = 0.041). This result supports hypothesis H1.

2) Importance of Low-level Clusters: We also compare the prediction accuracy with and without the second-stage clustering, averaged over all timesteps and trials. A paired t-test showed a statistically significant difference (t(17) = −2.34, p = 0.03) between predicting with event sequences only (M = 0.796, SE = 0.041) and predicting with the two-stage framework (M = 0.820, SE = 0.043). This result supports hypothesis H2.

Interpretation of Results. We observe that clustering based on event sequences outperforms clustering based on primary actions. The baseline performs poorly after timestep t = 12 (Fig. 7(a)) as most users switch to shelf connections; there, the order of primary actions (which shelf to connect) is different among users, but they require the same secondary actions (bringing a shelf) and thus belong to the same event. The drops in accuracy for our method are caused by errors in prediction for participants that did not belong to any cluster, e.g., users 3, 6, 7, and 8 in Fig. 5, as well as by "branching" points, where participants that had identical sequences started to differentiate. For instance, participants 14 and 15 differentiate at t = 12, where performance drops for all methods.
Similarly, we observe in Fig. 7(b) that the two-stage clustering and clustering based on events only perform similarly up to t = 18, from where users differentiate in how they connect the shelves to the boards, which is where the low-level preferences benefit prediction.

VII. ONLINE ASSEMBLY EXPERIMENT

We wish to show that predicting the secondary action to assist a new user improves the efficiency of the task. Therefore, we implemented an online shelf assembly game (see supplemental video). We first conducted an online study where 20 users demonstrated their preference for playing the game, and used that to learn the dominant high- and low-level preferences. As the game was simple, users had one dominant high-level preference and two low-level preferences on how to connect the shelves, which interestingly matched closely with how users clustered when connecting shelves in the real world (Fig. 6(b)). We then conducted an online experiment with 80 participants recruited through Amazon Mechanical Turk, where users played the shelf assembly game three times each with and without assistance. The order of assistance versus no assistance was counterbalanced to reduce any learning effects.

H3. We hypothesize that providing assistance to users by predicting the next secondary action improves the efficiency of the task, as compared to providing no assistance.

We compare the average time taken by 52 (out of 80) users who completed all game trials and survey questions, when playing with and without assistance using our two-stage inference method (see Fig. 8). A paired t-test showed a statistically significant difference (t(51) = 2.155, p = 0.036) in the time required to assemble with assistance (M = 55.794, SE = 3.439) and without assistance (M = 66.948, SE = 6.05). Moreover, in the subjective responses the users gave a high rating when asked if the assistance was according to their preference (Q1), was helpful (Q2), and reduced their effort (Q4). Accordingly, they gave a low rating when asked if the assistance made their task more difficult (Q3). This informs us that performing the secondary actions as per our proposed method can indeed be helpful for the users.

VIII. ROBOT-ASSISTED IKEA ASSEMBLY

Lastly, we show how our proposed method can be used in a human-robot collaborative assembly setting. We demonstrate the online execution phase in the IKEA assembly task with two participants, who were instructed to perform the task in two different ways. We used the same setup as shown in Fig. 4, with a Kinova Gen 2 robot arm in the storage area. The human-robot collaboration followed a turn-taking model where the user and the robot alternated in performing one primary action and one set of secondary actions, respectively. Therefore, the participant always stayed inside the workcell and performed all the connections, while the robot brought the required parts from the storage area to the workcell (Fig. 1). The experimenter inputted each primary action performed by the participant into the system, and the system inferred the subject's preference cluster and selected the next secondary action for the robot to perform. The supplemental video shows the demonstration of the system. We observed that the robot assistance allowed the user to stay inside the workcell in a comfortable position. We hypothesize that this can be beneficial, not just in improving the efficiency of the task but also in reducing the human effort, and we plan to explore this in future work.
IX. CONCLUSION

We propose a two-stage clustering approach, inspired by clickstream analysis techniques, to identify the dominant preferences of users at different resolutions in a complex IKEA assembly task. We show how this approach can enable prediction of the next action required of the robotic assistant and how it can improve the efficiency of task execution. A limitation of our work is that we do not model the effect of robot actions on the user [52], [53], or the utility of actions with respect to team performance. Future work can also consider the confidence in the prediction, to decide if a robot should request additional information from the user. While we have focused on predicting the actions of a new worker on the same task that we have demonstrations for, our insights can be applied to the problem of having the same worker perform a new, slightly different task. Even though the order of events may be different, and some events may be task-specific, our system can learn the user's preference(s) on the events that are shared among the two tasks and proactively assist the user for these events. We are excited about demonstrating the applicability of our approach to this problem in future work.
Editorial: The Immunology of Adverse Drug Reactions

1 Infection and Immunity Program, Monash Biomedicine Discovery Institute and Department of Biochemistry and Molecular Biology, Monash University, Clayton, VIC, Australia; 2 Clinical Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, United Kingdom; 3 Department of Dermatology, University Hospitals Southampton NHS Foundation Trust, Southampton, United Kingdom; 4 Centre for Antibiotic Allergy and Research, Department of Infectious Diseases, Austin Health, Heidelberg, VIC, Australia

INTRODUCTION

The immune system has evolved for both breadth and specificity of recognition to protect the body against a wide array of infectious and oncogenic challenges. Unfortunately, this recognition can also extend to certain therapeutic drugs, causing drug hypersensitivity in affected individuals. These unwanted responses range in both severity and pathways of immune activation, eliciting deleterious, and in some cases potentially fatal, immune responses. Such adverse events place significant strain on health care systems and prevent use (in susceptible individuals) of key medications that are well tolerated by most patients at therapeutic doses. Here, we bring together experts in the field of adverse drug reactions, incorporating both clinical and laboratory-based researchers, addressing critical areas of prediction, diagnosis, and mechanistic understanding of these reactions.

T CELL-MEDIATED DRUG HYPERSENSITIVITY

Ten articles within this collection focus primarily on T cell-mediated drug hypersensitivity reactions (DHRs), also termed delayed-type drug hypersensitivity or type IV drug hypersensitivity (under the Gell and Coombs classification) (1). T cell-mediated DHRs involve Human Leukocyte Antigen (HLA)-dependent activation of T cells induced by the culprit drug and/or a metabolite via a range of mechanisms including: i) generation and presentation of covalently drug-modified peptides (i.e., the hapten/prohapten model) (2,3), ii) labile interaction of the drug/metabolite with the HLA and/or TCR to trigger T cell activation (i.e., the p.i. concept) (4), and iii) interaction of the drug within the HLA peptide-binding cleft to change the array and conformation of HLA-bound peptides (i.e., the altered peptide repertoire) (5,6). Furthermore, many of these reactions are associated with distinct HLA alleles, suggesting specific interactions and that HLA screening could be used as a predictive tool to prevent prescription to individuals carrying risk alleles. However, as discussed by Li et al., this is not an effective solution for many HLA-associated adverse reactions due to negative predictive values less than 100% and low positive predictive values, necessitating large numbers of individuals to be tested in order to prevent a single adverse event. In light of this, Li et al. explore the evidence for further risk modifiers. Surprisingly, HLA-B*13:01-restricted CD8+ T cells were not identified; instead, the metabolite (nitroso-SMX) and SMX-responsive T cell clones were CD4+ and HLA class II-restricted, suggesting that HLA-B*13:01 was not directly involved in presentation of the immunogenic antigens. Aligned with previous studies (7,8), there were multiple modes of presentation observed, consistent with covalent modification of presented peptides by nitroso-SMX (in either antigen processing-dependent or -independent manners).

MAS-RELATED G PROTEIN-COUPLED RECEPTOR X2

Two review articles (Mackay et al.
and McNeil) focus on the emerging role of Mas-related G protein-coupled receptor X2 (MRGPRX2)-mediated mast cell activation in antibody-independent immediate hypersensitivity reactions. With a focus on anaphylaxis, Mackay et al. discuss strategies to pinpoint the role of MRGPRX2 and isolate biomarkers, further considering roles for MRGPRX2 agonists and antagonists in therapeutic applications. Moreover, McNeil interrogates the relationship between peak serum concentrations, as well as localised areas of increased concentration in specific tissues/locations, and the EC50 of known MRGPRX2 agonists in mild-moderate immediate hypersensitivity reactions. Both studies highlight the need for further investigation to understand the role of MRGPRX2 in adverse events and to provide clear diagnostic criteria.

CONCLUDING REMARKS

Collectively, these articles highlight that, as for many fields, phenotypic, diagnostic, predictive, and mechanistic studies traversing the clinic to the laboratory bench (and computer) and back are critical to understanding the complex biological interactions that characterise drug hypersensitivity. Future genetic and mechanistic analyses will build upon clinical observations, with the capacity to identify new biomarkers and signatures of disease that feed back to improve diagnosis, prediction, and prevention.

AUTHOR CONTRIBUTIONS

PTI drafted the manuscript. All authors edited and approved the manuscript.
Second-line Treatment of Non-Small Cell Lung Cancer: Focus on the Clinical Development of Dacomitinib

Dacomitinib is a second-generation, irreversible, covalent pan-HER tyrosine-kinase inhibitor (TKI). It showed potent EGFR signaling inhibition in experimental models, including first-generation TKI-resistant non-small cell lung cancer (NSCLC) cell lines. This preclinical efficacy did not translate into clinically meaningful treatment benefits for advanced, pretreated, molecularly unselected NSCLC patients enrolled in two parallel phase III trials. Dacomitinib and erlotinib showed overlapping efficacy data in chemotherapy-pretreated EGFR wild-type (WT) patients in the ARCHER 1009 trial. Similarly, it failed to demonstrate any survival benefits as compared to placebo in EGFR WT subsets progressing on chemotherapy and at least one previous first-generation TKI (erlotinib or gefitinib) in the BR.26 trial. In the case of EGFR-mutant NSCLCs, a pooled analysis of the ARCHER 1009 and ARCHER 1028 trials comparing the efficacy of dacomitinib vs. erlotinib in chemotherapy-pretreated, EGFR TKI-naïve patients showed a trend to a longer progression-free survival (PFS) and overall survival in favor of dacomitinib that did not reach statistical significance, with a higher rate of treatment related adverse events (mainly skin rash, paronychia, and gastrointestinal toxicities). On the other hand, the clinical activity in patients with EGFR-mutant NSCLCs with acquired TKI resistance that were included in phase II/III trials was equally poor (response rate <10%; PFS 3–4 months). Therefore, with the results of the ARCHER 1050 trial (NCT01774721) still pending, the current clinical development of dacomitinib is largely focused on EGFR-mutant, TKI-naïve patients. Here, we review the most relevant clinical data of dacomitinib in advanced NSCLC. We discuss the potential role of dacomitinib in pretreated EGFR WT and EGFR-mutant (TKI-naïve and TKI-resistant) patients. Finally, we briefly comment the available clinical data of dacomitinib in HER2-mutant NSCLC patients.
In the absence of significant differences in terms of efficacy, the choice between pemetrexed- or docetaxel-based second-line chemotherapy is largely driven by three factors: histology, as pemetrexed is restricted to non-squamous tumors; the type of platinum doublet used during first-line treatment, with pemetrexed being increasingly incorporated into first-line or maintenance treatments; and differences in toxicity profiles. On the other hand, when deciding between chemotherapy and erlotinib, apart from clinical factors, EGFR mutation status is the main biomarker that determines treatment selection. The IPASS trial definitively demonstrated that the clinical activity of EGFR tyrosine-kinase inhibitors (TKIs) in treatment-naïve patients was restricted to those with EGFR-mutant tumors (EGFR-sensitizing mutations). As the clinical activity of EGFR TKIs in TKI-naïve, EGFR-mutant tumors is comparable between treatment-naïve and platinum-pretreated patients (9), first- or second-generation EGFR TKIs are the preferred treatment options in patients with EGFR-mutant tumors. On the contrary, in patients with EGFR wild-type (WT) cancers, RRs and survival were significantly lower with gefitinib compared to platinum-based chemotherapy in the IPASS study (10). However, whether this was also true in the second-line setting, a clinical context in which docetaxel- or pemetrexed-based chemotherapy hardly reaches RRs of 10%, has been a matter of extensive debate in the past few years. Some molecularly unselected randomized trials, initiated at a time when no definitive predictive biomarkers for the benefit of EGFR TKIs had yet been discovered, initially suggested similar efficacy outcomes between erlotinib and second-line chemotherapy (11)(12)(13). More recent data, including molecularly selected or molecularly stratified randomized trials and large meta-analyses, have confirmed that second-line chemotherapy is superior to EGFR TKIs in patients with EGFR WT tumors, at least in terms of RRs and PFS. OS differences did not reach statistical significance (14)(15)(16). In this therapeutic scenario, and considering that EGFR pathway activation might hypothetically contribute to cancer progression even in tumors with no EGFR activating mutations (17), investigating whether a more potent pan-HER inhibition with dacomitinib would add any clinical benefit seemed a rational approach, from either a biological or a clinical perspective. In addition, as the majority of patients with EGFR-mutant tumors treated with first-generation EGFR TKIs develop acquired resistance by ERBB-dependent mechanisms (18), and considering that dacomitinib showed activity in gefitinib-resistant preclinical lung cancer models (19), it was also rational to test its clinical activity in patients with EGFR-mutant, TKI-resistant cancers.
Herein, we will succinctly discuss the potential role of second-line dacomitinib in EGFR WT and EGFR-mutant NSCLC.

DACOMITINIB: PRECLINICAL AND EARLY CLINICAL DATA IN NSCLC

Dacomitinib is a second-generation, irreversible, covalent-binding pan-HER TKI. As compared to first-generation EGFR TKIs, it has comparable inhibitory activity against the WT EGFR kinase in vitro. However, dacomitinib is more potent than gefitinib against cell lines harboring common EGFR-sensitizing mutations (del19, L858R). Moreover, it has inhibitory activity against gefitinib-resistant exon 20 insertions and acquired-resistance exon 20 T790M mutations in preclinical lung cancer models. Unlike gefitinib or other first-generation TKIs, dacomitinib, as a pan-ERBB inhibitor, also inhibits the activity of both WT and mutant HER2 kinase (19,20). Three phase I trials, conducted both in Western and Asian patients, established that the maximum tolerated dose of dacomitinib was 45 mg daily, and this dose level was selected for further clinical evaluation. The most frequent dose-limiting drug-related adverse events were skin and gastrointestinal toxicities (21)(22)(23). The three trials consistently demonstrated that plasma concentrations and other pharmacokinetic parameters increased proportionally with increasing doses of oral dacomitinib (21)(22)(23), with no apparent food effect (21). Dacomitinib's half-life was estimated at 59-85 h in the phase I trial conducted in the United States (21). Modest preliminary clinical activity was observed in small cohorts of NSCLC patients previously treated with first-generation EGFR TKIs and/or chemotherapy. No objective responses were seen in EGFR TKI-resistant patients whose tumors harbored EGFR T790M mutations (21)(22)(23).

DACOMITINIB FOR PRETREATED NSCLC PATIENTS

Clinical Data in EGFR WT NSCLCs or NSCLCs Unselected by EGFR Status

The clinical activity of dacomitinib in pretreated NSCLC patients has been evaluated in four clinical trials (24)(25)(26)(27). They are mostly molecularly unselected trials and, consequently, the vast majority of the patients included had EGFR WT tumors. An overview of the four clinical trials and the efficacy data in the overall study population are summarized in Table 1. Two phase II trials initially suggested some degree of clinical activity in pretreated NSCLC patients. The ARCHER 1002 trial was a single-arm study that tested the activity of dacomitinib in patients who were refractory to one or two lines of chemotherapy and erlotinib. On the basis that KRAS-mutant cell lines were primarily resistant to first- or second-generation EGFR TKIs, this study was enriched with patients with KRAS WT tumors. The trial failed to meet its primary end point, as dacomitinib yielded disappointing RRs of 5.2 and 4.8% in the overall and adenocarcinoma subsets, respectively. Patients with EGFR WT/KRAS WT tumors included in this trial had RRs (5%), PFS (8 weeks), and OS (26 weeks) comparable to those of the overall study population (25) (Table 1). The second phase II trial (ARCHER 1028) compared the activity of dacomitinib and erlotinib in molecularly unselected patients progressing on one or two prior chemotherapy regimens. In this case, the trial met its primary endpoint, showing a statistically significant increase in PFS (2.86 vs. 1.91 months, HR 0.66, CI 95% 0.47-0.91) in favor of dacomitinib in the overall study population. Objective responses were also higher in dacomitinib-treated patients (17 vs. 5.3%, p = 0.01).
However, no differences in OS were noted (HR 0.80, CI 95% 0.56-1.10, p = 0.20) (Table 1). A degree of PFS improvement comparable to that of the overall population was observed in EGFR WT NSCLCs (HR 0.70, CI 95% 0.47-1.05) and EGFR WT/KRAS WT NSCLCs (HR 0.61, CI 95% 0.37-0.99). Dacomitinib did not improve OS compared to erlotinib in patients with EGFR WT cancers (24). This modest clinical activity served as the basis to launch two subsequent randomized phase III trials in therapeutic scenarios similar to those of their respective phase II trials. Unfortunately, both phase III studies were negative. First, in the BR.26 trial, whereas dacomitinib statistically significantly improved RRs (7 vs. 1%, p = 0.001) and PFS (2.66 vs. 1.38 months, HR 0.66, CI 95% 0.55-0.79) compared to placebo in patients progressing on chemotherapy and EGFR TKIs, it failed to demonstrate improved OS (primary end point; HR 1.00) (Table 1). Similarly, no trend for a clinically meaningful incremental efficacy was observed in patients with EGFR WT tumors or patients with both EGFR and KRAS WT NSCLCs compared to the overall patient population (27). Finally, dacomitinib failed to improve on the efficacy of erlotinib (control arm) in second- or third-line settings (ARCHER 1009), either in the overall population (Table 1) or in patients with EGFR WT tumors. In the latter subgroup, dacomitinib had overlapping objective RRs, PFS (1.9 vs. 1.9 months; HR 0.94, CI 95% 0.79-1.13), and OS (6.8 vs. 7.6 months; HR 1.07, CI 95% 0.90-1.29) compared to erlotinib. Results were almost identical for patients with either KRAS or EGFR WT NSCLCs (26).

Clinical Data in EGFR-Mutant, TKI-Naïve NSCLCs

In the particular case of pretreated, TKI-naïve subsets, a pooled analysis of the ARCHER 1009 and ARCHER 1028 trials comparing the efficacy of dacomitinib vs. erlotinib showed median PFS (14.6 vs. 9.6 months, respectively; HR 0.71, p = 0.14) and OS (26.6 vs. 23.2 months, respectively; HR 0.73, p = 0.26) outcomes that numerically favored dacomitinib without reaching statistical significance (28) (Table 2). Both the ARCHER 1028 and ARCHER 1009 trials showed that on-target adverse events related to the inhibition of EGFR WT in normal tissues were significantly increased with dacomitinib compared to erlotinib, mainly skin rash, paronychia, and gastrointestinal toxicities (24,26). These data are in line with the recently published LUX-Lung 7 trial, where afatinib significantly prolonged PFS and delayed the emergence of EGFR TKI resistance, albeit with a higher incidence of treatment-related adverse events (29).

Clinical Data in EGFR-Mutant, TKI-Pretreated NSCLCs

In the context of EGFR TKI acquired resistance, the clinical efficacy of dacomitinib in patients with EGFR-mutant lung cancers progressing on first-generation EGFR TKIs who were included in these trials was disappointingly low, with an overall RR of about 8% (Table 2). No objective responses were reported among patients whose tumors harbored the secondary acquired-resistance EGFR T790M mutation. In general, the PFS and OS data did not differ from those of the unselected patient population either (25,27).

Clinical Data in HER2-Mutant, TKI-Naïve NSCLCs

In the largest prospective phase II study conducted to date in patients with HER2-mutant or HER2-amplified tumors (n = 30; 83% had received at least one line of previous chemotherapy), dacomitinib showed only modest efficacy, with an objective RR of 12%, a median PFS of 3 months, and a median OS of 9 months. No responses were seen in patients with tumors harboring the most common HER2 activating mutation
(c.2324_2325ins12) (31). Intriguingly, tumors with this genotype did respond to afatinib in other series (32). No responses were seen in patients with HER2-amplified cancers either (n = 4) (31). More studies are needed in order to determine which molecular contexts (i.e., possible coexistence with HER2 amplification) and which specific HER2 genotypes are true predictive markers of benefit from dacomitinib.

CONCLUSION AND FUTURE PERSPECTIVES

Dacomitinib has failed to improve overall outcomes in pretreated NSCLC patients. Irreversible pan-HER inhibition is not superior to erlotinib in patients with no EGFR-sensitizing mutations and does not prolong OS compared to placebo in heavily pretreated patients either. Also, dacomitinib does not overcome EGFR T790M-mediated acquired resistance in EGFR-mutant NSCLCs at doses tolerable in humans. In non-T790M-mediated resistance, in which functional activation of the HER pathway or acquired HER2 activating mutations have been described in some cases (18,33), no reliable clinical data are available, but robust activity in this clinical setting seems unlikely. Given these clinical data, together with the recent regulatory approvals of third-generation, EGFR-mutant-selective TKIs (e.g., osimertinib) with potent activity against the T790M mutation (34), the current development of dacomitinib is focused on TKI treatment-naïve, molecularly selected patients with EGFR-mutant and HER2-mutant lung cancers. In a small phase II trial including a total of 45 treatment-naïve patients with tumors harboring common EGFR-sensitizing mutations, dacomitinib achieved an overall RR of 75.6% and a median PFS of 18.2 months (30). In this regard, whether second-generation EGFR TKIs are superior to first-generation TKIs in TKI-naïve patients with EGFR-mutant NSCLCs has not been fully answered to date. In the LUX-Lung 7 trial, afatinib significantly increased RRs (70 vs. 56%; p = 0.0083), median PFS (11 vs. 10.9 months; HR 0.73, CI 95% 0.57-0.95; p = 0.0195), and median time to treatment failure (13.7 vs. 11.5 months; HR 0.73, CI 95% 0.58-0.92; p = 0.0073) over gefitinib. However, there were no OS differences among treatment arms in this phase IIb trial (n = 319). Pre-specified subgroup analysis according to mutation type (exon 19 deletions vs. L858R mutations) did not show significant differences in OS either. Overall, treatment-related adverse events (mainly skin rash and diarrhea) and serious adverse events were more common with afatinib (33). Therefore, this trial suggests that the emergence of acquired resistance might be delayed with second-generation compared to first-generation TKIs, but whether these modest differences are clinically relevant for patients is arguable for many physicians. The ARCHER 1050 trial (NCT01774721), comparing first-line dacomitinib vs. gefitinib, has recently completed accrual and will hopefully give a definitive answer in this regard, establishing the true role of front-line dacomitinib in EGFR-mutant NSCLCs.
2017-05-17T19:13:51.428Z
2017-04-05T00:00:00.000
{ "year": 2017, "sha1": "58086e9647909520b2b2e617f6e0d65f213f0b57", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2017.00036/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "58086e9647909520b2b2e617f6e0d65f213f0b57", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14202119
pes2o/s2orc
v3-fos-license
Track-Based Particle Flow One of the most important aspects of detector development for the ILC is a good jet energy resolution σ_E/E. To achieve the goal of high-precision measurements, σ_E/E = 0.30/√E(GeV) is proposed. The particle flow approach, together with highly granular calorimeters, is able to reach this goal. This paper presents a new particle flow algorithm, called Track-Based particle flow, and shows first performance results for 45 GeV jets based on full detector simulation of the Tesla TDR detector model. Introduction The International Linear Collider (ILC) will provide the potential for high-precision measurements at center-of-mass energies between several hundred GeV and one TeV. Many interesting physics processes in this regime will be composed of multi-jet final states originating from hadronic decays of heavy gauge bosons. To reach the goal of high-precision measurements it is suggested to achieve a mass resolution for W → q′q̄ and Z → qq̄ decays which is comparable to their widths. This leads to a jet energy resolution of σ_E/E = 0.30/√E(GeV), considering the typical di-jet energies ranging from 100 to 400 GeV. Studies based on full detector simulation have shown that particle flow algorithms (PFAs) are able to reach this goal [1,2]. The basic concept of any PFA is to reconstruct the four-momenta of all visible particles in an event. The four-momenta of charged particles are measured in the tracking detectors, while the energy of photons and neutral hadrons is obtained from the calorimeters. The accuracy of momentum measurement in the tracking systems is orders of magnitude better than the accuracy of energy measurement in the calorimeters. This leads to a theoretical limit on the jet energy resolution of approx. σ_E/E = 0.20/√E(GeV), considering the characteristic ratio of charged to neutral particles in a jet. The given limit is reached only if the PFA is able to disentangle all charged showers from close-by neutral showers. Since this is not possible in a realistic PFA, the performance degrades due to this confusion. Any PFA relies strongly on pattern recognition in the highly granular calorimeters, and it is not possible to distinguish between pure detector effects and algorithmic effects on the reconstructed jet energy. Hence, for reliable detector optimisation studies using a PFA it is necessary to study different PFAs and compare their results. The Track-Based Particle Flow Algorithm The Track-Based PFA is a new proposal for a PFA at the ILC. The basis of this PFA is a collection of tracks. Sequentially, the tracks are extrapolated into the calorimeter and correlated energy depositions are assigned to each track. Related MIP-like track segments are identified as well. Additionally, a collection of photon candidates can be used as an input to improve the performance of the reconstruction. As soon as all tracks have been extrapolated, their assigned hits are removed from the collection of calorimeter hits. Afterwards, a clustering procedure is applied to the remaining hits to reconstruct neutral particles. A simple particle identification (PID) is done for charged and neutral particles. The Track-Based PFA is implemented in C++ within the Marlin [3,4] framework. Events for the reconstruction are created with Mokka [3], a GEANT4 [5] simulation of the Large Detector Concept (LDC) [6]. LCIO [7] serves as the persistent data format. The Track-Based PFA consists of six main stages: i) Photon Finding: Photon finding is done with the "PhotonFinderKit" proposed by [8].
Only ECAL hits are taken into account. The output of this stage is a collection of clusters labeled as photon candidates. ii) Tracking and Track-Extrapolation: Tracks, either provided by Monte Carlo information or by realistic tracking [9], are the basis of the Track-Based PFA. The tracks are sequentially extrapolated into the calorimeter using a trajectory interface. The trajectory is given by a simple helix model at the moment, not taking energy loss into account. If such an extrapolation traverses one of the photon candidates, that candidate is removed from the collection of photon candidates, since it could be the electromagnetic core of a hadron shower or an electron. iii) MIP-Stub Finding: The collection of MIP-like energy depositions along a track extrapolation is done by a simple geometrical procedure. A system of two cylindrical tubes is assigned to the track extrapolation, surrounding it concentrically. Calorimeter hits in the vicinity of the track extrapolation are sorted with respect to their path lengths along the extrapolated trajectory. Starting from the hits with the smallest path lengths, hits located within the inner cylindrical tube are assigned to the MIP-stub. Hits beyond the outer cylindrical tube are not taken into account. As soon as a hit located in between the two tubes is found, the procedure is stopped. The position of the last collected hit and its direction, given by the tangent to the trajectory at this point, are stored as initial parameters for the clustering procedure performed in the next step. iv) Clustering and Cluster-Assignment: The Trackwise Clustering, proposed in [10], has been modified to take into account the start point and direction given by the MIP-stub finding. Additionally, it is adapted to produce more but smaller clusters. The center of gravity and the orientation of each cluster are calculated from its inertia tensor. The clusters are assigned to the track by proximity and direction criteria. The track momentum is taken into account to prevent assigning clusters with too much energy. v) Particle Identification and Removal of "Charged" Calorimeter Hits: The PID of charged particles is done by a cut on the fraction of energy deposited in the ECAL compared to the HCAL. It distinguishes only between electrons and charged pions. Additionally, muons are identified if only a MIP-stub has been assigned to the track. Afterwards, all calorimeter hits assigned to tracks are removed from the collection of calorimeter hits. vi) Clustering and Particle Identification on "Neutral" Calorimeter Hits: The Trackwise Clustering is applied to the remaining calorimeter hits using the direction to the interaction point as a start direction. The PID is done in the same way as for the charged particles, assigning a photon or neutral kaon hypothesis. All reconstructed particles are filled into a collection assigned to the event. The Track-Based PFA described in this note is included in the MarlinReco package [3]. Figure 1 shows an example of the reconstruction of 45 GeV jets from Z0 decays into light quarks (uds) at √s = 91.2 GeV using the Track-Based PFA (circles). The detector simulation has been done with Mokka, using the TESLA TDR detector model [1,3]. The initial direction of the quarks is restricted to a polar acceptance of |cos θ| < 0.8. The tracks described in stage ii) of Section 2 are reconstructed from Monte Carlo information.
Additionally, a histogram is shown which indicates the same reconstruction using a perfect assignment of hits to tracks by Monte Carlo information (dashed lines). This comes close to the theoretical limit of approx. 0.20/√E(GeV). The performance of the reconstruction is measured by the root-mean-square of the smallest range of reconstructed energies containing 90% of the events (RMS90) [11]. The Track-Based PFA reaches a jet energy resolution of 0.41/√E(GeV) for a polar acceptance of |cos θ| < 0.8. There are two other PFAs available within the Marlin framework. The first one (Wolf) reaches approx. 0.52/√E(GeV) [12] for the same detector model and physics process, whereas the second one (PandoraPFA) already reaches the goal of 0.30/√E(GeV) [2].
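The RMS90 figure of merit quoted above is simple to compute from a list of reconstructed energies. Below is a minimal Python sketch of that definition, written for this note rather than taken from MarlinReco; the sorting-and-sliding-window approach and the toy Gaussian sample are illustrative choices, not the official implementation.

import numpy as np

def rms90(energies):
    """RMS of the narrowest contiguous window containing 90% of the events.

    Toy re-implementation of the RMS90 definition in the text; it ignores
    binning effects and any official MarlinReco conventions.
    """
    e = np.sort(np.asarray(energies, dtype=float))
    n = len(e)
    k = int(np.ceil(0.9 * n))            # number of events in the 90% window
    widths = e[k - 1:] - e[:n - k + 1]   # width of every window of k consecutive events
    start = int(np.argmin(widths))       # narrowest such window
    return e[start:start + k].std()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy sample: total visible energy of Z -> uds events at sqrt(s) = 91.2 GeV,
    # smeared with sigma_E = 0.30 * sqrt(91.2), roughly 2.9 GeV.
    e_rec = rng.normal(91.2, 0.30 * np.sqrt(91.2), size=100_000)
    # For a pure Gaussian, RMS90 is about 0.79 * sigma, so this prints roughly 2.3 GeV.
    print(f"RMS90 = {rms90(e_rec):.2f} GeV")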
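Going back to stage iii), the two-tube MIP-stub collection can also be sketched compactly. The snippet below is a simplified, hypothetical rendering of that geometrical procedure: the Hit fields, the tube radii, and the reduction of the geometry to a single radial distance are assumptions made for illustration and are not the MarlinReco code.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Hit:
    path_length: float      # projection of the hit onto the extrapolated trajectory (mm)
    radial_distance: float  # distance of the hit from the trajectory axis (mm)

def collect_mip_stub(hits: List[Hit],
                     r_inner: float = 15.0,
                     r_outer: float = 40.0) -> Tuple[List[Hit], Optional[Hit]]:
    """Toy version of stage iii): walk along the track extrapolation and collect hits.

    Hits inside the inner tube are added to the MIP-stub, hits beyond the outer
    tube are ignored, and the first hit found between the two tubes stops the
    procedure. The last collected hit would seed the clustering of stage iv).
    The tube radii here are made-up numbers, not values from the paper.
    """
    stub: List[Hit] = []
    for hit in sorted(hits, key=lambda h: h.path_length):  # smallest path lengths first
        if hit.radial_distance <= r_inner:
            stub.append(hit)
        elif hit.radial_distance <= r_outer:
            break
        # else: hit lies beyond the outer tube, skip it and continue
    seed = stub[-1] if stub else None
    return stub, seed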
2007-10-12T14:12:54.000Z
2007-10-12T00:00:00.000
{ "year": 2007, "sha1": "627f2a731fefd83123a08c712a70d0fe38ed67e3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "627f2a731fefd83123a08c712a70d0fe38ed67e3", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
254019736
pes2o/s2orc
v3-fos-license
Recent Evolution of a Maternally Acting Sex-Determining Supergene in a Fly with Single-Sex Broods Abstract Sex determination is a key developmental process, yet it is remarkably variable across the tree of life. The dipteran family Sciaridae exhibits one of the most unusual sex determination systems in which mothers control offspring sex through selective elimination of paternal X chromosomes. Whereas in some members of the family females produce mixed-sex broods, others such as the dark-winged fungus gnat Bradysia coprophila are monogenic, with females producing single-sex broods. Female-producing females were previously found to be heterozygous for a large X-linked paracentric inversion (X′), which is maternally inherited and absent from male-producing females. Here, we assembled and characterized the X′ sequence. As close sequence homology between the X and X′ made identification of the inversion challenging, we developed a k-mer–based approach to bin genomic reads before assembly. We confirmed that the inversion spans most of the X′ chromosome (∼55 Mb) and encodes ∼3,500 genes. Analysis of the divergence between the inversion and the homologous region of the X revealed that it originated very recently (<0.5 Ma). Surprisingly, we found that the X′ is more complex than previously thought and is likely to have undergone multiple rearrangements that have produced regions of varying ages, resembling a supergene composed of evolutionary strata. We found functional degradation of ∼7.3% of genes within the region of recombination suppression, but no evidence of accumulation of repetitive elements. Our findings provide an indication that sex-linked inversions are driving turnover of the strange sex determination system in this family of flies. Introduction Sex is an ancient feature shared by most eukaryotes, yet the sex determination systems regulating the development of males and females vary widely among animals (Beukeboom and Perrin 2014) and can evolve rapidly (Saunders and Veyrunes 2021). Why such a fundamental developmental process as sex determination is variable remains an outstanding question (Bachtrog et al. 2014). Insects include many examples of this diversity and are therefore an excellent model for understanding changes in sex determination systems. Although most insects have genetic sex determination mechanisms with distinct sex chromosomes, different chromosomes act as the sex chromosomes in different species, and species differ in whether males (XY and X0 systems) or females (ZW and Z0 systems) are the heterogametic sex, and in divergence between the sex chromosome pair (Bull 1983;Beukeboom and Perrin 2014). There are also examples of complete loss of sex chromosomes, where sex is linked to ploidy differences (e.g., haplodiploidy) or elimination or silencing of paternally derived chromosomes in males. Another remarkable case, where sex is determined chromosomally but in a way that fundamentally differs from the standard XY or ZW systems, is that of monogenic sex determination. Here, sex is determined by the genotype of the mother instead of that of the zygote: Mothers are genetically predetermined to produce either only male offspring or only female offspring. Monogenic sex determination has evolved in three clades of flies (Diptera): blowflies (Chrysomyinae, Ullerich 1958), gall midges (Cecidomyiidae, Benatti et al. 2010), and fungus gnats (Sciaridae, Metz 1938). Little is known about control of sex determination in blowflies (Scott et al. 2014). 
However, in the fungus gnat and gall midge species in which karyotypes have been characterized, monogeny appears to be associated with chromosomal inversions (Carson 1946;Crouse 1979;Benatti et al. 2010). None of these inversions has yet been characterized, and little is known about the nature and the molecular evolution of these regions. Neither the evolutionary history of monogeny nor how selection acts on sex determining regions that occur outside the context of conventional sex chromosomes is thus currently understood. Suppression of recombination through chromosomal inversions occurs in some sex chromosomes (Vicoso 2019), and several scenarios can favor a lack of recombination (Wright et al. 2017;Connallon et al. 2018;Jay et al. 2022;Lenormand and Roze 2022). Prevailing theory posits that this process involves selection for suppressed recombination between the sex-determining locus on the Y or W chromosome, and sexually antagonistic alleles maintained polymorphically at partially sex-linked loci, potentially encompassing increasingly large portions of the sex chromosome in a stepwise process (Charlesworth and Charlesworth 1980;Rice 1987;Charlesworth et al. 2005). However, several alternative hypotheses have recently been proposed, including a role for local adaptation (Connallon et al. 2018), regulatory evolution (Lenormand and Roze 2022), and the buildup of deleterious mutations . Y-or W-linked inversions may create regions that never or rarely recombine with their homologous X-or Z-linked regions. This creates sex-specific transmission and ensures that the affected regions are always heterozygous, unlike autosomal inversions. Such regions are likely to accumulate adaptive mutations specific to one sex or the other (Connallon et al. 2018). If the region completely fails to recombine, it is liable to accumulate deleterious mutations and transposable elements (TEs) (Felsenstein 1974). As a result, the nonrecombining Y or W chromosomes undergo functional degradation (Bachtrog et al. 2008) and become a reservoir for repetitive sequences (Chalopin et al. 2015). In the present study, we investigated a female-limited, nonrecombining X-linked inversion associated with monogeny in the fungus gnat B. (Sciara) coprophila. This species has been studied extensively since the 1920s (Metz 1938) and has a complex chromosome inheritance system ( fig. 1). Like all members of Sciaridae, it reproduces through paternal genome elimination, where males fail to transmit paternally derived chromosomes to their offspring as they undergo several rounds of maternally controlled chromosome elimination targeting the paternal genome (Metz 1938;Gerbi 2022). In all studied members of the Sciaridae, the somatic cells of males have an X0 karyotype, while those of females are XX. However, sex is determined by maternally controlled X elimination during early embryogenesis, rather than X inheritance. All zygotes begin with three X chromosomes, one inherited from the mother and two from the father-the result of aberrant spermatogenesis involving the nondisjunction of the sister chromatids in the second meiotic division. During the seventh to ninth embryonic cleavage divisions, either one or both paternal X chromosomes are eliminated from somatic cells, resulting in the zygotes developing into females (XX) or males (X0), respectively. The eliminated X chromosomes fail to divide at anaphase and are left behind on the metaphase plate (DuBois 1933). 
Germ cells in both sexes eliminate a single paternal X during a resting stage later in development. In B. coprophila and many other Sciaridae, females are monogenic and produce single-sex progenies. Nonmonogenic sciarids are "digenic" and produce mixed-sex broods, although both monogenic and digenic species determine sex through X chromosome elimination. Both reproductive strategies occur in multiple Sciaridae genera (Metz 1938), though their evolutionary relationship to one another remains unclear. Early cytological observations suggested that two monogenic species, B. coprophila and Bradysia impatiens, possess single long inversions spanning most of the X chromosome (henceforth the inverted chromosome is denoted by X′), for which female-producing females are heterozygous (Carson 1946;Crouse 1979). Polytene chromosome staining indicates that such inversions are absent in digenic species (McCarthy 1945a, 1945b) as well as in at least one species exhibiting mixed reproductive strategies (Rocha and Perondini 2000). Through a series of cytogenetic studies, Crouse (1979) deduced the structures of the chromosomes in B. coprophila and demonstrated that the X′ inversion is paracentric and spans most of the length of the chromosome, leaving the two ends of the chromosome, which still synapse with the X, noninverted. The genome sequence of B. coprophila, with all three autosomes and the X chromosome, has recently been published (Urban et al. 2021), though the sequence and precise nature of the X′ inversion remain unknown, as the reference genome was generated from X0 males, which lack the X′. Here, we have shown through comparative analysis of X and X′ chromosomes in B. coprophila that the structure of the X′ is likely more complex than previously thought. Rather than a single paracentric inversion, we found that it resembles a supergene composed of multiple linked inversions that all emerged <0.5 Ma. Our finding that the X′ is young is intriguing given that monogeny is shared by multiple Sciaridae genera (Metz 1938) and suggests that inversions may drive the turnover of reproductive strategies in this family. We used a novel process of k-mer binning to assign short reads to chromosomes prior to assembly, allowing assembly of ∼55 Mb corresponding to X′ supergene sequence despite its high sequence similarity to the ancestral X chromosome. With assembly and annotation of the X′, we compared patterns of evolution between the two homologous sequences and found that the supergene shows some early signs of degradation characteristic of other nonrecombining sex chromosomes and supergenes. We discuss the implications of our findings for disentangling the evolutionary relationships among the strange genetic properties of sciarid flies, in light of the evolution of sex chromosomes and sex-linked adaptive inversions. X-X′ Divergence Reveals Recent Evolution and Stratification of the X′ Chromosome We set out to identify the breakpoints of the long paracentric inversion previously described in the literature. The size of the X chromosome in B. coprophila is estimated as 50-67 Mb (Gabrusewycz-Garcia 1964; Rasch 2006; Urban et al. 2021; Hodson et al. 2022), and the X′ inversion spans almost the entire chromosome length (Crouse 1979). We therefore expected the inversion to be slightly shorter than the X.
We produced whole-genome sequencing Illumina libraries from X0, XX, and X′X individuals, which, when aligned against the recently updated chromosome-scale reference genome that contains sequences for chromosomes X, II, III, and IV, but not X′ (Bcop_v2, Urban et al. 2022), resulted in mapping rates of 93.58%, 96.68%, and 96.34%, respectively. That the X′X libraries have approximately the same mapping rate as the XX libraries indicates there is high enough sequence identity between the X and X′ to reliably call structural variants (SVs) and single-nucleotide variants (SNVs). We found that the lower mapping rate of the X0 reads was explained mostly by a higher microbial content in those libraries (supplementary Text S1, Supplementary Material online). In an attempt to identify the breakpoints of the long paracentric inversion on the X′, we searched for SVs that could be attributed to the X′ using both Illumina short-read and PacBio long-read alignments from X′X samples, using XX and X0 samples as a control. However, this analysis demonstrated that, in the X′X samples, the region of the X chromosome corresponding to the inverted region on the X′ is highly enriched for discordant paired-read and split long-read alignments that yield long, overlapping SV signals. [FIG. 1 caption: Sex determination and X chromosome inheritance in B. coprophila. While oogenesis is regular, sperm receive two X copies due to X nondisjunction. The mother's genotype determines offspring sex: all zygotes begin with three X chromosomes and lose either one or two paternal X chromosomes via targeted paternal genome elimination, resulting in female and male development, respectively. XX females produce only sons, whereas females heterozygous for the X′ (X′X) produce only daughters.] We interpreted the entangled and contradictory nature of many individual SV calls as suggesting the presence of multiple complex rearrangements and transpositions throughout the region rather than one single paracentric inversion (fig. 2A, table 1, and supplementary fig. S1 and table S6, Supplementary Material online). In contrast, HiC reads from X′X and X0 genotypes mapped against the X chromosome clearly revealed the two "main" breakpoints observed cytologically, as well as three repeat regions that likely correspond to folds in the X chromosome (Crouse 1979), but did not clearly show additional breakpoints along the chromosome (fig. 2B and C). Nonetheless, SNV calls from alignments of X′X Illumina reads to the X chromosome revealed multiple distinct segments of the inversion with different SNV densities, again suggesting that multiple adjacent and/or nested inversions may have occurred at different times, perhaps in a stepwise fashion (fig. 3). We used these SNV calls to delineate putative evolutionary strata using a change-point analysis, and we estimated divergence for each stratum (fig. 3, table 2, and supplementary table S1, Supplementary Material online). We found that the region of recombination suppression spans between ∼4.1 and 62.9 Mb on the 67.2-Mb X chromosome. All strata were predicted to have emerged <0.5 Ma. Dxy values calculated from all sites across the chromosome region were 0.0006 for the
youngest stratum and 0.0159 for the oldest stratum, corresponding to divergence in years of 0.008 and 0.107 Ma, respectively (assuming a mutation rate similar to that of Drosophila; see Materials and Methods). Notably, some of the youngest strata had exceptionally low divergence. Estimates for neutrally evolving (synonymous) genic sites ranged from 0.099 to 0.335 Ma for the youngest and oldest strata, respectively. Taken together, our findings suggest that a stepwise set of genomic rearrangements formed the X′ chromosome; we therefore set out to target the X′ sequence for de novo assembly. De Novo Assembly of the X′ Sequence We attempted assembly of PacBio reads from X′X individuals, followed by chromosome assignment of scaffolds using sex differences in read depth across the genome (supplementary Text S2 and fig. S2, Supplementary Material online). This yielded a genome size of only 291 Mb, which was comparable with the size of the male (X0) genome (Urban et al. 2021). Moreover, we were able to assign only ∼3.6 Mb as putative X′ sequence (supplementary table S2, Supplementary Material online). High sequence identity between reads originating from the X and X′ chromosomes likely led to their collapsing together upon assembly. To overcome this, we used a process akin to haplotype resolution of diploid sequences by trio binning (Koren et al. 2018). Our approach utilizes differences in k-mer frequencies in Illumina reads between sexes to assign them to chromosomes prior to assembly. Taking advantage of high homozygosity due to over a century of inbreeding (Metz 1938) and the fact that the X′ is limited to X′X individuals, we assigned k-mers specific to X′X female reads as likely to belong to the X′ (fig. 4A). We used these k-mers to extract the short reads from the X′X data set as putative X′-specific reads. In contrast to long reads, which have a high likelihood of false k-mer matches due to high sequencing error rates, we found that short reads (75-150 bp) can effectively be binned with k-mers due to their low error rate and short length (supplementary Text S3, Supplementary Material online). The resulting putatively X′-specific Illumina reads assembled into 61.7 Mb across 42,564 contigs, with an N50 of 10 kb and a largest contig length of 87 kb (supplementary fig. S3, Supplementary Material online). We performed reference-based scaffolding of these contigs, using the regular X chromosome scaffold as a reference (Urban et al. 2022), to produce a single scaffold corresponding to the X′ supergene. To gap-fill and polish the X′ scaffold, we combined it with the remaining chromosomes II, III, IV, and X (Urban et al. 2022) and then used PacBio reads from X′X individuals competitively mapped against all chromosomes to fill some remaining gaps (supplementary fig. S4, Supplementary Material online) and Illumina reads from X′X individuals to polish the final assembly. The resulting ∼55-Mb scaffold is the first model of the X′ sequence contained within the long paracentric inversion breakpoints defined by Crouse (1979) (table 3). Because the uninverted X was used as a reference for scaffolding the X′ contigs, this scaffold may be an inaccurate structural representation of the X′ chromosome; thus, we did not attempt to use this assembly to infer information about the structure of the X′. However, alignment of Illumina reads from all three genotypes (X0, XX, and X′X) to all chromosomes strongly supports its correspondence to the X′ chromosome (fig. 4B).
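A minimal, hypothetical sketch of the k-mer binning idea described above, assuming small in-memory read sets: k-mers observed in X′X reads but never in reads from the comparison genotype are treated as putative X′-specific k-mers, and any X′X read containing one of them is binned as a candidate X′ read. The actual pipeline ran KAT, KMC3, and Cookiecutter on full libraries (see Materials and Methods); the function names, the in-memory sets, and the choice of comparison reads here are illustrative assumptions only.

from typing import Iterable, List, Set, Tuple

K = 27  # k-mer length, matching the 27-mers used in the Methods

def kmers(read: str, k: int = K) -> Iterable[str]:
    """Yield all overlapping k-mers of a read."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k]

def xprime_specific_kmers(xprime_reads: Iterable[str],
                          other_reads: Iterable[str]) -> Set[str]:
    """k-mers present in X'X reads but absent from the comparison reads."""
    other = {km for r in other_reads for km in kmers(r)}
    return {km for r in xprime_reads for km in kmers(r)} - other

def bin_reads(reads: Iterable[str],
              specific: Set[str]) -> Tuple[List[str], List[str]]:
    """Split reads into putative X'-derived reads and the rest."""
    xprime_bin, rest = [], []
    for r in reads:
        if any(km in specific for km in kmers(r)):
            xprime_bin.append(r)
        else:
            rest.append(r)
    return xprime_bin, rest

On real data, low-frequency k-mers would first need to be filtered to suppress sequencing errors, and the k-mer sets would be built with a disk-based counter such as KMC3 rather than Python sets; the logic of the binning step, however, is the same.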
As expected, we observed that 1) X′X individuals had haploid (1n) coverage across the X′ sequence and the corresponding inverted region of the X chromosome compared with the autosomes, 2) XX and X0 individuals had very low coverage of the X′, and 3) XX females had relatively equal coverage across the X and the autosomes (fig. 4B). Functional Degradation of the X′ Chromosome We identified 3,470 protein-coding genes totaling 4.2 Mb, that is, 7.7% of the 54.7-Mb X′ supergene scaffold. The portion of the X chromosome homologous to the supergene spans 58.8 Mb and contains 3,429 genes totaling 4.9 Mb, that is, 8.3% of the region. Thus, the proportion of the chromosome corresponding to coding sequence is slightly but not significantly lower on the X′ supergene relative to the homologous X region (χ2 = 0.00027, df = 1, P = 0.9869). We also found both the X′ and X to have lower gene densities than the autosomes. The difference between the gene densities of the X′ and X may reflect an increase in noncoding DNA within the supergene or, alternatively, may be a consequence of the assembled X′ sequence having more gaps than the X chromosome sequence (table 3). We identified 2,321 single-copy homologs between the X′ and X. A further 527 genes from the X′ and 679 genes from the X across 296 orthologous groups (OGs) were categorized as duplicates. In 64 duplicate OGs, the X′ and X chromosomes had the same number of gene copies. Of the remaining duplicate OGs, 162 had more copies on the X compared with the X′, while 70 had more copies on the X′. These may be due to X- and X′-specific duplication events or to ancestral duplication and subsequent loss in one or the other chromosome. Unlike X mutations, X′ mutations should not be purged by recombination; thus, deletions and duplications on the X′ are the more likely explanation. We also found that 622 genes across 603 OGs were specific to the X′ and that 429 (across 359 OGs) were specific to the X. Gain of novel genes and whole-gene deletions from the X′ are possible explanations for these findings, although sequence divergence, misassemblies, or gaps may have also led to homologs not being found. We investigated the possibility that the X′ has undergone patterns of degeneration similar to other nonrecombining sex chromosomes by analyzing functional degradation of genes and accumulation of repetitive elements. Of the 2,321 single-copy gene OGs, we found that 123 (5.3%) contain X′-specific mutations that are likely to compromise gene function (including frameshift and/or gain or loss of stop or start codon mutations, supplementary table S3, Supplementary Material online). We further analyzed expression of single-copy homologs and found that fewer X′ genes are transcribed compared with their X-linked homologs: across four life stages in females, 2,191 X′ homologs are expressed compared with 2,237 X homologs, although only the larval (χ2 = 6.07, df = 1, P = 0.014) and adult (χ2 = 6.80, df = 1, P = 0.009) stages had significantly fewer X′ copies expressed (fig. 5A). Overall, 7.3% of genes were classified as either silenced, disrupted, or both (fig. 5B). The fly stock used in this study was derived from a laboratory stock maintained by Metz (1938), in which X′X females carry an X′-linked, irradiation-induced mutation, Wavy, which alters wing phenotype. As such, it is worth noting that this mutation may cause estimates of degradation to differ from those for wild-type flies, though its molecular nature is unknown.
Among the genes classified as degraded, we identified several with functions in wing development, including the wing polarity protein STAN and the "held out wings" protein HOW, which are both required for regular wing development in Drosophila (Zaffran et al. 1997;Adler 2012) and may serve as candidates for the Wavy mutation. Since X′X females eliminate one rather than two X chromosomes from their embryos to produce only daughters, we may expect that this results from the silencing or disruption of a maternal-effect X-linked gene on the X′ chromosome that is somehow involved in the control of chromosome elimination. Among the genes classified as degraded, we found several candidates involved in chromatin regulation, chromosome segregation, and cohesion (supplementary Text S4 and table S4, Supplementary Material online). We also checked X′X females for evidence of dosage compensation (DC) of X-linked genes that have corresponding degraded X′ homologs. We expected that if these genes were dosage compensated, they would be upregulated in X′X females to match the expression of those genes in XX females, where both copies are functional. We compared expression of X-linked genes in X′X and XX female samples (supplementary fig. S5, Supplementary Material online). In total, only four X-linked genes were significantly upregulated in the X′X females (supplementary table S5, Supplementary Material online), none of which were genes that we classified as pseudogenized. Thus, we found no evidence of DC of degraded genes. We also analyzed TE content across the genome. We found that TE density was lower on the sex chromosomes compared with the autosomes (table 3), but the difference was nonsignificant (χ2 = 0.0065, df = 1, P = 0.936). We did not find an enrichment of repetitive sequences on the X′; TE density within the X′-specific sequence was nonsignificantly lower (10.51%) compared with the homologous portion of the X (17.32%, χ2 = 0.060, df = 1, P = 0.807). The fact that the X′ sequence was assembled with short reads may have resulted in limited power to detect repetitive sequences compared with the X from the reference genome. Our analyses comparing structural differences between the X and X′ suggested that repetitive sequences, such as TEs, may have different or additional locations within the X′ sequence not present on the X (fig. 2A, table 1, and supplementary fig. S1 and table S6, Supplementary Material online). However, the distribution of TEs and TE superfamilies across our assembled X′ sequence appeared to be similar to that of the same chromosome region on the X (fig. 5C). Discussion Recently evolved sex chromosomes can provide crucial insights into understanding the evolution and turnover of sex chromosomes. Bradysia coprophila is particularly unusual in that it exhibits a major transition from an X0-like system to one not unlike a ZW system, though only half of females are heterogametic and the sex chromosome is maternally acting rather than acting in the zygote. Thus, Bradysia offers a chance to study the early differentiation between nonrecombining regions of sex chromosomes in a unique evolutionary context.
Our analysis and assembly of the ∼55-Mb X′ sequence revealed that it evolved very recently (<0.5 Ma), that it is more complex in structure than previously thought, and that it is beginning to undergo functional degradation characteristic of classical sex chromosome evolution. The X′ Chromosome Evolved Recently and Shows Signs of Degradation In insects, emergences and turnovers of sex chromosomes are common (Vicoso and Bachtrog 2015). Drosophila neo-Y chromosomes are among the most well-studied cases. In Drosophila, males are achiasmatic, so Y-fused autosomes become instantly sex-linked and nonrecombining. In the absence of recombination, deleterious mutations and TEs accumulate irreversibly because offspring inherit the full mutational load of their parents (Muller 1964; Vieira et al. 2003; Zhou and Bachtrog 2015; Nozawa et al. 2021). Our finding that ∼7.3% of genes on the <0.5-Ma B. coprophila X′ chromosome are pseudogenized is consistent with this expected trajectory of sex chromosome evolution. Moreover, because the chromosome is present in only half of females, its effective population size is half that of other sex-limited chromosomes. As such, it should exhibit an accelerated rate of decay, as the effects of drift should further increase the rate of evolution at sites under purifying selection (Charlesworth et al. 1987), though the effective population size of the sex chromosomes will depend on the relative reproductive success of males and females (Vicoso and Charlesworth 2009). To explore the potential effects of drift on sex-limited chromosome degeneration further, it will be essential to compare the X′ chromosomes of other Sciaridae species, which may have evolved independently and at different times, such as that of B. impatiens (see below, Carson 1946). Sex chromosome degeneration is sometimes accompanied by the evolution of DC mechanisms to reestablish diploid expression of the X chromosome in males (Ohno 1967). In the ancestral Drosophila X chromosome, this is achieved by hypertranscription of X-linked genes in males (Schulze and Wallrath 2007). The neo-X chromosomes of various Drosophila species have achieved DC via transposon-mediated cooption of DC machinery, though the younger neo-Xs are yet to achieve global DC (Marín et al. 1996;Ellison and Bachtrog 2013;Zhou et al. 2013). In B. coprophila, there is evidence for DC in X0 males through upregulation of X expression (Urban et al. 2021). In X′X females, by contrast, we found no evidence that X-linked genes with degraded X′ homologs show DC. Given the young age of the X′, there may not have been sufficient time for the establishment of DC mechanisms to compensate for degraded X′ genes. Furthermore, because the X′ chromosome is present in only half of females, conflict between XX and X′X females over gene expression may hinder the evolution of DC. Accumulation of repetitive sequences occurs rapidly in recently evolved sex chromosomes (Chalopin et al. 2015). The neo-Y chromosome of D. miranda has undergone massive TE accumulation and has expanded significantly as a result (Bachtrog et al. 2008;Mahajan et al. 2018), though the repetitive landscape of younger (<0.5 Ma) Drosophila neo-Ys (Nozawa et al. 2021) has not been analyzed, and thus TE accumulation in nonrecombining regions over these shorter timescales is unclear. Despite the recent divergence between X and X′ in B. coprophila, one would expect higher TE content within the X′.
Differences in TE accumulation may be affected by TE content: The proportion of DNA transposons relative to retrotransposons varies widely between lineages. We find that B. coprophila, similar to the Musca, Aedes, and Culex genera, has a higher proportion of DNA transposons than Drosophila (Petersen et al. 2019). Compared with the cut-and-paste mechanisms of DNA transposons, the copy-and-paste mechanisms of retrotransposons may lend themselves to more rapid accumulation (Kim et al. 2012). However, our approach using X′-specific k-mers to assemble the X′ may result in failure to assemble repetitive sequences that are shared by other chromosomes, which may explain why we found a lower TE content on the X′ compared with the homologous X region. Understanding the dynamics of TE accumulation in this peculiar system will require a more contiguous assembly of the X′ as well as examination of other X′ chromosomes in Sciaridae species (see below). A Role for Adaptive Stepwise Expansion of the X′ Over the last decade, there has been a growing recognition of the importance of clusters of linked loci within inversion-based supergenes in driving the evolution of diverse and complex phenotypes. These include Batesian mimetic morphs of butterflies (Joron et al. 2011), divergent social behaviors in ants (Yan et al. 2020), mating compatibility in fungi (Branco et al. 2018), as well as several polymorphisms in birds including plumage color (Funk et al. 2021), reproductive strategies (Küpper et al. 2016), and sperm morphology (Kim et al. 2017). This study of a recent transition in the sex-determining system in flies presents another case. It has been argued that supergenes may be more widespread than previously recognized, that they are important for cosegregation of adaptive variation within a species, and that they may even occasionally result in the spread of complex phenotypes across species boundaries (Schwander et al. 2014;Thompson and Jiggins 2014). The evolutionary trajectories of supergenes and sex chromosomes show similarities: Some supergenes have evolved in a stepwise manner or have undergone functional degradation, and sex chromosomes also play an important role in adaptation and speciation (Presgraves 2008;Tuttle et al. 2016;Branco et al. 2018;Kim et al. 2022). Furthermore, the evolutionary fates of inversions differ depending on whether they arise on sex chromosomes or on autosomes, with the probability of spread of an inversion through a population being higher on sex chromosomes. This is further affected by sex-biased migration patterns, dominance of locally adapted alleles, and chromosomespecific deleterious mutation load (Connallon et al. 2018). Indeed, X-linked genes are predicted to disproportionately contribute to local adaptation due to exposure of recessive alleles to selection, and sex-linked inversions are therefore more likely to sweep to fixation compared with autosomal inversions (Lasne et al. 2017). For these reasons, X-linkage of this supergene in Sciaridae may have favored its initial emergence as well as its enlargement along the chromosome. Rather than one long paracentric inversion, our analysis suggests the X′ chromosome has undergone multiple rearrangements, which may be explained by multiple adjacent and/or overlapping inversions, smaller inversions nested within larger ones, or some combination thereof, which have accumulated in a stepwise process to suppress recombination along the chromosome. 
Some of the smaller strata we identified appeared to be far less diverged than others, which may represent uninverted gaps between inversion breakpoints. Alternatively, the inversion(s) of the X′ may lead to complex pairing with the X, which may result in varying recombination rates along the chromosome and produce regions of differing divergence. Further resolution of the X′ structure will be required to determine the precise formation of the chromosome. Nonetheless, it appears that the rearrangements on the X′ accumulated rapidly less than 0.5 Ma. Expansion of the nonrecombining region through additional inversions may have adaptively captured female-beneficial alleles at nearby loci, as sex chromosome evolution theory posits (Charlesworth and Charlesworth 1980;Rice 1987;Charlesworth et al. 2005), although this may be hindered by genetic conflict between XX and X′X females. An alternative explanation for the stratification of the X′ is that sex determination relies on more than one locus, that is, it is polygenic, and that successive inversions have emerged to control the sex ratio. Among digenic Sciaridae, sex ratios vary significantly: In Bradysia ocellaris and Bradysia matrogrossensis, broods frequently depart from the expected 50:50 sex ratio and are often heavily skewed in either direction (Rocha and Perondini 2000). The sex ratio in B. ocellaris is heritable, and majority male production can evolve from majority female production and vice versa in as few as six generations (Davidheiser 1947). Taken together, these observations suggest that multiple loci are involved in sex determination. Monogenic Sciaridae presumably evolved from digenic ancestors, which may have occurred through the adaptive linkage of sex-determining alleles through inversions. The young age of the X′ indicates that repeated evolution of monogeny in Sciaridae may have been favored under certain circumstances over the ancestral digenic sex determination system. Evolutionary Perspectives on Monogenic Reproduction in Fungus Gnats Within the fungus gnat clade Sciaridae, origins of monogeny and the relationship between the monogenic and digenic reproductive strategies remain poorly understood. At least one other monogenic species, B. impatiens, is known to harbor an X-linked inversion polymorphism (Carson 1946). Monogeny also occurs in many other Sciaridae, including other Bradysia species, but also in more distantly related members of other genera such as Lycoriella and Corynoptera (Metz 1938). In this respect, our finding that the B. coprophila X′ chromosome evolved <0.5 Ma has intriguing consequences for understanding the evolution of this reproductive strategy. Unlike the B. coprophila X′, the B. impatiens X′ nonrecombining region is terminal (Carson 1946), suggesting that the X′ chromosomes in the two species may not be homologous by descent (alternatively, the region has expanded in B. impatiens or the terminal portion has reinverted in B. coprophila). Another possibility is that the X′ chromosome in B. coprophila may be older than our findings suggest but appears younger due to occasional recombination through gene conversion or double crossovers, which occur within large inversions (Navarro et al. 1997).
While crossing-over requires synapsis between chromosomes, gene conversion, that is, the nonreciprocal copying of stretches of sequence between sister chromatids to repair mismatch errors during replication, does not (Szostak et al. 1983;McMahill et al. 2007). Furthermore, in B. impatiens, dicentric chromatids were observed to form through pairing between the X and X′ along the length of the inversion (Carson 1946). If such pairing occasionally occurs in B. coprophila, it may prevent sequence divergence between the two chromosomes. Nonetheless, the distribution of monogenic reproduction among sciarids indicates multiple evolutionary origins. For example, within Bradysia alone, both monogenic and digenic species exist, and the same pattern is found within other genera (Metz 1938;Steffan 1974). If sex determination involves multiple loci, inversions may have emerged in some lineages to fix the production of sex-biased broods in a particular direction. However, this raises the question: What drives the turnover between reproductive strategies in this clade? Haig (1993) suggested that female production evolved as a response to a male-biased sex ratio. Fungus gnats carry a unique type of chromosome only found in the germline (germline-restricted chromosomes [GRCs]), in addition to their sex chromosomes and autosomes. The GRCs are disproportionately transmitted by males and so may have distorted the sex ratio in their favor. The presence of GRCs in Sciaridae does appear to correlate with monogenic reproduction, and many (but not all) digenic species lack them (Hodson and Ross 2021). In support of this is the observation that a monogenic lab-reared line of B. impatiens reverted to digenic reproduction following loss of its GRCs (Crouse et al. 1971). However, the function of the GRCs remains unknown. Interestingly, the GRCs of B. coprophila were recently found to have introgressed into Sciaridae following an ancient hybridization event with Cecidomyiidae, a clade that shares many features with Sciaridae including paternal genome elimination, GRCs, sex determination by chromosome elimination, and monogenic reproduction (Benatti et al. 2010). It is thus tempting to speculate that GRCs may have spread throughout Sciaridae via similar introgression events and that this may have also facilitated the spread of monogenic reproduction. Most of our knowledge about sex determination in Sciaridae comes from the study of several closely related Bradysia species (Metz 1938), though the earlier-diverging Trichosia splendens is also known to share the strange genetic features of Bradysia (Fuge 1994). These diverged sciarid genera have the same X-linked Muller elements (A and E), although Phytosciara flavipes has X-linked portions of other elements, which indicates there may be some different derived states (Anderson et al. 2022). It will thus be important to survey a wider sample of sciarid species to obtain a more comprehensive understanding of sex determination and sex chromosome evolution in this clade. Nonetheless, if female-determining inversions were to repeatedly evolve, individuals lacking inversions would be selected to increase their male production as an evolutionary response, with the expected result being that the X′X genotype is maintained at 50% in the population by frequency-dependent selection.
However, the fact that we observe X′ degeneration could mean that X′X females will, over time, have reduced fitness, which should favor the invasion of individuals capable of digenic reproduction, unless occasional recombination keeps the X′ from degrading significantly or DC mechanisms are able to evolve. Future work on the role of GRCs in sciarid sex determination, and on the relationship between the unusual genetic aspects across different sciarid species, will be required to elucidate the origins and turnover of sex determination strategies in this clade. Data Collection The B. (formerly Sciara) coprophila strain used in this study was obtained from the Sciara stock center at Brown University (https://sites.brown.edu/sciara/). Data were produced at Edinburgh University and the Carnegie Institution for Science in Baltimore. At Edinburgh, DNA was extracted using the Qiagen DNeasy Blood and Tissue Kit, modified for high-molecular-weight (HMW) extractions. DNA from 50 to 60 X′X heads (i.e., soma) was used for sequencing on the Illumina NovaSeq S1 platform for paired-end 150-bp reads with 350-bp inserts; DNA from 30 to 40 X′X heads was used for PacBio Single-Molecule, Real-Time long-read sequencing. DNA samples were quantified using the Qubit (Thermo Fisher). HiC data were sequenced from 50 whole X′X females, which were ground using a DiagoCine Powermasher II with a Biomasher II attachment; libraries were prepared and sequenced by Science for Life Laboratory in Stockholm, Sweden. Illumina data from males (X0) generated previously were used for the k-mer analysis (see below). At Carnegie, DNA was extracted using DNAzol (Thermo Fisher) from 20 to 36 pooled whole-body individuals, two replicates per genotype (X′X, XX, and X0), quantified with the Qubit, analyzed for purity with Nanodrop (Thermo Fisher), analyzed for HMW integrity with 0.5% agarose gel electrophoresis, prepared for sequencing using Illumina Nextera reagents, and sequenced on the Illumina NextSeq platform to generate 75-bp paired-end reads of ∼150- to 400-bp fragments. X0 PacBio data were from male embryos and were generated previously as part of assembling the original somatic reference genome (Urban et al. 2021). X0 HiC data were from male pupae and were generated previously for chromosome-scale scaffolding of the somatic reference genome (Urban et al. 2022). Illumina and HiC reads were adapter- and quality-trimmed using fastp v0.2.1 (Chen et al. 2018), and quality was assessed before and after trimming using FastQC (Andrews 2010). Analysis of X-X′ Divergence and Evolutionary Strata To identify SVs, 75-bp paired-end Illumina reads and PacBio reads from X′X and X0 samples were aligned to the X0 reference genome (Urban et al. 2022) with BWA-MEM (Li 2013) to force-map X′ reads to the X chromosome. SAMtools (Li et al. 2009) was used to sort, merge, and index BAM files prior to calling SVs from Illumina alignments with Smoove v0.2.8 (Layer et al. 2014;Pedersen et al. 2020) and from PacBio alignments with Sniffles v2.0 (Smolka et al. 2022). svtools (Larson et al. 2019) was used to convert variant files to bedpe files. To target fixed variants between X and X′, at least four reads were required to support a variant call. The R/Bioconductor (R Core Team 2022) package Sushi (Phanstiel et al. 2014) was used to plot SVs. HiC reads were aligned to the reference genome using Juicer (Durand et al.
2016), and HiC contact heatmaps were produced using the script HiC_view.py (Mackintosh et al. 2022). To call SNVs, the Illumina alignments were processed with Picardtools (Anon 2019) before calling variants with the GATK-4 best practices pipeline (McKenna et al. 2010;DePristo et al. 2011). The Python scripts parseVCF.py and popgenWindows.py (https://github.com/simonhmartin/genomics_general) were then used to parse the variant files and to calculate the density of SNVs (i.e., heterozygous sites) across 100-kb windows, respectively. The R (R Core Team 2022) change-point package bcp (Erdman and Emerson 2008) was used to identify breakpoints between putative evolutionary strata. Two methods were employed to estimate the ages of strata. We assumed a neutral mutation rate similar to that of Drosophila melanogaster, that is, between 2.8 × 10−9 (Keightley et al. 2014) and 4.9 × 10−9 (Assaf et al. 2017) per site per generation, and a 24- to 40-day generation time for B. coprophila (supplementary Text S5, Supplementary Material online). First, the density of heterozygous sites (i.e., the number of heterozygous sites divided by the number of homozygous and heterozygous sites) across 100-kb windows in each stratum (see above) was taken as a proxy for Dxy. Generations since divergence were calculated as Dxy/(2 × r), where r is the mutation rate. Second, we targeted only synonymous (neutral) mutations. SnpEff (Cingolani, Patel, et al. 2012) and SnpSift (Cingolani, Platts, et al. 2012) were used to annotate variants and count synonymous variants per gene. The script partitionCDS.py (Mackintosh et al. 2022) was used to annotate degeneracy for all genic sites. Divergence in generations for single-copy homologs within each stratum was then calculated as V/(2 × r × S), where V is the number of synonymous variants, r is the mutation rate, and S is the number of synonymous sites. Genome Assembly and Annotation A genome assembly was initially generated de novo from PacBio reads from X′X females, but only ∼3.6 Mb of sequence from this assembly could be assigned to the X′ inversion, because high sequence similarity between the X and X′ chromosomes and high read error rates (∼11-16% on average) caused them to collapse upon assembly (supplementary Text S2 and fig. S1, Supplementary Material online). To assemble the X′ inversion, putative X′-specific 27-mers were identified using KAT (Mapleson et al. 2016), counted using KMC3 (Kokot et al. 2017), and FASTA files of 27-mers were obtained using custom Python scripts. Cookiecutter (Starostina et al. 2015) was used to bin X′ and non-X′ (A + X) reads from the X′X Illumina reads, whereby reads are binned if they contain k-mers from a given k-mer library. The X′-specific reads were assembled using SPAdes (Bankevich et al. 2012) to yield 61.77 Mb spanning 42,564 contigs with an N50 of 10.32 kb and a largest contig length of 86.95 kb. These contigs were then stitched together with the reference-based scaffolder RagTag (Alonge et al. 2021), using the regular X chromosome as a reference (Urban et al. 2022), into a single 52.43-Mb scaffold containing 9,409 gaps. The X′ scaffold was then combined with the male X0 genome to produce an X′X genome. PacBio long reads from an X′X female were mapped to the X′X genome with minimap2, and gaps were plugged using Racon (Vaser et al. 2017).
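To make the stratum-dating arithmetic concrete, the short sketch below plugs the Dxy range reported in the Results (0.0006 to 0.0159) into T = Dxy/(2 × r) and converts generations to years using the 24- to 40-day generation time; the mutation rates are the two Drosophila values cited above. It is only an order-of-magnitude check written for this summary, not a reproduction of the authors' calculation, and the ranges it prints bracket the reported estimates of roughly 0.008 and 0.107 Ma.

# Back-of-the-envelope check of the stratum age formula T = Dxy / (2 * r),
# converted to years with the 24- to 40-day generation time quoted in the text.
MUTATION_RATES = (2.8e-9, 4.9e-9)   # per site per generation (Drosophila estimates)
GENERATION_DAYS = (24, 40)          # B. coprophila generation time range
DXY = {"youngest stratum": 0.0006, "oldest stratum": 0.0159}

for stratum, dxy in DXY.items():
    ages_ma = []
    for r in MUTATION_RATES:
        generations = dxy / (2 * r)
        for days in GENERATION_DAYS:
            ages_ma.append(generations * days / 365.0 / 1e6)
    print(f"{stratum}: {min(ages_ma):.3f}-{max(ages_ma):.3f} Ma")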
Illumina reads from an X′X female were subsequently mapped to the X′X genome with minimap2 (Li 2018), and Racon (Vaser et al. 2017) was used to polish the assembly. After gap-plugging and polishing, the X′ inversion scaffold totaled 54.75 Mb and contained 3,486 gaps. The genome was soft-masked prior to annotation. To this end, de novo repeat families were created using RepeatModeler v2.0 (Flynn et al. 2020), which were combined with known dipteran repeat families from RepBase (Bao et al. 2015). RepeatMasker v4.0 (Smit et al. 2015) was used to soft-mask the genome, which was then annotated using BRAKER2 (Stanke et al. 2008;Lomsadze et al. 2014;Hoff et al. 2016;Brůna et al. 2021) with RNAseq alignments and homology-based data sets (supplementary Text S6, Supplementary Material online). Functional information for the 26,887 protein sequences in the resulting BRAKER2 gene annotation set was then obtained by finding the best BlastP (Altschul et al. 1990) hits in several protein databases and with InterProScan (Jones et al. 2014, supplementary Text S5, Supplementary Material online). The reasonaTE module of the TranposonUltimate v1.03 pipeline (Riehl et al. 2022) was used to annotate TEs in the genome (supplementary Text S7, Supplementary Material online). Analysis of Functional Degradation Homologous X-X′ gene copies were identified with OrthoFinder (Emms and Kelly 2019). To identify disrupted X′-linked genes, genotyped variant files for X′X and X0 75-bp paired-end Illumina reads mapped against the Bcop_v2 (X, II, III, and IV) genome (as described above) were filtered for SNVs and indels using SnpEff (Cingolani, Patel, et al. 2012) and SnpSift (Cingolani, Platts, et al. 2012). SNV and indel counts for 2,321 single-copy homologs were then analyzed in RStudio (R Core Team 2022). Two replicates of each genotype (X′X and X0) were aligned, and only consensus variant calls between replicates were considered to identify fixed differences between X and X′ and exclude low-frequency variants. Variants specific to the X′ were identified by excluding common variants also found with X0 male data, which may represent polymorphisms on the X or errors in the X chromosome reference sequence (Urban et al. 2022). Genes with loss or gain of start or stop codons or indels causing frameshifts were counted as disrupted. To identify silenced genes, RNAseq reads were binned as X or X′ in origin using elements of a pipeline developed for (Marshall et al. 2020) to avoid mismapping between the two chromosomes (supplementary Text S8, Supplementary Material online). Expression of the 2,321 single-copy homologs was quantified using Kallisto (Bray et al. 2016), and counts were normalized with EdgeR (Robinson et al. 2010). Genes with zero counts of transcripts per million (TPM) and those with TPMs in the bottom 0.1% of nonzero TPM counts within a sample (to account for stochastic mismapping of RNAseq reads) were assumed to be nonexpressed. X′ genes were counted as "silenced" if the X copy was expressed but the X′ copy was not, and if genes were both silenced and contained pseudogenizing mutations, we counted them as "silenced and disrupted." Analysis and plotting of counts was carried out with R Studio (R Core Team 2022). To examine DC of X genes that have corresponding degraded X′ homologs, the same pipeline (supplementary Text S8, Supplementary Material online) was used to compare expression of X-linked genes between X′X and XX females (supplementary Text S9, Supplementary Material online). 
In X′X females, X-linked genes with degraded X′ homologs should have upregulated expression if they are dosage compensated to match expression in XX females. Upregulated genes were identified as those with a log2 fold-change (log2FC) of >0.5. Supplementary Material Supplementary data are available at Molecular Biology and Evolution online.
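The silenced/disrupted classification underlying figure 5B can be expressed compactly. The sketch below assumes toy inputs, per-gene TPM vectors for the X and X′ copies within one sample plus a flag for pseudogenizing X′ mutations, and applies the expression floor described in the Methods (zero TPM, or TPM within the bottom 0.1% of nonzero values, counts as not expressed). The data structures and function names are invented for illustration and are not the authors' pipeline.

import numpy as np

def expressed(tpm: np.ndarray) -> np.ndarray:
    """Mark genes as expressed: TPM above the bottom 0.1% of nonzero TPM values in the sample."""
    nonzero = tpm[tpm > 0]
    floor = np.quantile(nonzero, 0.001) if nonzero.size else 0.0
    return tpm > floor

def classify_homologs(tpm_x: np.ndarray, tpm_xprime: np.ndarray,
                      disrupted: np.ndarray) -> list:
    """Label each single-copy X/X' homolog pair, following the categories used in the text."""
    on_x, on_xp = expressed(tpm_x), expressed(tpm_xprime)
    labels = []
    for x_on, xp_on, broken in zip(on_x, on_xp, disrupted):
        silenced = bool(x_on) and not bool(xp_on)   # X copy expressed, X' copy not
        if silenced and broken:
            labels.append("silenced and disrupted")
        elif silenced:
            labels.append("silenced")
        elif broken:
            labels.append("disrupted")
        else:
            labels.append("intact")
    return labels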
Decolonization of Indigenous Knowledge Systems in South Africa: Impact of Policy and Protocols
This article analyses the protection of indigenous knowledge in South Africa, exploring if and how indigenous knowledge is aligned with the existing policy and protocol frameworks enacted by the government. Indigenous knowledge is mainly preserved in the memories of elders and shared through oral communication and traditional practices. The question arises: how can knowledge generated in indigenous knowledge systems research be recovered and protected so that it benefits indigenous knowledge owners and remains accessible for future generations? The study utilised a literature review to critically analyse the policy, protocols, and strategies relating to the protection and preservation of indigenous knowledge systems. Decolonial theory and a knowledge ontology and modelling framework were also used as underpinning theories to guide the study. Recommendations suggest the need to decolonize indigenous knowledge systems through a collaborative approach with indigenous knowledge holders and their communities, rooted in ancient wisdom but rearticulated by bright young people, not for self-promotion, consumerism, personal gain and greed but for community well-being and the respect and preservation of nature in all its manifestations. Academics, indigenous knowledge holders and their communities, and traditional and political leaders in government need to be prepared to dismantle the colonial infrastructure and its current manifestations by engaging in decolonial and anti-colonial strategies for the protection and preservation of indigenous knowledge. This study viewed digital preservation as a decolonial strategy to protect and provide long-term access to indigenous knowledge, and it provides ways in which indigenous researchers can share knowledge with participants for the benefit and acknowledgement of the communities or research participants as knowledge holders. Future research should thus focus more on developing indigenization and decolonization strategies. There is thus a need for increased funding and for capacitating information professionals in the digital preservation of indigenous knowledge.
INTRODUCTION
South Africa is one of the most diverse countries in the world and is often described as a "rainbow nation" in reference to the unity of its various cultural, racial and ethnic groups. It is also regarded as a rich repository of knowledge referred to as indigenous knowledge. Indigenous knowledge is the traditional, cultural and community knowledge produced and owned by local people in their specific communities and passed on from one generation to the next through practice and oral channels (Govender et al., 2013). In addition, Ngulube (2002) describes indigenous knowledge as mainly tacit and derived from local experiments, innovations, creativity and experiences, embedded in the minds and activities of communities with long histories of close interaction. This knowledge serves as the basis for problem solving, communication, teaching and decision-making in the indigenous communities where it is embedded (Furutnani et al., 2018). Indigenous knowledge has also been the basis for agriculture, education, health care and the wide range of other activities that have sustained societies and their environments in many parts of the world for many centuries (Senanayake, 2006).
Indigenist thinkers have advocated for the recovery and promotion of indigenous knowledge systems as important in decolonizing indigenous nations and their relationships with governments, whether those strategies are applied to political systems, governance, health and wellness, education, or the environment (Churchill, 1996). This knowledge therefore needs to be safeguarded at all times and decolonized for the benefit of indigenous communities (Sithole, 2007). Decolonization is recovery from colonial impact and the restoration of indigenous peoples' identities, languages and experiences (Datta, 2018). In this manner, indigenous communities can disentangle themselves from the oppressive control of colonizing state governments through policy and decolonized strategies (Simpson, 2004). Denzin et al. (2008) further described decolonization as a continuous process of anti-colonial struggle that honors indigenous approaches to knowing the world, recognizing indigenous land, indigenous peoples and indigenous sovereignty. Decolonial thinking has been used to critique the colonial-modern function of assumptions and knowledge forms that are deeply embedded in the discipline and the broader field of Western social science (Seth, 2013). Smith (1999) defines decolonization as a process for conducting research with indigenous communities that places indigenous voices and epistemologies at the center of the research process. For example, interest in research on African indigenous vegetables by the Agricultural Research Council (ARC) has been regarded as a decolonization process. Smith (1999) suggests that the process of decolonizing indigenous research will help regain control over indigenous ways of knowing and being, and over the ways in which research can be used for social justice. However, the apartheid regime, together with colonialism, led to the subjugation and suppression of indigenous ways of life (Heleta, 2018), because the colonialists considered indigenous knowledge systems backward and not worthy of any development when compared to other worldviews. There has also been a general disregard of indigenous knowledge systems amongst academics and scientists, and as a result the value of primary knowledge was strategically rejected among academics (Mji, 2019). Even indigenous researchers who were aware of the benefits and strengths of indigenous techniques remarked that they were afraid to admit an interest in this sphere of knowledge, for fear of being ridiculed by Western peers (Mji, 2019). Researchers thus require contextualized research processes that are relevant to the challenges of indigenous communities and contribute to their development, using acceptable indigenous research methods and theories. Indigenous decolonizing methods and theories suitable to African contexts include decolonizing research methodologies, Ubuntu and Afrocentrism. Decolonizing research methodology is an approach that is used to challenge the Eurocentric research methods that undermine the local knowledge and experiences of marginalized population groups (Smith, 1999). The Ubuntu concept expresses the African philosophy of humanness, that a person is a person through other people (Murithi, 2006). The Afrocentric approach is a philosophical and theoretical perspective that suggests cultural and social immersion, as opposed to scientific distance, as the best approach to understanding African phenomena (Mkabela, 2005).
Furthermore, Goduka et al. (2013) argued that for research to be relevant and to improve the quality of life of indigenous people, it needs to be rooted in indigenous worldviews, cultural values and languages. It is therefore prudent for indigenous knowledge researchers to appropriately align the ways in which they engage with communities so that they are respectful of and responsive to sociocultural contexts. There is also a need to follow proper etiquette and protocol when dealing with the people concerned (Gupta, 2010). This article thus looked into initiatives established in different parts of the world with the aim of protecting indigenous knowledge for future generations while also redressing the years of loss and suffering experienced by indigenous people. The article also describes the techniques through which this knowledge is shared within indigenous communities themselves and the ways in which indigenous researchers have attempted to share knowledge from research with participants for the benefit and acknowledgement of the research participants as indigenous knowledge holders or communities.
Problem Statement
It has been noted in the literature that much research on indigenous knowledge has been carried out by researchers without decolonizing the research (Datta, 2018; Keane, Khupe & Seehawer, 2017). Indigenous scholars argue that Western research without decolonization can be regarded as oppression of indigenous communities (Kovach, 2010). Researchers rarely think about sharing their indigenous research with the indigenous knowledge owners and their communities. Lincoln (1994) describes Western research as the rape model of research, in which the researcher comes in, takes what he wants and leaves when he feels like it. Indigenous knowledge researchers should thus engage in research not only to produce knowledge but also to make positive change in the lives of those who participate in research. As noted by Louis (2007), if research does not benefit the communities by extending the quality of life for those in the communities, it should not be done. Knowledge outcomes can be shared in ways that benefit the community through consultations with research participants (Keane, Malcolm, & Rollnick, 2004). However, indigenous researchers in the African context have still not done enough to redress the travesty of conducting research that does not benefit the indigenous knowledge owners and their communities. Indigenous knowledge thus needs to be decolonized and shared in ways that benefit indigenous owners and their communities (Sithole, 2007). The development of policy and support programmes provided by government for the protection of indigenous knowledge systems is also important in the field of previously suppressed indigenous knowledge systems, and indigenous knowledge system policy needs to be well understood by indigenous communities. The article thus looked into policy and protocols aimed at protecting indigenous knowledge and their impact in decolonizing indigenous knowledge systems in South Africa.
The research objectives formulated for this study were to:
• Examine international and national initiatives aimed at protecting indigenous knowledge;
• Establish the barriers to effective knowledge sharing among indigenous communities;
• Determine the techniques or strategies that are used for knowledge sharing to benefit indigenous knowledge owners and their communities; and
• Determine the impact of policy frameworks and protocols in the promotion and protection of indigenous knowledge for the benefit of indigenous holders and their communities.
The Impact of Colonialism on Indigenous Knowledge Systems
The history of colonialism is one of brutal subjugation of indigenous peoples, and most of the African continent is still underdeveloped and recovering from colonization (Blakemore, 2019). Blakemore (2019) defines colonialism as control by one power over a dependent area or people; it occurs when one nation subjugates another, conquering its population and exploiting it, often while forcing its own language and cultural values upon its people. Africa was rich in oil, copper, diamonds, gold and many other resources, which made European nations blind and cruel towards African people, whom they began to exploit in the most violent ways possible. European nations claimed that they were in Africa to boost local livelihoods, whereas they were there for monetary colonization and to ship resources home for their own troubled economies. Many African countries, including South Africa, have been compelled to import oil for their own use because their economies are narrowly tied to exports, and they are hard hit by higher oil prices. The forces of cultural genocide, colonization and colonial policy perpetuated over the last several centuries by successive occupying settler governments are responsible for the current state of indigenous knowledge systems (Simpson, 2004). The Natives Land Act (1913) reserved most of the land for white ownership, forcing many black farmers to work as wage labourers on land they had previously owned. When the act was amended in 1936, black land ownership was restricted to 13 percent of the country, and much of that land was heavily degraded over time. Colonialists regarded indigenous knowledge as primitive, uncivilized, backward, superstitious and savage (Briggs & Sharp, 2004), and this knowledge was viewed as irrelevant to development and as an obstacle rather than a force for change. Sillitoe (2006) argues that the premise used to confirm the insignificance of indigenous knowledge was drawn from modernity and dependency models. Hobart (2002) explains dependency theory as anchored in exploiting the resources and labour of local people, creating inequalities among people. However, Vázquez (2012) views coloniality as the darker side of modernity, since modernity can be seen to bring about democracy, globalization and liberalization. The impact of colonialism on indigenous knowledge systems includes environmental degradation, economic instability, ethnic rivalries and human rights violations, issues that can outlast one group's colonial rule (Blakemore, 2019). Colonialism therefore remains a current threat to indigenous communities and their land, and it continues to advance the oppression of the world's indigenous nations. Morgan (2003) further observed that as indigenous knowledge was suppressed, the Western system also allowed appropriation and exploitation of this knowledge for the benefit of the colonizers.
Briggs (2005) also observed that when indigenous knowledge is integrated into an education system dominated by the Western worldview, it occupies a lesser position than Western knowledge. This is because indigenous knowledge has to meet the standards set by science in order to be accepted. The education system continues to favour Western knowledge over indigenous knowledge (Wilson, 2004). Morgan (2003) concurs that Western worldviews are still dominant in higher education, despite efforts by various groups to push for the recognition of indigenous knowledge in such institutions. Morgan (2003) thus emphasized that those who were once colonized should initiate the decolonization, a process that allows the revaluing and recovering of what was lost.
CONCEPTUAL FRAMEWORK
Drawing from Ngulube (2018), the conceptual framework was derived from various components of theories, models and concepts embedded in the extant literature. Decolonial theory and a knowledge ontology and modelling framework have been adopted to guide this study.
Decolonial Theory
The study is grounded in decolonial theory in an attempt to promote and protect indigenous knowledge by countering the colonial forces that seek to displace this knowledge. Decolonial theory was found suitable for its ability to diagnose the problem of colonization, and it aims to situate the discourse within the episteme of indigenous philosophy. This theory was thus used as an underpinning philosophy, and it allowed for the examination of the epistemological inequalities that were created by colonialism and apartheid in South Africa. Decolonial theory does not refer to a single theoretical school, but rather points to a family of diverse positions that share a view of coloniality as a fundamental problem in the modern as well as the post-modern and information age (Maldonado-Torres, 2011). As observed by Tlostanova and Mignolo (2009), colonial systems seem to be beneficial to all, yet they are the cause of cheap labour, overexploitation of resources, and the suppression and exclusion of all that is found outside of reality as articulated by global powers such as Europe and the United States of America. African people acquired a bruised cultural identity and a philosophy of the oppressed that corrupted their thinking and sensibilities through contact with the West (Shizha, 2005). As noted by Maldonado-Torres (2007), coloniality continues to exist in education, the economy, culture and people's self-image as Western ideologies continue to dominate worldviews. For example, colonial languages such as English still hold a powerful position and dominate the space in most African countries, having been regarded as privileged languages. The effect of this perception has seen some black parents resorting to taking their children to English-medium schools, and most children who speak African languages at home switch to English as their primary language. This example serves to show that some people still believe that white people are superior and hence aspire to the white man's language. Decolonial theory rejects modernity, which is located on the oppressed and exploited side of the colonial difference, in favour of a decolonial liberation struggle for a world beyond Eurocentred modernity (Ramon, 2011). Ndlovu-Gatsheni (2013) described decoloniality as an epistemic project seeking liberation and freedom for the people who experienced colonialism and are living under the burden of global coloniality.
As noted by Ndlovu-Gatsheni (2013), the core of decoloniality is the agenda of shifting the geography and biography of knowledge to native people, who have the potential and knowledge to address their own vulnerabilities. Decolonial theory evokes the need to revalue and rework local epistemologies that have been rendered insignificant and unscientific by the West (Ndlovu-Gatsheni, 2013). Decoloniality thus recovers indigenous knowledge on contemporary ground and gives shape to new knowledge production for indigenous social practices such as health, education and governance (Nakata, Keech & Bolt, 2012). As a result, indigenous people can move towards applying local knowledge to achieve sustainable development and can reclaim their identity in the knowledge space through decoloniality.
Knowledge Ontology and Modelling Framework
Several scholars, such as Takeuchi (1995), Earl (2001) and Wiig (1993), to mention but a few, have developed theories, models and frameworks which help guide knowledge managers throughout the process of knowledge management. This study adopted the knowledge ontology and modelling framework by Haron and Hamiz (2014). The purpose of developing an ontological model is to share a common understanding and sensible structures of information, to make domain assumptions explicit, to provide a categorization structure and to enable reuse of domain knowledge (Haron & Hamiz, 2014). An ontology is an example of knowledge modelling which represents knowledge in a manner that a computer can process (Vassev, Hinchey & Gaudin, 2012). The application of a knowledge ontology and modelling framework also provides a clear path for the adoption of knowledge management systems in support of knowledge management and sharing. As pointed out by Almeida and Barbosa (2009), a declarative approach to ontology is needed for knowledge preservation because an ontology represents the domain in a structured manner and may provide benefits to those who implement it. One of the most important purposes of knowledge management is to systematically influence knowledge exchange, application and creation, thereby creating value (Kozhakhmet & Nazri, 2017). The role of ontology in knowledge management processes is to aid knowledge creation, acquisition, storage, transfer and application, together with performance improvement (Sasson & Douglas, 2006; Haron & Hamiz, 2014).
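To make the ontological modelling idea above concrete, the following is a minimal, purely illustrative sketch using the Python rdflib library. The namespace, class names (KnowledgeHolder, Community, TraditionalPractice) and properties are hypothetical and are not drawn from Haron and Hamiz (2014) or any other cited framework; the sketch only shows how a small indigenous knowledge domain could be represented declaratively so that the structure can be shared, queried and reused.

```python
# Illustrative sketch only: a tiny ontological model of an indigenous
# knowledge domain expressed as RDF/RDFS triples. All names are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

IK = Namespace("http://example.org/ik#")  # hypothetical namespace

g = Graph()
g.bind("ik", IK)

# Domain classes: a knowledge holder, a community, and a traditional practice.
for cls in (IK.KnowledgeHolder, IK.Community, IK.TraditionalPractice):
    g.add((cls, RDF.type, RDFS.Class))

# Properties linking the classes, making the domain structure explicit and reusable.
g.add((IK.belongsTo, RDF.type, RDF.Property))
g.add((IK.belongsTo, RDFS.domain, IK.KnowledgeHolder))
g.add((IK.belongsTo, RDFS.range, IK.Community))
g.add((IK.practises, RDF.type, RDF.Property))
g.add((IK.practises, RDFS.domain, IK.KnowledgeHolder))
g.add((IK.practises, RDFS.range, IK.TraditionalPractice))

# An example instance: a healer who holds knowledge of a medicinal-plant practice.
g.add((IK.healer1, RDF.type, IK.KnowledgeHolder))
g.add((IK.healer1, RDFS.label, Literal("Traditional healer (anonymised)")))
g.add((IK.practice1, RDF.type, IK.TraditionalPractice))
g.add((IK.practice1, RDFS.label, Literal("Medicinal use of a local plant")))
g.add((IK.healer1, IK.practises, IK.practice1))

print(g.serialize(format="turtle"))
```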
Knowledge Creation
Von Krogh, Nonaka and Rechsteiner (2012) describe the creation of knowledge as an ongoing procedure by which knowledge comes into existence through cooperation or individual effort and is refined and enhanced within a corporate system. Grimsdottir and Edvardsson (2018) stated that new knowledge is frequently created or engendered by innovative concepts or urgent needs, either arising within the organization itself or emanating from external market pressures. The knowledge creation process thus covers the stages of producing innovative knowledge, such as the application of figurative terms in which to render external knowledge (Grimsdottir & Edvardsson, 2018). Knowledge can also be created through education, interaction, practice and collaboration as the different types of knowledge are shared and converted, as noted by Frost (2010). The development of a software product is an example of knowledge creation (Wan et al., 2010).
Nonaka and Takeuchi (1995) identified the following knowledge creation processes:
• Socialization, in which knowledge is passed on through practice, guidance, imitation and observation;
• Externalization, in which tacit knowledge is codified into documents, manuals and similar artefacts so that it can spread more easily through the organization;
• Combination, in which codified knowledge sources (e.g. documents) are combined to create new knowledge; and
• Internalization, in which explicit knowledge is internalized, modifying the user's existing tacit knowledge.
Knowledge Acquisition
Dalkir (2005) defines knowledge acquisition as the stage at which knowledge is contextualised in order to be understood. It is the process of accepting knowledge from external sources for the purpose of using it in the organization (Pacharapha & Ractham, 2012). This is achieved by extracting, interpreting and transferring knowledge to improve existing organizational knowledge. As noted by Shongwe (2016), an acquisition process thus involves the access, gathering, location and capturing of knowledge from suppliers, participants, employers and other knowledge sources. Knowledge can also be acquired from repositories, by learning from others and by learning from experience.
Knowledge Storage
Knowledge storage refers to the existence and identification of information in the database of the organization (Shongwe, 2016). It is the codification of existing knowledge and know-how into organizational memory (Dalkir, 2005). It is therefore necessary to store knowledge in a safe place for future purposes, since knowers take their knowledge with them when they withdraw from the organization. Knowledge may be lost, particularly if it is tacit and is not properly managed or preserved. Shongwe (2016) added that knowledge can be stored manually in minutes of meetings, reports, policies and many other physical organisational documents, or electronically in organisational databases, portals and emails.
Knowledge Transfer
Gaura, Hongjia and Baoshan (2019) define knowledge transfer as the design and transmission of knowledge within an organization or between organizations to enhance the learning and productivity of workers, which is essential to the overall success of the organization. It is the process in which knowledge is shared or communicated to other individuals or groups within organisations through workshops, seminars, conferences, classrooms, meetings, face-to-face interactions or the use of technology such as conferencing software and email (Likalu et al., 2010). Sagsan (2009) stated that knowledge transfer requires knowledge sharing mechanisms that allow teams, departments and groups to share their tacit and explicit knowledge through technological and social communication infrastructure channels. Social communication refers to informal working settings and helps in transferring tacit knowledge to tacit knowledge, while technological communication infrastructure is useful for structuring data and transferring knowledge in a timely manner (Sagsan, 2009). Choo and de Alvarenga Neto (2010) identified four sets of knowledge sharing enablers, namely:
• Social/Behavioural: social relationships and interactions based on norms and values such as trust, care, empathy, attentive enquiry and tolerance.
• Cognitive/Epistemic: the need for both epistemic diversity and common knowledge, or shared epistemic practices and commitments.
• Information Systems/Management: the use of information systems and information and communication technologies, such as social networks, that enable knowledge sharing and information management processes to support knowledge activities.
• Strategy/Structure: the need for the organization and its management to provide direction and structure.
However, effective knowledge transfer is not complete without the use of knowledge. Knowledge transfer involves the donation and collection of knowledge, and donated knowledge that is not put to proper use may not yield any positive benefit (Adeyemi, Uzamot & Modupe, 2022). Knowledge managers thus need to develop strategies for effective knowledge transfer that will aid efficient knowledge use.
Knowledge Application
Knowledge application refers to the actual use of knowledge that has been captured and stored in organisational databases, or of the knowledge in people's heads (Shongwe, 2016). Knowledge application can help to transform knowledge from a potential power tool into actual innovations or inventions, which can enhance the overall performance of organizations (Matin et al., 2013). Knowledge use is particularly important to knowledge recipients, who employ knowledge for innovation and consequently improve organizational performance (Carvalho & Gomes, 2017). Gottschalk (2007) also emphasizes the necessity of using knowledge in organizational practices, processes and policies. Knowledge application occurs when available knowledge is used for decision making, problem solving and performing tasks through direction and routines (Becerra-Fernandez & Sabherwal, 2010). Direction refers to the process through which the individual possessing the knowledge directs the action of another individual without transferring to that individual the knowledge underlying the direction (Becerra-Fernandez & Sabherwal, 2010). Routines involve the utilization of knowledge embedded in procedures, rules, norms and processes that guide future behaviour.
RESEARCH METHODOLOGY
The article critically reviewed the literature in order to analyze the policy and protocols in the promotion and protection of indigenous knowledge systems, as a way of decolonizing these systems, using qualitative content analysis. Although content analysis has served mostly as a complement to other research methods, it has also been used as a stand-alone method, and there are some specialised forms of qualitative research that rely solely on the analysis of content. The study by Boamah and Liew (2017) on conceptualizing the digitization and preservation of indigenous knowledge was also based on content analysis. The qualitative content analysis method is defined as the systematic reduction of content, analysed with special attention to the context in which it was created, to identify themes and extract meaningful interpretations of the data (Roller & Lavrakas, 2015). Content analysis is suitable for analysing various kinds of qualitative and unstructured data, such as those collected during unstructured or semi-structured interviews or web-based documentary research. Like other analytical methods in qualitative research, content analysis requires that data be examined and interpreted in order to elicit meaning, gain understanding and develop empirical knowledge (Corbin & Strauss, 2008).
According to Ngulube (2017), conducting a literature review can also assist in developing a conceptual definition of a construct on the basis of shared meaning, describing what theory or theories were used to explain relationships among concepts, and establishing how the concepts have been measured in an empirical investigation. Addressing these questions may enable researchers to develop a conceptual system and check the coherence between the conceptual or theoretical framework and the various elements of the research design (Ngulube, 2017). The study also reviewed models, theories and frameworks pertaining to indigenous knowledge and the decolonization of this knowledge. The analytic procedure thus entails finding, selecting, appraising or making sense of, and synthesising data contained in documents. For the current study, content analysis was applied to review journal articles reporting on previous studies in the decolonization of indigenous knowledge systems, following the guidelines advanced by Kitchenham (2004). The review protocol was composed of the following elements:
• Inclusion/Exclusion Criteria
The inclusion criteria aim to identify studies that provide direct evidence about the research question (Kitchenham, 2004). The review process thus began with the researcher identifying and selecting documents on the basis of their usefulness and relevance to the study. The study reviewed literature and empirical studies reporting on previous work in the decolonization of indigenous knowledge systems, international initiatives in promoting and protecting indigenous knowledge, knowledge sharing strategies, barriers to effective knowledge sharing, and the impact of policy and protocols in protecting indigenous systems, whether positive or negative, in order to address the research objectives of the study. Literature searches on knowledge management and the digital preservation of indigenous knowledge in major databases such as ScienceDirect, Wiley, Springer, Sage and Google Scholar were conducted to ensure inclusion of all relevant studies in the literature review or content analysis. Only journal articles were included in the literature review; editorials and book reviews were excluded from this study as they do not include original research. Peer-reviewed journal articles represent a major mode of communication among researchers and were therefore taken as the unit of analysis.
• Search Strategy
The following search terms were used to find published articles reporting on decolonizing indigenous knowledge systems: decolonization of indigenous knowledge systems, initiatives in promoting and protecting indigenous knowledge, benefits of knowledge sharing, knowledge sharing strategies, barriers to effective knowledge sharing, and implications of policy and protocols in protecting indigenous knowledge. The search terms were used to collect data from related studies in EBSCOhost, Emerald, Springer, Wiley and Scopus, databases that provide access to publications in a variety of fields. Databases such as EBSCOhost allow the use of complex search strings and filters, which makes it easy to apply complex selection criteria, and they are therefore considered a suitable choice for systematic literature reviews (Wang & Noe, 2010). The study also made a more focused search on the digital preservation of indigenous knowledge, as it was regarded as a decolonial strategy.
• Study Selection
The researcher read the titles and abstracts of the generated articles and removed all duplicates, which considerably reduced the sample size. The selection criteria were that the study must be empirical, published in a peer-reviewed journal and focused on the decolonization of indigenous knowledge systems. Although many articles related to the study were generated, some were removed after thorough reading of all the articles, mainly because of their irrelevance to the topic of interest and research objectives or lack of quality.
• Data Analysis and Synthesis
The thematic analysis process developed by Braun and Clarke (2006) was used to systematically analyse the qualitative data or text extracted directly from previous studies on the decolonization of indigenous knowledge systems and the impact of policy and protocols in decolonizing these systems. The process of thematic analysis is outlined below:
• Familiarization with the data: familiarity was developed by reading the papers selected for review using the "repeated reading" approach to search for meanings and patterns. To remove any ambiguity, the extracted data were connected to the source paper to develop the contextual understanding helpful in data interpretation.
• Generating initial codes and themes: the coding process was driven by the research objectives, i.e. codes were developed by capturing the aspects of indigenous knowledge systems, knowledge sharing strategies and the impact of policy and protocols under investigation, which made it easier to assign relevant codes. After the completion of the coding process, all codes were reviewed and collated to generate potential themes relevant to the research objectives.
• Reviewing themes: all of the themes were defined and common characteristics in the themes were outlined, as per the objectives of this study, and this led to the development of higher-level themes composed of many sub-themes. For example, the decolonization of indigenous knowledge was a common thread connecting different themes, which led to the development of main themes such as initiatives in promoting and protecting indigenous knowledge, knowledge sharing strategies, knowledge sharing barriers and the impact of policy and protocols in protecting indigenous systems.
• Producing the written analysis: the analysis process resulted in the identification of the approaches to decolonization of indigenous knowledge and the impact of policy and protocols explored in previous studies, as well as the potential research gaps needing further investigation.
International Initiatives to Promote and Protect Indigenous Knowledge
Considerable efforts have been made globally over the past few years to promote and protect indigenous knowledge for the benefit of indigenous owners and their communities. Many of these initiatives were aimed at providing the necessary infrastructure and strengthening capacity for safeguarding indigenous knowledge in African countries. The United Nations (UN) (1992) formed the Convention on Biological Diversity (CBD), which provides for the recognition and protection of indigenous knowledge and states that each contracting party, subject to its national legislation, should encourage equitable practices. The UN also adopted the Declaration on the Rights of Indigenous Peoples, acknowledging the importance of, and the need to respect and promote, the rights and knowledge of indigenous communities (UN, 2007).
The United Nations Environment Programme (UNEP), which is the custodian of the Convention on Biological Diversity (CBD), has requested the World Intellectual Property Organization (WIPO) and the World Trade Organization (WTO) to consider the protection of, and benefit sharing with, local communities that have contributed to an invention or to intellectual property development (Gorjestani, 2000). WIPO (2002) established an Intergovernmental Committee (IGC) to initiate discussion on the protection of traditional knowledge, genetic and biological resources and folklore, and a sui generis system for the protection of indigenous knowledge, in order to address the preservation challenges using intellectual property systems. The United Nations Educational, Scientific and Cultural Organization (UNESCO) (2003) adopted the Convention for the Safeguarding of the Intangible Cultural Heritage, which recognises the need to safeguard knowledge and practices transmitted within communities.
African Initiatives to Promote and Protect Indigenous Knowledge
The African Department of the World Bank (1998) launched the Indigenous Knowledge (IK) for Development program in partnership with over a dozen organizations. This program has developed a number of instruments and services for the capture, dissemination and application of indigenous knowledge practices. These include the creation of an indigenous knowledge database of over 200 indigenous practices; a monthly publication called IK Notes, appearing in two international languages (English and French) and two local languages (Wolof, Swahili), with over 20,000 readers; and a multilingual website (Gorjestani, 2000). The program has also helped indigenous knowledge resource centres in other countries to improve their national and regional networking capacity. For example, Uganda received advisory and financial support to help draft a national strategy for the integration of indigenous knowledge into its national Poverty Eradication Action Program, as well as grant funding to build capacity for the implementation of the strategy (Gorjestani, 2000). Several efforts have been made, supported by the World Bank program, to support national strategies to mainstream indigenous knowledge in countries such as Uganda, Malawi, Tanzania, Cameroon and South Africa. These African countries have undertaken various activities to build on indigenous knowledge in agriculture, healthcare and education with the assistance of the indigenous knowledge program. For example, the Agricultural Research and Training Project (ARTP II) in Uganda explored the use of indigenous knowledge in agriculture, and a team interviewed communities and farmers in the Ugandan National Agricultural Advisory Services program to devise a performance monitoring system based on indigenous knowledge indicators (Gorjestani, 2000). In Malawi, the indigenous knowledge of farmers and fishermen is merged with scientific knowledge to improve the sustainable use of the Lake Malawi Basin resources. Other countries, such as Ghana, Kenya and Ethiopia, also initiated projects to promote medicinal plants as an integral part of health-related indigenous knowledge, to provide alternative sources of income, and to maintain and protect biodiversity. A global network of indigenous knowledge resource centres has emerged in Tanzania over the last ten years, and its members include academic institutions, Non-Governmental Organizations (NGOs) and researchers engaged in the documentation, dissemination and advocacy of indigenous knowledge (Gorjestani, 2000). In Cameroon, the US National Cancer Institute reportedly signed a contract with the government following the discovery of a forest plant species with a potential anti-AIDS chemical.
Traditional healers in Pangani district in Tanzania have treated over 2,000 HIV/AIDS patients using medicinal plants. The Uganda National Council for Science and Technology (UNCST) (1999) initiated a study to explore the potential of utilizing indigenous knowledge in the agriculture and health sectors. This was the basis for a national workshop involving policy makers, scientists, development practitioners and NGO representatives, traditional healers and farmers to draft a national strategy and framework for action (Gorjestani, 2000). The government of South Africa has also recognised the need to protect and preserve indigenous knowledge and enacted a policy which protects indigenous knowledge systems. The indigenous knowledge system policy identified various means of protecting indigenous knowledge in the South African context, including the intellectual property system, databases, sui generis laws and registers (Department of Science and Technology, 2004). The Department of Trade and Industry (2005) further initiated the Patents Amendment Act, 2005, which is being used at the WTO and WIPO as a legislative model; therefore, if indigenous knowledge is used in securing patents, the protection and benefiting of local communities may take place under patent law. Geographical indications may be used to protect and commercialise the names of both plants and animals that are peculiar to geographic areas, e.g. Nguni cattle (Gorjestani, 2000). Traditional healers may use patent law to protect and commercialise their traditional knowledge. The Department of Science and Technology (DST) (2011) in South Africa established a Bio-Innovation programme aimed at mainstreaming indigenous knowledge-based concepts within the national system of innovation and facilitating community-based technology transfer. The programme is supported by an Indigenous Knowledge Systems (IKS) Bill, a legal framework that ensures that collaboration between researchers, industry and communities is protected by law for the benefit of all parties. The programme is already engaged in multiple projects, including one at the University of KwaZulu-Natal, in partnership with the Makonde Indigenous Fruit Processing Association (MIFPA), which studied the use in the treatment of diabetes of marula formulations that have been used as medicine by communities in parts of KwaZulu-Natal, Limpopo and Mpumalanga for centuries. The National Research Foundation (NRF) (2012) is also supporting research projects by allocating funds specifically for indigenous knowledge systems in the fields of technology, health and food security. All these initiatives serve as evidence that both national and international organizations have so far made efforts to protect indigenous knowledge.
The Barriers to Effective Knowledge Sharing within Indigenous Communities
A critical analysis of why indigenous knowledge is becoming lost rarely moves beyond the assertion that the elders are dying or the assumption that indigenous knowledge systems are more vulnerable than Western systems simply because they are oral in nature (Simpson, 2004). One of the main reasons why indigenous knowledge has become threatened lies embedded in the crux of the colonial infrastructure. This infrastructure will continue to undermine efforts to strengthen indigenous knowledge systems and to harm the agenda of decolonization and self-determination unless it is properly dismantled and accounted for (Simpson, 2004).
Msuya (2007) emphasized that the benefits of indigenous knowledge should be returned to knowledge owners and suggested measures that can be taken to alleviate the challenges, including developing appropriate indigenous knowledge policies and practices. Indigenous knowledge is tacit knowledge that people should be willing to verbalize and share; however, indigenous people are not always willing to share this knowledge with people from outside their communities (Msuya, 2007). As stated by Wenger et al. (2002), the transfer of knowledge across cultural boundaries also creates additional challenges for collaborative learning in multinational and global organizations. The common ways of sharing knowledge, such as the transfer of knowledge from sender to recipient, are also based on an old-fashioned transmission model (Savolainen, 2017). Lack of awareness among policy makers about the historical value and significance of digital documentary heritage has also persisted for far too long. Culture is another major barrier to knowledge sharing: knowledge sharing fails in organizations because they tend to change their culture to suit their knowledge sharing strategies (Jain et al., 2007). Nadason et al. (2017) highlighted individual barriers to knowledge sharing such as fear, lack of time, low levels of awareness, lack of interpersonal skills, differences in level of experience, poor communication, and differences in education, gender and age. Nadason et al. (2017) also identified organizational barriers such as a lack of rewards, organizational culture, infrastructure shortages, lack of organizational resources and communication, lack of technical support, lack of integration of the Information and Communication Technology (ICT) system, unwillingness to implement the ICT system and lack of knowledge management system training. Lashgarara et al. (2011) also identified the absence of training on the use of knowledge management systems as a major problem. Lwoga, Ngulube and Stilwell (2011) identified other barriers that hinder the sharing of indigenous knowledge, including selfishness, the occurrence of conflicts within families, the use of conventional technologies and techniques that undermine knowledge sharing, and some indigenous knowledge holders requiring payment to share their knowledge. Other challenges to effective knowledge sharing include insufficient funding, the complexity of ownership protocols, loss or misappropriation of digitized indigenous knowledge, lack of or limited skills, inadequate infrastructure, protection of intellectual property rights and the unreliability of preservation media (Akinwale, 2012; Sithole, 2007). Traditional medicinal knowledge is also practiced in secrecy. This knowledge is only shared by senior traditional medicinal knowledge owners with many years of experience, who stay with trainees for the duration of their training (Khumalo et al., 2018). A study by Wanakwakwa et al. (2013) on traditional medicinal knowledge indicated that indigenous knowledge owners preferred to keep their knowledge secret, and any attempt to breach that secrecy was fined to discourage efforts to steal this knowledge. The mode of transference is verbal, and this knowledge thus needs to be captured, documented and preserved for future use.
However, the owners of this knowledge are generally reluctant to have their knowledge and skills documented and to share their drug sources, materials, methods and implementation procedures with the general public. Therefore, it is usually close family members who imbibe this knowledge from its owners before the owners die (Anyaoku et al., 2015). Anti-colonial strategies for the recovery and sharing of indigenous knowledge systems, and a critical analysis of colonialism, are thus required. Wenger, McDermott and Snyder (2002) described the role of culture as often seen in the relationship between shared practices, knowledge sharing and shared identity development. Knowledge sharing is the ability to transfer framed experiences, information and expert insights into practices (Wiewiora, Trigunarsyah, Murphy & Coffey, 2013). It is the process of transferring experience and organizational knowledge to business processes through communication channels between individuals (Oyemomi, Neaga & Alkhuraiji, 2016). Indigenous knowledge is communicated orally and can thus be shared through the use of traditional techniques or methods such as oral traditions, communities of practice, storytelling, and so on.
Oral Tradition
Oral tradition comprises the collections and living memories of the past that have been orally transmitted, recounted and shared throughout a culture (Kargbo, 2008). Oral traditions may speak of a particular family, lineage, language or region, or serve as markers of distinct indigenous identities. To qualify as oral traditions, these traditions comprise varied cultural heritage practices and resources transmitted over generations by means of observation and word of mouth (Biyela, 2016). The information is typically passed on through acts of story-telling, speech or song that are both literal and metaphorical, as they verbally reconstruct connections with the past (Johnson, 2007). As observed by Bruchac (2014), indigenous peoples may combine the narration of these traditions with other activities, such as ritual practice, music, art, rock carvings and mock combat, that ritually re-enact or engage with ancestral beings or other creatures. Oral traditions, whether communicated as historical narratives or mythical stories, constitute a form of traditional knowledge that can teach, carry and reinforce other knowledges (Bruchac, 2014). Wilson (2015) described oral traditions as historical sources of a social nature, derived from the fact that they are not written, and their sharing depends on the power of memory of successive generations of human beings. Oral tradition is a technique of sharing indigenous knowledge which plays a role in facilitating a better understanding of one's history and background and in informing others about one's culture (Babatunde, 2015). These traditions can also include supernatural content: stories of encounters between human and non-human beings in the distant past, messages delivered by animal intelligences, spiritual visions and transformations (Cajete, 2000). Some of the most ancient oral traditions record the actions of other-than-human beings who moved glaciers, rivers and rocks, actively sculpting the indigenous homeland (Bruchac, 2014). Oral traditions can have both practical and ritual aspects. On a practical level, indigenous peoples have developed technologies that enable successful hunting and gathering, while ritual activities provide a means of communicating with elemental spirits and worldly beings that have intelligences of their own (Apffel-Marglin, 2011).
From the colonial era to the present, oral traditions have also been employed as a means of identification and a form of resistance to colonial domination (Vizenor, 2008).
Community of Practice
Communities of Practice (CoPs), or communal meetings, have become popular in recent years as a means of managing the human and social aspects of creating and disseminating information in organizations, and they are increasingly being looked at for sharing knowledge (Ardichvili et al., 2003). A community of practice is a group of people who share a concern, a set of problems or a passion about a topic and who deepen their knowledge and expertise in this area by interacting on an ongoing basis (Canadian International Development Agency, 2003). Ngulube and Mngadi (2009) further described a community of practice as a group of people who work together in a responsible way to share ideas. Wenger, McDermott and Snyder (2002) believe that CoP groups are the most versatile and dynamic sources of knowledge in an organization. A CoP is created by people who engage in a process of collective learning in a shared domain of human endeavour, such as a tribe learning to survive, a band of musicians trying to find new forms of expression, or a group of pupils defining their identity at school (Wenger, 2011). Creating a CoP is a way to share knowledge and experiences with others who are passionate about the same topic, and in return all members learn from one another. CoP members freely discuss the various situations they face and their aspirations, and they develop a unique action-oriented perspective and context-specific common practice (Canadian International Development Agency, 2003). A CoP is a vital component of a learning organization, and all members of the community benefit from participation in a community of practice as they become part of the collective process of learning and shared knowledge practices. CoPs can be virtual or physical and are tailored to members' needs. CoP members manage their tacit and explicit knowledge in a given field as effectively as they can, and these meetings allow members to generate new knowledge in response to specific problems and issues, to share specialized knowledge and to test new ideas (Canadian International Development Agency, 2003). However, as noted by Biyela (2016), although the sharing of knowledge through communal meetings has been practiced by communities for a long period of time, this practice has been disappearing for reasons such as lack of resources, lack of commitment and cooperation, and low levels of education.
Digital Preservation of Indigenous Knowledge as a Decolonial Strategy
Indigenous knowledge is regarded as undocumented knowledge that exists in the minds of community members and is passed from one generation to the other through word of mouth (Ebijuwa, 2015). This suggests the importance of preserving it for fear of it being lost. However, the successful transmission of indigenous knowledge depends on community elders' capability to pass on the knowledge before the full richness of the stories diminishes. Community elders must thus be respected and valued for the knowledge gained from sharing their stories. The number of community elders and indigenous knowledge holders who have a considerable amount of indigenous knowledge is, however, diminishing as they pass away, and this has become a concern to indigenous peoples and academics alike. Adeniyi (2013) also noted that African indigenous knowledge is poorly managed and that some ideas vanish once the custodians die.
There is a growing need to preserve indigenous knowledge, as indigenous communities around the world face ongoing threats to the survival of their traditional languages and cultures (Stevens, 2008). Concern over the loss of indigenous knowledge due to colonization and globalization, and over indigenous communities losing control of and rights over their knowledge, has also raised the need to preserve this knowledge in digital formats. Several proponents have argued that if indigenous knowledge is not digitized and preserved, it will become unavailable for generations to come (Biyela, 2016; Oyelude & Haumba, 2016). It is therefore important to preserve indigenous knowledge so that it can be shared with others and passed on to upcoming generations. The availability of digital technology has greatly expanded the possibilities for digitization and preservation of indigenous knowledge. More recently, cultural heritage institutions (libraries, archives and museums) in African countries such as South Africa, Uganda, Nigeria and Ghana have been taking advantage of digital technologies to digitise and preserve heritage resources and create national repositories (Biyela, 2016). A digital heritage management system, the South African Heritage Resources Information System (SAHRIS), which integrates the processes of recording moveable (objects) and immoveable (sites) heritage resources with their management, was also developed, as mandated by the National Heritage Resources Act (NHRA) (SAHRA, 2012). The National Digital Heritage Archive (NDHA) in New Zealand, American Memory and the Australian Digital Collections are other examples of national digital memory projects that have been developed through collaborations with people from source communities and key stakeholders. However, as noted by Tobin (2004), there is a misperception in some indigenous communities that recording knowledge in registers and databases is a means of asserting rights of ownership over the knowledge. Tobin (2004) argued that placing the knowledge in a publicly accessible database can enhance its accessibility for bio-prospectors while giving little benefit to the holders of the knowledge. The documentation and preservation of indigenous knowledge must thus be part of a legislative system that recognizes rights over this knowledge. Proper knowledge management policy and procedures thus need to be implemented in the preservation of indigenous knowledge (Kaniki & Mphahlele, 2002). Sithole (2007) described preservation as an acceptable way to validate indigenous knowledge and grant it protection from bio piracy and other forms of abuse, and to help ensure that communities are not disadvantaged because of the unique beliefs and folkways that pattern their lives. Bio piracy is a practice in which a community's indigenous knowledge is plundered by outsiders for profit. Dewi and Susetyo-Salim (2017) described preserving knowledge as an effort to conserve the knowledge that people have so that it is not lost. Yadav (2013) identified various reasons for preserving indigenous knowledge: to assist in the conservation of the environment, to prevent bio piracy, to benefit the national economy and to improve the livelihoods of indigenous knowledge owners and their communities. Digital preservation has thus become a popular method for assisting in the recovery and protection of indigenous knowledge.
Some indigenous communities have perceived the need to preserve their knowledge as a means of asserting ownership over it and protecting it from illegal or unauthorized commercial use. Twarog and Kapoor (2004) noted the potential benefit of making indigenous knowledge accessible in a digital format as making it more appealing to youth or others who may see this knowledge as "old-fashioned". Documenting and preserving indigenous knowledge in an accessible format therefore increases the likelihood that indigenous knowledge owners' rights and perspectives will be considered in policy development and resource management. In addition, there can be economic benefits for indigenous communities who preserve their knowledge and possibly share it for commercial use. For example, some indigenous communities may wish to preserve indigenous knowledge related to plants so that pharmaceutical companies that use these plants for product development will recognize the prior use by these communities and benefit them accordingly (Stevens, 2008). Preserved indigenous knowledge ranges from written materials such as reports, manuscripts and field notes to media formats such as audio and video recordings, films, photographs, illustrations, paintings and three-dimensional artefacts. Cultural heritage institutions such as libraries, museums and archives can act as repositories of indigenous knowledge to ensure that it is accessible and usable to benefit indigenous knowledge owners and their communities. These institutions can thus assist in the collection and preservation of indigenous knowledge by publicizing its value, raising awareness about the protection of indigenous knowledge, involving elders and communities in the production of indigenous knowledge and encouraging the recognition of intellectual property laws to ensure proper protection (IFLA, 2015). Durst and Wilhelm (2012) acknowledge that libraries have played a major role in South Africa's national life and in the fight for democratic freedom through their political and cultural impact. Some of these institutions have adopted Virtual Communities of Practice (VCoPs) and social networks as tools for managing and sharing knowledge amongst employees and other stakeholders. A number of indigenous knowledge policy frameworks and initiatives have also been put in place by these institutions for safeguarding indigenous knowledge systems. Indigenous knowledge systems digitization projects implemented in different parts of the world include the Traditional Knowledge Digital Library in India and Native Web in the United States of America (Chikonzo, 2006). Academic institutions in South Africa have established Indigenous Knowledge Systems Documentation Centres (IKSDCs) to facilitate the digitization of indigenous knowledge systems. The Department of Arts and Culture (DAC) in South Africa also formulated a National Policy on Digitization of Heritage Resources that explicitly mentions the rights of indigenous communities and acts as a guideline for the digitization of heritage materials (Biyela, 2016). Cultural heritage institutions in South Africa should therefore continue building consultation and collaborative networks with various indigenous stakeholders in order to improve best practices. Collaborative efforts and networks would also enable the establishment of a database that provides access to different types of users, governed by rights of access to indigenous knowledge.
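A rights-governed database of the kind just described could take many forms. The following is a minimal, purely illustrative sketch in Python of one way a repository record for an indigenous knowledge item might encode community ownership, prior informed consent and tiered access rights. The field names, access levels and access rule are hypothetical assumptions and are not drawn from SAHRIS, the NRS or any other named system.

```python
# Illustrative sketch only: a repository record with community ownership,
# documented consent, and a tiered access rule. All names are hypothetical.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class AccessLevel(Enum):
    PUBLIC = 1            # open descriptive metadata only
    RESEARCHER = 2        # access under a signed benefit-sharing agreement
    COMMUNITY_ONLY = 3    # restricted to the originating community

@dataclass
class IndigenousKnowledgeRecord:
    title: str
    community: str                      # originating community (knowledge owner)
    consent_obtained: bool              # documented prior informed consent
    access_level: AccessLevel
    custodians: List[str] = field(default_factory=list)

    def may_access(self, user_role: AccessLevel,
                   user_community: Optional[str] = None) -> bool:
        """Grant access only with documented consent, and only to users whose
        role (or community membership) meets the record's access level."""
        if not self.consent_obtained:
            return False
        if self.access_level is AccessLevel.COMMUNITY_ONLY:
            return user_community == self.community
        return user_role.value >= self.access_level.value

# Usage: a medicinal-plant record restricted to vetted researchers.
record = IndigenousKnowledgeRecord(
    title="Medicinal use of a local plant (anonymised)",
    community="Example community",
    consent_obtained=True,
    access_level=AccessLevel.RESEARCHER,
)
print(record.may_access(AccessLevel.PUBLIC))      # False: role below required level
print(record.may_access(AccessLevel.RESEARCHER))  # True
```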
Cultural heritage institutions should also recognize their influence as socio-cultural agents and actively work with indigenous communities in protecting and preserving indigenous knowledge. These institutions also need to develop knowledge management systems and provide their patrons with high-quality information in a reasonable time. The Impact of Policy and Protocols in Decolonizing Indigenous Knowledge Systems Decolonizing research should be centred on indigenous values, policy and indigenous protocols. Theories, policy, protocols and initiatives for the protection of indigenous knowledge were thus reviewed in this article. The review of these policies and protocols helped in understanding whether indigenous knowledge policy supports indigenous knowledge practices. The government of South Africa has countered bio-piracy of indigenous knowledge and resources by passing laws which protect indigenous knowledge systems (Masango, 2010). The government has also fulfilled its commitment in the health sector by acknowledging traditional healers and medicinal plants through integrating indigenous knowledge systems into health policy (South Africa, 2008a). It has been using indigenous knowledge systems such as biotechnology to develop and improve indigenous natural resources for the socio-economic development of South Africa (South Africa, 2012). Several departments, such as the Department of Science and Technology (DST) (2004), approached the cabinet to pursue the formulation of an indigenous knowledge policy. The Protection, Promotion, Development and Management of Indigenous Knowledge Act 6 of 2019 has been promulgated in South Africa to protect indigenous knowledge from unauthorised use and misappropriation, to regulate the equitable distribution of the benefits of the use of indigenous knowledge and to provide for the documentation of indigenous knowledge. The Department of Trade and Industry (DTI) (2010) developed a national intellectual property policy which explains how indigenous knowledge can be protected through the use of patents, trademarks, designs and copyrights. The DST (2013) established the National Indigenous Knowledge Systems Office (NIKSO), aimed at protecting the intellectual property rights of indigenous communities and ensuring the equitable sharing of resources. Although protocols for handling indigenous knowledge help to uphold its interests by developing standards for ethical professional practice, they do not provide legal protection for indigenous communities, nor do they provide any legal framework for those who want their cultural and intellectual property rights protected. Therefore, developing standards for ethical professional practice among indigenous communities is about managing the risks associated with breaches of intellectual property rights. The DST (2013) also developed the National Recordal System (NRS), aimed at recording indigenous knowledge and bridging the chasm between indigenous knowledge production and Western knowledge systems. Based on the content analysis undertaken in this study, intellectual property systems in South Africa are not entirely compatible with the nature of indigenous knowledge and do not provide effective protection to indigenous knowledge. Much of the indigenous material in cultural heritage institutions in South Africa remains subject to relevant copyright laws. 
In many cases the institution is the owner of copyright; in others, copyright is owned by the individuals or entities which created the particular work or material (Andrzejewski, 2010). Most knowledge in developing countries is thus not legally protected, and this leaves much of the indigenous knowledge in developing countries open to bio-piracy and other forms of misappropriation (Msuya, 2007). Indigenous knowledge is thus used without the consent of the indigenous people, who are also given no acknowledgement for their work. For example, the hoodia plant has been patented for medicinal purposes and there was no recognition or compensation given to the indigenous Kalahari community that shared this knowledge with the wider world; this is a clear example of bio-piracy (Msuya, 2007). Some developing countries have proposed that before patents are awarded to applications relating to indigenous knowledge, the country of origin and the indigenous knowledge used in the invention must be disclosed, and proof of prior informed consent obtained through the relevant authorities in the country of origin must be provided (Ndinda, 2011). It has also been suggested that a sui generis approach be adopted, under which adding information to a database would automatically constitute a legal claim over it. A sui generis system is the preferred method for effectively protecting indigenous knowledge and has been favoured by international organizations. There are two forms of sui generis systems used in protecting indigenous knowledge, namely positive protection and defensive protection (World Intellectual Property Organization [WIPO], 2002). Positive protection consists of declaring the rights of indigenous knowledge holders and indigenous communities. The protection should empower them to control and manage their indigenous knowledge and also afford them the right to restrain any unauthorized use and exploitation of such knowledge. This approach has also been used in countries such as Peru, Costa Rica, Portugal, Thailand, Venezuela and Bolivia (Biyela, 2016). The recognition of customary law in national legislation is also a form of positive protection of indigenous knowledge, and adopting this approach in South Africa would mean availing its databases to the international community. Defensive protection aims to stop unauthorized parties from using indigenous knowledge. For example, India has used the defensive approach to protect its indigenous knowledge (WIPO, 2002). Although positive protection seems to be the most favoured approach and has been adopted by different national initiatives, a combination of both positive and defensive protection is, however, recommended in order to provide effective protection and preservation of indigenous knowledge in South Africa. CONCLUSION AND RECOMMENDATIONS The national and international initiatives, policy frameworks and protocols were reviewed in this study in order to understand the protection of indigenous knowledge for the benefit of indigenous knowledge holders and their communities. Considerable progress has been made in promoting indigenous knowledge, and recognition of this knowledge is increasingly becoming part of the development agenda. National and local initiatives, projects and programs are emerging and increasing, policy and protocols are being developed, and civil society groups are forming a broad base of support. Yet some substantial challenges remain. 
The South African government needs to ensure that indigenous knowledge policy and protocols can be easily located and understood by indigenous knowledge owners, communities and researchers. Cultural heritage institutions should also play a role in the documentation and preservation of indigenous knowledge and in developing policy that can guide the preservation of indigenous knowledge. Cultural heritage institutions should also build partnerships and collaborative networks with various indigenous communities and relevant stakeholders in developing digitization and preservation projects that communities can use as tools for social development and for disseminating community-collected knowledge. Knowledge outcomes must be shared in ways that benefit indigenous communities, through consultations with research participants and culturally relevant processes and protocols. This study recommends that:
• Digital technologies should be applied by indigenous knowledge owners and their communities to preserve and disseminate indigenous knowledge for the use of future generations.
• Cultural heritage institutions should educate or provide training to indigenous knowledge owners on how to use digital technologies to preserve their knowledge.
• The South African government should review policy and laws on indigenous knowledge systems to ensure that they meet international requirements for the protection of indigenous knowledge.
• International initiatives and foreign national laws may be used as guidelines for the protection of indigenous knowledge in South Africa.
FUTURE RESEARCH AND IMPLICATIONS Knowledge and cultural manifestations change, but the values and worldview need to be recognized and appreciated so that they can be re-expressed in creative and relevant ways for our 21st-century spaces (Keane, Khupe & Seehawer, 2017). As stated by Keane, Khupe and Seehawer (2017), we dream into a future that rests in ancient wisdom but is rearticulated by bright young people, not for self-promotion, consumerism, personal gain and greed, but for community well-being and the respect and preservation of nature in all its manifestations. Academics, indigenous knowledge holders and their communities, and traditional and political leaders in government need to be prepared to dismantle the colonial legacy and its current manifestations by engaging in decolonial and anti-colonial strategies for the protection and preservation of indigenous knowledge. This study viewed digital preservation as a decolonial strategy to protect and provide long-term access to indigenous knowledge, and to provide ways in which indigenous researchers can share knowledge with participants for the benefit and acknowledgement of the communities or research participants as knowledge holders. Future research should thus focus more on developing indigenization and decolonization strategies. There is also a need for increased funding and capacity building for information professionals in the digital preservation of indigenous knowledge. Indigenous knowledge research should also follow a community-based co-design approach grounded in the philosophies of participatory design and action research, adopting fundamental principles of Afrocentricity and Ubuntu such as humanness, connectedness and consciousness.
2022-09-30T15:14:34.026Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "f8f3d978171ea80836be224abf7db6344aa89163", "oa_license": null, "oa_url": "https://doi.org/10.4018/ijkm.310005", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "cf92131854d3e08be02630bd1dac6030556061b0", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Computer Science" ] }
40502125
pes2o/s2orc
v3-fos-license
Magnetic susceptibility of liquid 3He. 3He is a model Fermi liquid: it is isotropic, its Fermi temperature is attainable, and the interaction between atoms can be controlled by changing the pressure on the liquid. In this paper we present accurate cw-NMR measurements of the nuclear magnetic susceptibility of liquid 3He as a function of temperature and pressure. The emphasis has been placed on reliable thermometry, on 3He pressure measurements made directly in the cell to extend the measuring range up to solidification, and on an accurate characterization of the NMR spectrometer. Our measurements give effective Fermi temperatures substantially lower than former results. Introduction One of the most important research subjects in modern physics is the correlated-fermions problem. Lev Landau's theory [1] provided a phenomenological framework for the description of the Fermi liquid in terms of quasi-particles, a fundamental concept in modern condensed matter physics. In particular, Landau's theory could successfully describe the quantum fluid properties observed in liquid 3He at very low temperatures, establishing a correspondence between the thermodynamic and transport properties of the Fermi gas and those of the interacting system [2]. Liquid 3He has been extensively investigated since 1949. The well-known Landau parameters were obtained by measuring compressibility, heat capacity, thermal expansion, thermal conductivity, etc. Its nuclear magnetic properties were investigated by NMR techniques, and a strong deviation from the Curie law, indicating the onset of the degenerate regime, could be observed by Fairbank and collaborators [3]. Many investigations followed these pioneering works. Today, liquid 3He is considered the canonical Fermi liquid, and Landau's theory is firmly established. However, a microscopic theory is still missing, in spite of the considerable amount of work done during the last decades. Predicting, or even calculating numerically, the Landau parameters from first principles is still a long-term objective. Liquid 3He, as a model system, plays an important role, serving as a benchmark for microscopic calculations. For this reason, it is important to have accurate experimental results on its properties. In addition, the coherent quantum states observed in fermionic systems at very low temperatures, like the superfluidity of liquid 3He, depend crucially on the Fermi liquid properties. Again, knowledge of the Landau parameters is essential for a quantitative understanding of these systems. We present in this article a brief account of the extensive measurements of the nuclear susceptibility of liquid 3He performed in Grenoble. The data cover the whole pressure and temperature range. Particular care has been taken to achieve the best accuracy in all the measured parameters. Substantial deviations have been found with respect to earlier measurements. The consequences are important, as in 1986, when the heat capacity data of D. S. Greywall led to a major change of the accepted values of the Landau parameters and the temperature scale. The Landau parameters inferred from our work, in addition to the improved accuracy, display a very smooth dependence when plotted as a function of the molar volume or pressure. Former data, on the other hand, show a sudden change in slope at intermediate pressures, a feature which caused difficulties in the comparison of the experiments with theoretical works. 
Earlier experimental work and reference experiments Several articles on the nuclear magnetic susceptibility of liquid 3He can be found in the literature. In the sixties, the experiments were limited to rather high temperatures [4,5,6], and their accuracy was poor: discrepancies of more than 15% were common. The measurements performed in 1970 by Thompson et al. and Ramm et al. [7,8] were considered for many years as the reference measurements for the magnetic susceptibility of liquid 3He and the related magnetic Landau parameters. In 1991 Nacher et al. observed inconsistencies between a measurement of the nuclear susceptibility of liquid 3He and the pure liquid 3He reference data extrapolated to zero pressure [9]. Soon after, in 1992, Hensley et al. published new data on the susceptibility of liquid 3He which agreed well with the 1970 results [10]. However, in measurements of the nuclear susceptibility of liquid 3He confined in aerogel, performed in Grenoble during the PhD thesis of Chen et al. [11], the bulk liquid 3He contribution was found to be clearly at variance with earlier measurements of this quantity. This led us to reinvestigate the magnetic properties of bulk liquid 3He. Our first results [12,13] confirmed the existence of a large discrepancy with refs. [7,8]. A large inconsistency was also observed by H. Bozler and coworkers [14,15] using SQUID techniques. Experimental results Our objective was to measure the nuclear magnetic susceptibility of liquid 3He in the temperature range 5 mK to 2 K, for all pressures from the saturated vapor pressure to solidification. For this purpose, a high-power, very low temperature dilution refrigerator (built in the laboratory) was used. Substantial efforts were devoted to establishing high-quality thermometry, in the framework of a European research program, in liaison with metrological institutions. A sophisticated thermometric set-up, including a high-accuracy melting curve thermometer, a superconducting fixed-points device (SRD1000), a Coulomb blockade thermometer, and several carbon thermometers for temperature regulation and control, was attached to the mixing chamber of the refrigerator. The sample pressure was determined by a cryogenic pressure gauge (similar to a melting curve thermometer); this allowed us to measure the pressure at low temperatures even above the minimum of the melting curve, where the liquid sample is isolated by a solid 3He plug. The experimental cell is made out of Stycast 1266, and the radio-frequency coil is wound directly onto the cell. The liquid is confined in a volume of 2.7 mm diameter, and the length seen by the radio-frequency coil is on the order of 1 cm. Platinum wires of diameter 25 µm ensure the thermalization within the main volume, while sintered silver heat exchangers are used in the reservoir placed outside the radio-frequency field, connected to the filling capillary. 
The NMR measurements were performed at a frequency of 750.15 kHz, in a magnetic field of 23.1 mT provided by a superconducting magnet. The NMR lines (in-phase and quadrature signals) were recorded automatically as the magnetic field was swept by a computer-controlled system. Typical results are shown in figure 1, together with the results of other works. The calibration of the spectrometer sensitivity was done by determining the Curie constant of liquid 3He for temperatures above 1 K at different pressures, and also with solid 3He at a (melting curve) pressure of 3.2 MPa between 165 and 520 mK. The determination of the Curie constant was made with an uncertainty of 0.5%. Note that in addition to the large pressure dependence of the molar volume, the latter also displays a small temperature dependence. Corrections have been made in order to accurately determine the molar susceptibility from data obtained at constant pressure. From the susceptibility measurements one can obtain the effective Fermi temperature T_F**, which is a measure of the temperature below which degeneracy effects are noticeable. T_F** is smaller than the Fermi temperature T_F, in the ratio of the susceptibility enhancement with respect to the ideal Fermi gas of the same density. The effective Fermi temperatures obtained in this work are substantially smaller than those reported in previous works. Our data for the saturated vapor pressure, for instance, yield T_F** = 329 mK, to be compared to the currently accepted value of 359 mK [7,8]. At higher pressures, the discrepancy is even larger. Our results for the three pressures of figure 2 (0.2898, 1.811 and 2.906 MPa) are 291, 202, and 170 mK, respectively. We believe that the discrepancies observed between different authors are due to the considerable difficulty of the measurements. Although it may look simple to measure the magnetic susceptibility as a function of temperature and pressure, various problems arise when all quantities must be determined with an accuracy on the order of a percent. The nuclear susceptibility is small, especially at high temperatures (above 1 K), where the calibration of the NMR spectrometer is made in the Curie regime of the susceptibility. At millikelvin temperatures, the Kapitza resistance induces a thermal decoupling between the liquid 3He and the heat exchangers, which are necessarily placed outside the NMR radio-frequency coils. Note that radio-frequency leakage was common in old NMR spectrometers, and that the flat susceptibility of liquid 3He below 100 mK makes it difficult to notice a heating effect. In pulsed-NMR measurements, it was also difficult to check that the free induction decay time remained constant, therefore affecting the extrapolation of the signal to the origin of time. At intermediate temperatures, the thermal diffusion is poor. It is difficult to design a cell adapted to measurements over a large temperature range. Finally, measuring the temperature accurately from a few mK to several K is a challenge. 
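The displayed relation that should follow the sentence defining T_F** was lost in this copy. Read literally from that sentence, with χ(T→0) the measured low-temperature susceptibility and χ_0(T→0) that of an ideal Fermi gas of the same density, the intended definition is presumably

\[
T_F^{**} \;=\; T_F\,\frac{\chi_0(T\to 0)}{\chi(T\to 0)},
\]

so a larger susceptibility enhancement χ/χ_0 corresponds to a smaller effective Fermi temperature. This is a hedged reconstruction, not a formula quoted from the original article.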
Conclusion We have measured the magnetic susceptibility of bulk liquid 3He as a function of pressure, over the whole pressure and temperature range of interest. Our results have been carefully checked by making several independent NMR and thermometry measurements, in different set-ups [12,16]. Due to their accuracy, our data lead to a determination of the effective Fermi temperature, an essential parameter in the physics of this model Fermi liquid. In particular, with a careful analysis [16] we could derive the Landau parameter F_0^a for liquid 3He as a function of molar volume, and the new results are in excellent agreement with density functional theory. A detailed account of the experimental and theoretical results will be published elsewhere [17].
Figure 1. Normalized susceptibility of bulk liquid 3He as a function of temperature, at pressures around 2.9 MPa. The graph shows the large deviation observed between different sets of data (see References). Note that our results extend to lower temperatures and that the measured low-temperature susceptibility is substantially larger than reported in previous works.
Figure 2. Magnetic susceptibility of bulk liquid 3He (normalized by the molar volume and the Curie constant) as a function of temperature, for three pressures: 0.2898, 1.811 and 2.906 MPa. The error bars are smaller than the size of the points on this graph.
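For background, the connection between the measured susceptibility enhancement and the Landau parameter F_0^a mentioned above is the standard Fermi-liquid relation (textbook material, not a formula taken from this article):

\[
\frac{\chi}{\chi_0} \;=\; \frac{m^{*}/m}{1+F_0^{a}},
\qquad
\frac{m^{*}}{m} \;=\; 1+\frac{F_1^{s}}{3},
\]

so an accurate low-temperature susceptibility, combined with the effective mass m* obtained from heat-capacity data, fixes F_0^a at each molar volume.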
2019-04-04T13:03:55.231Z
2009-02-01T00:00:00.000
{ "year": 2009, "sha1": "8593183410d0e30045a4973d9129b89d3f8f07bd", "oa_license": null, "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/150/3/032024/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "862d525bd1191cb5451a44208e156b83577d6a3c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Chemistry", "Physics", "Materials Science" ] }
72756424
pes2o/s2orc
v3-fos-license
Rabbit hemorrhagic disease Rabbit Hemorrhagic Disease (RHD) is a rapidly lethal infectious viral disease of the European Rabbit (Oryctolagus cuniculus) characterized by high mortality rates, acute hepatic necrosis, and disseminated intravascular coagulation. Although this disease is considered enzootic in Europe and parts of Asia, it is rarely seen in the Western Hemisphere since its eradication from Mexico in 1992. In recent years, three cases of RHD have been identified in the United States. Due to the quick action of veterinarians these cases were confined and controlled before the disease could spread. Rabbits are raised for food, for pets, and as show animals worldwide. Rabbits are particularly important in Asia and central Europe, where small-scale rabbitries are common and an integral part of the culture. In this article an emerging disease, Rabbit Hemorrhagic Disease (RHD), is described. RHD already has had a major global impact and is a real threat to the rabbit industry and pet rabbits in the Americas. History Rabbit Hemorrhagic Disease was first recognized in China in 1984. The epicenter of the outbreak occurred in a group of commercially bred Angora rabbits recently imported from Germany to Jiangsu Province, China. These rabbits soon exhibited a contagious, rapidly fatal disease that spread over 50,000 square kilometers in 9 months, killing 470,000 rabbits. This disease was variously called "X-Disease of Rabbits," "rabbit viral sudden death," "picornavirus hemorrhagic fever in rabbits," "hemorrhagic septicemia syndrome in rabbits," and "infectious necrotic hepatitis of leporidae." Today, this disease is known as Rabbit Hemorrhagic Disease (RHD), Viral Hemorrhagic Disease of rabbits, or Rabbit Calicivirus Disease (RCD). Since 1984, RHD has disseminated widely and has been reported in over 40 countries from Asia, Africa, Europe, and the Americas. [1][2][3][4] Disease RHD is caused by the Rabbit Hemorrhagic Disease Virus (RHDV). RHDV is a member of the family Caliciviridae, genus Lagovirus, and is closely related to but distinct from the European Brown Hare Syndrome virus, which causes a similar disease in hares. 4 RHDV is readily disseminated. Virus is shed in the feces and nasal secretions of infected rabbits. Both ingested and inhaled virus can result in infection. The virus is stable in the environment. This allows RHDV to be spread by contact with contaminated caging, shavings, bowls, feedstuffs, and other fomites. The virus can also be passively transported by flies over short distances and can also travel by movement of people, equipment, wild or domestic animals, pelts, and rabbit carcasses. 3 The incubation period for RHDV is short, only 16 to 48 hours. RHDV is highly infectious and highly virulent. As a result, morbidity and mortality rates may reach 90% to 100%. Death usually occurs between 2 and 3 days postinfection, although some rabbits may live for several days before they die. The disease is confined to rabbits over 2 months of age. It is believed that maternal immunity protects infant rabbits from this disease. As the maternal immunity declines at 2 months of age, the rabbits become susceptible to RHDV. [1][2][3] Three forms of the disease are recognized, depending on the past history of the disease in the affected rabbit population. A peracute form of the disease occurs in naïve rabbit populations. In this form of the disease, rabbits die suddenly, exhibiting no or very few signs. 
The acute form of RHD is seen in rabbit populations where the disease is enzootic. These rabbits typically show some signs of disease before death. A subacute form of the disease is uncommon and occurs in the later stages of an epizootic. In this form of the disease, affected rabbits are clinically ill, but many survive. [1][2][3] Several clinical signs have been observed in rabbits both experimentally infected with RHDV and those that were naturally infected (Fig 1). Not all clinical signs were present in every animal. Animals may have a temperature above 104°F (41°C), show rapid respiration and cyanosis, and become anorexic and recumbent. Severe diarrhea is common. In the late stages of the disease, various neurological signs are apparent, including lateral recumbency, paddling, ataxia, and terminally frenetic behavior sometimes accompanied by squealing. Opisthotonos has been observed in many animals, and 20% of infected rabbits have a bloody, foamy discharge from the nose and, less frequently, from the vagina (Fig 2). Hematological findings in rabbits with the acute form of the disease include lymphopenia, thrombocytopenia, and alterations in the coagulation panel. Liver enzymes also are expected to be elevated. 2,3 Typically, rabbits dying of RHD are in good body condition and have recently ingested food. RHD causes a severe diffuse necrotic hepatitis and disseminated intravascular coagulation. The liver may be pale, yellow, gray, friable, or congested and have a distinct lobular pattern. Multifocal petechial hemorrhages occur in the liver, lungs, kidneys and heart (Figs 3 and 4). The spleen is often dark and engorged. Pneumotracheitis and tracheal hemorrhage are common, and jaundice has occasionally been noted. Lesions of the digestive tract are usually absent, although in some outbreaks a catarrhal enteritis is a feature of RHD. In cases of acute RHD the characteristic hemorrhages might not be present. [1][2][3] Microscopic lesions of the liver are characterized by an acute diffuse hepatic necrosis and hemorrhage with minimal inflammation. Round eosinophilic inclusion bodies are seen in hepatocytes. Lymphoid necrosis in the spleen and lymph nodes is another characteristic lesion of RHD. Glomerulonephritis, encephalomyelitis, and enteritis also may be seen histologically. 1-3
Figure 1. Rabbit with Rabbit Hemorrhagic Disease. This animal is severely depressed and poorly responsive. In a sudden outbreak of rabbit hemorrhagic disease, the duration of signs in rabbits before death is usually less than a day (Photograph kindly provided by Elizabeth Morales Salinas).
Diagnosis A presumptive diagnosis can be made on client history, clinical signs, and postmortem findings (Table 1). In North America and Mexico, if this disease is suspected, a regulatory veterinarian should be immediately contacted. At specialized facilities, a diagnosis can be confirmed by electron microscopy of fixed tissues or homogenates of fresh tissue, immunofluorescence, a hemagglutination assay, or a viral antigen detection ELISA. A complete set of tissues should be formalin-fixed for histopathology. Small intestine, lung, liver, spleen, and kidney samples should be saved fresh for virus detection. 5 Eradication Eradication is currently considered the best response to this disease when it is introduced to a country that does not have RHD. When infected rabbits are identified, all in-contact rabbits are destroyed. The carcasses, all wooden caging and housing, and all organic material are buried. 
Metal cages, food bowls, and glass water bottles are cleaned and then disinfected with a 2% sodium hypochlorite solution. Buildings that housed rabbits are power washed and then disinfected with sodium hypochlorite solution. Soil that may have been in contact with rabbits, rabbit feces or urine is treated with the sodium hypochlorite solution and then with calcium hypochlorite (lime). In the United States, sentinel rabbits were housed at the facilities and monitored for disease for 1 month before the facility was allowed to be repopulated. 1 Treatment Treatment is only an option in countries where this disease is enzootic. Because of the vigor and speed with which this disease kills rabbits, most rabbits are dead before any care can be administered. If rabbits are exhibiting clinical signs, isolation, supportive care, and symptomatic treatment are all that can be done. Quarantine and Sanitation In areas where RHD is enzootic, cleanliness is vital. The virus is inactivated with a 1% sodium hypochlorite solution. Rabbits should be housed indoors if possible. Stray animals should be kept away from rabbit housing and flying insects kept to a minimum. Access to the rabbitry should be limited, and caretakers should not have contact with rabbits from other facilities. Maximizing sanitation within the rabbitry, especially cleaning and disinfecting cages between rabbits, may reduce the chance of disease transmission. New stock should be quarantined in a separate facility for 4 months, as rabbits that have survived RHD shed virus in their feces for up to 4 months. 2,3 Immunization Vaccines against RHD are available in countries with enzootic RHD. There is a 6-month and a 1-year vaccine available. The vaccines are relatively safe, although illness, infections at injection sites, and death have been documented. RHD vaccines are not available in the United States and Canada at this time. The vaccine currently being used in Australia is Cylap HVD (Cyanamid, Spain), a killed vaccine. It is safe to consume a rabbit vaccinated against RHD. Rabbits can be immunized at 6 weeks of age or older. If the rabbit is less than 10 weeks old at the first vaccination, a second immunization should be given in 4 weeks. Animals demonstrating any signs of ill health should not be immunized. 6 RHD in Mexico The Americas were free of RHD until December of 1988, when it is thought that a shipment of
[Flattened table excerpt describing a common enteric disease of rabbits: the most common disease seen in clinical practice; signs can range from soft stool and brown watery diarrhea, with or without mucus or blood, to enterotoxemia and sepsis; severely affected rabbits become anorectic and depressed, then hypothermic and moribund, and die after 24-48 hours; occasionally a chronic form of the disease is seen, with rabbits intermittently having diarrhea, anorexia, and weight loss; post mortem findings include petechial and ecchymotic hemorrhages on the serosal surface of the cecum, sometimes including the appendix and colon; gas may also be present in the small intestines, cecum, and colon.]
United States There have been three outbreaks of RHD in the United States. The first was in an isolated rabbitry in Iowa in 2000. The premises were depopulated and additional cases were not found. The source of this outbreak was never determined. The second outbreak originated in Utah in 2001, but was first recognized in an Illinois facility that had received rabbits from the Utah source. Both facilities were depopulated and the disease eradicated. 
The source of this disease was not confirmed; however, individuals on the Utah facility had a history of traveling to countries where this disease was enzootic and may have brought the virus back on their clothing. The third outbreak occurred in a zoo in New York State. The veterinarian for the zoo suspected RHD and immediate action kept the disease from spreading from the zoo. The source of this outbreak is thought to be imported rabbit meat. 1 Conclusion RHD is a disease with a potentially devastating impact on the rabbit industry in countries where this disease has previously been excluded. As the economy becomes increasingly global and rabbit products from RHD-infected countries are imported into RHD-free countries, the chance for outbreaks increases. Veterinarians are the first line of defense against RHD and are vital to its control. If RHD is suspected at your clinic, you should immediately contact local or national regulatory authorities. 9
2019-03-10T13:04:16.390Z
2004-04-01T00:00:00.000
{ "year": 2004, "sha1": "30330102e10406777e872115b56658a17a789dea", "oa_license": null, "oa_url": "https://doi.org/10.1053/j.saep.2004.01.006", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "987fce2c13573429e3ac3fd1085e060cc64e0bb1", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
250091476
pes2o/s2orc
v3-fos-license
Emergency department triage and COVID‐19: Performance of the Interagency Integrated Triage Tool during a pandemic surge in Papua New Guinea Abstract Objective To determine the sensitivity of the Interagency Integrated Triage Tool to identify severe and critical illness among adult patients with COVID‐19. Methods A retrospective observational study conducted at Port Moresby General Hospital ED during a three‐month Delta surge. Results Among 387 eligible patients with COVID‐19, 63 were diagnosed with severe or critical illness. Forty‐seven were allocated a high acuity triage category, equating to a sensitivity of 74.6% (95% CI 62.1–84.7) and a negative predictive value of 92.7% (95% CI 88.4–95.8). Conclusion In a resource‐constrained context, the tool demonstrated reasonable sensitivity to detect severe and critical COVID‐19, comparable with its reported performance for other urgent conditions. Introduction The World Health Organization (WHO) recommends a systematic approach to the management of ED patients with suspected and confirmed COVID-19, including use of a 'standardised triage tool'. 1 The only triage instrument specifically named in WHO's guidance is the Interagency Integrated Triage Tool (IITT), a colour-coded, three-tier system recently developed by the WHO, International Committee of the Red Cross and Médecins Sans Frontières. 1,2 Although the IITT has demonstrated acceptable performance in a pre-pandemic context, its ability to identify COVID-19 patients with urgent care needs has not been assessed. 3,4 The present study sought to determine the sensitivity of the tool to detect severe and critical COVID-19 in a resource-constrained setting. Methods This retrospective observational study was a planned sub-study of a broader project evaluating the IITT. It was conducted in the ED of Port Moresby General Hospital (PMGH) in Papua New Guinea (PNG). The study was undertaken between September and November 2021, coinciding with the incursion of the Delta variant and the country's third wave (Fig. 1). The period was characterised by overwhelming demands for care, limited bed capacity and severe staff shortages. All patients aged ≥18 years who presented during the study period and were diagnosed with COVID-19 prior to leaving the ED were eligible. Patients were excluded if their triage category or illness severity was not recorded. The primary outcome was IITT sensitivity to identify COVID-19 patients with urgent care needs, defined as severe or critical illness based on PNG and WHO criteria ( Table 1). 1 By definition, these patients require oxygen therapy and may need other time-sensitive interventions. 1 Severity assessment was performed by the treating clinician and recorded on a clinical form, usually at the point of ED departure. The tool's specificity and negative predictive value were calculated as secondary outcomes. Although triage is primarily concerned with sensitivity (i.e. detecting all patients requiring urgent care), in a surge context, the ability of a triage tool to identify lowrisk patients who can safely wait is valuable. This allows for optimal use of resources, ensuring that well patients are not unnecessarily streamed to a high-acuity area. To assess these outcomes, a dichotomised triage categorisation was used, with red and yellow 'positive' and green 'negative'. To calculate specificity, patients with mild or moderate illness (i.e. no oxygen requirement, as per Table 1) were considered 'non-urgent'. 
Performance characteristics were expressed as percentages with 95% confidence intervals (CI). Results During the study period, 4346 adult patients presented to PMGH ED and 479 (11.0%) were diagnosed with COVID-19. Of these, 387 (80.8%) had triage and severity data recorded and were included in the analysis. The mean age was 46.5 (SD 14.1) and 161 (41.6%) were female. Discussion In this single-centre study, the IITT demonstrated reasonable sensitivity to identify patients with severe and critical COVID-19. The tool's specificity was sub-optimal but reflects that the purpose of triage is to capture all patients with urgent care needs, such that a degree of 'over-triage' is to be expected. The IITT has shown similar performance for urgent non-COVID conditions. Previous studies have determined the tool's sensitivity to be 70.8% and 77.8% for identifying patients with time-sensitive diagnoses such as acute coronary syndrome and ruptured ectopic pregnancy. 3,4 Collectively, these figures are within the performance range of other triage systems to detect critical illness. 5 For instance, a recent systematic review reported the sensitivity of established triage tools to identify severe sepsis as between 36% and 74%. 5 The sensitivity observed in the present study may reflect the duration of care at PMGH ED. During the study period, the average ED length of stay for COVID-19 patients was 37.2 h, so it is probable that some deteriorated between arrival (when a triage category was assigned) and departure (when their severity was documented). The lower specificity likely reflects the presence of 'red flag' symptoms (such as chest pain) among some patients with mild and moderate illness. Study limitations include incomplete data and under-reporting. Due to resource constraints, not all presentations are likely to have been entered into the registry. Notwithstanding these issues, the findings suggest that the IITT is detecting most COVID-19 patients who stand to benefit from timely assessment. Staff managing green patients should be aware that, based on the negative predictive value, approximately 7% will require escalation of care for severe illness. Conclusion In the resource-constrained context of PMGH ED, the IITT's sensitivity to identify COVID-19 patients with severe and critical illness was comparable with the reported performance of triage tools to detect time-sensitive conditions.
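To make the headline figure easy to verify, the sensitivity and its confidence interval can be recomputed from the counts reported above (47 of the 63 severe or critical patients received a red or yellow triage category). A minimal sketch, assuming an exact (Clopper-Pearson) binomial interval, which the article does not explicitly state it used:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided binomial CI for k successes in n trials."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

true_pos, false_neg = 47, 16      # severe/critical patients triaged red/yellow vs. green
n = true_pos + false_neg          # 63 severe or critical cases in total
sensitivity = true_pos / n        # 47/63 = 0.746
lo, hi = clopper_pearson(true_pos, n)
print(f"Sensitivity {sensitivity:.1%} (95% CI {lo:.1%}-{hi:.1%})")
# expected to be close to the reported 74.6% (95% CI 62.1-84.7)
```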
2022-06-29T06:17:59.444Z
2022-06-27T00:00:00.000
{ "year": 2022, "sha1": "7dcf20d188d0808547d02e64a2494118a7ccff9d", "oa_license": "CCBYNC", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "6c2442cd616226af561e90a0e1db3d58f88584d2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17761806
pes2o/s2orc
v3-fos-license
Relative Entropies for Kinetic Equations in Bounded Domains (Irreversibility, Stationary Solutions, Uniqueness) The relative-entropy method describes the irreversibility of the Vlasov-Poisson and Vlasov-Boltzmann-Poisson systems in bounded domains with incoming boundary conditions. Uniform-in-time estimates are deduced from the entropy. In some cases, these estimates are sufficient to prove the convergence of the solution to a unique stationary solution, as time goes to infinity. The method is also used to analyse other types of boundary conditions such as mass- and energy-preserving diffuse-reflection boundary conditions, and to prove the uniqueness of stationary solutions for some special collision terms. conditions for f will be presented. We shall however detail the case where the distribution function of incoming particles is prescribed. Notations. We recall that the spatial domain is denoted by !. From now on, we assume that ! is bounded and @! is of class C 1 . We shall denote by = ! IR d and ? = @ = @! IR d the phase space and its boundary respectively. Let d @! be the surface measure induced on @! by Lebesgue's measure. The outward unit normal vector at a point x of @! is denoted by (x). For any given x 2 @!, we set (x) = fv 2 IR d : v (x) > 0g and ? = f(x; v) 2 ? : v 2 (x)g : Finally, d (x; v) stands for the measure j (x) vj d ? (x; v) where d ? (x; v) = d @! (x) dv is the measure induced by Lebesgue's measure on ?. By a standard abuse of notations, we will not distinguish a function and its trace on the boundary. (H2) The external electrostatic potential is assumed to be in C 2 (!). Without loss of generality we assume that 0 0. (H3) The function has the following property Property P The function is de ned on (min x2! 0 (x); +1), bounded, smooth, strictly decreasing with values in IR + , and rapidly decreasing at in nity, so that sup x2! Z +1 0 s d=2 (s + 0 (x)) ds < +1 : We denote by ?1 its inverse function to IR extended by an arbitrary, xed, strictly decreasing function. (H4) The collision operator Q is assumed to preserve the mass R IR d Q(g) dv = 0, and satis es the following H-theorem D g] = ? Z IR d Q(g) 1 2 jvj 2 ? ?1 (g) dv 0 ; (2) for any nonnegative function g in L 1 (IR d ). (H5) We assume that D g] = 0 () Q(g) = 0 : The aim of this paper is to study the irreversibility of the system (1), the uniqueness of the stationary solutions and the eventual convergence to a stationary solution for large time asymptotics. The main ingredient is the derivation of a -dependent relative entropy of the time-dependent solution versus a stationary solution of the problem. In order to exhibit such a stationary solution, we introduce the map U, de ned on L 1 ( ) in the following way: for any function g 2 L 1 ( ), we denote by U g] = u the unique solution in W 1 Lemma 1.1 Let 0 and consider the set of admissible functions for Q FP; de ned by A 1 = ff 2 L 1 (IR d v ) : 0 f ?1 a.e. and v q f(1 ? f); r q f(1 ? f) 2 L 2 (IR d v )g Lemma 1.3 Assume that 2 L 1 and > 0 a.e. Then the operator Q E is bounded on L 1 \L 1 (IR d ). Moreover, for any measurable function and for any increasing function H on IR, we have Z IR d Q E (f) (jvj 2 ) dv = 0 and H(f) = Z IR d Q E (f) H(f) dv 0 : Finally, if H is strictly increasing, the three following assertions are equivalent: 1. H(f) = 0. 2. Q E (f) = 0. 3. There exists such that f(v) = (jvj 2 ). Consequently, any function having Property (P) satis es Assumptions (H1)-(H5). We shall see in Section 3 that the condition > 0 a.e. can be slightly weakened. 
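The entropy-dissipation formula in assumption (H4) above has not survived intact. Read in standard notation, with γ denoting the profile function of assumption (H3) (its symbol is missing in this copy) and γ⁻¹ its inverse, the functional D[g] in equation (2) presumably reads (a reconstruction, not a verbatim quotation):

\[
D[g] \;=\; -\int_{\mathbb{R}^d} Q(g)\,\Big(\tfrac12 |v|^2 \;-\; \gamma^{-1}(g)\Big)\, dv \;\ge\; 0 ,
\]

which for the Maxwellian profile γ(s) = e^{-s/θ} (so that γ⁻¹(g) = -θ log g) reduces to the usual Boltzmann entropy dissipation -∫ Q(g)(½|v|² + θ log g) dv ≥ 0.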
Example 5 : Electron-Electron collision operator. The Boltzmann collision operator Q B ee; , possibly including the Pauli exclusion term (case > 0) or the Fokker-Planck-Landau collision operator Q L ee; , namely The operator Q L ee; has been derived by Lemou 70] trough a grazing limit of the Boltzmann operator. A compatible in ow function takes the form (u) = + e (u? )= ?1 . In each of the above examples, for simplicity, the velocities are taken in IR d , but we could as well consider a setting for which v, dv and 1 1.3 Outline of the paper and references We rst deal with the irreversibility due to the boundary conditions and, eventually, the collision kernel (Theorem 2.1). In the one-dimensional case and under technical regularity assumptions, the large time limit solution of the Vlasov-Poisson system, which is overdetermined on the boundary, is then characterized as the unique stationary solution (Theorem 2.5). For several models with various collision kernels corresponding to the above examples, the stationary solution is also identi ed as the unique limit for large times of the Cauchy problem (Corollary 2.4). Without self-consistent potential, a uniqueness result (Theorem 2.6) allows to identify the asymptotic solution (Theorem 2.7) in a special case corresponding to boundary conditions which are not compatible with the collision kernel. Irreversibility driven by collisions is a well known topic 30]. On the opposite, the large time behaviour of solutions of the Vlasov-Poisson system is not very well understood. A scattering result due to Caglioti and Ma ei 25] is more or less the unique result (in the one-dimensional periodic case) which has been obtained up to now. The large-time asymptotics of the linearized version of the Vlasov-Poisson system is known under the name of Landau damping 32,51]. The instability of the so-called BGK waves has been studied in a series of papers by Strauss, Guo and Lin 62,63,64,65,71], while the nonlinear stability has been tackled by Rein and his co-authors 9, 10, 57,22]. Also see the much more di cult case of gravitational forces 92,94,58,60,61], and 23] for recent results in the presence of a con ning potential. Some extensions to the case of electromagnetic forces (Vlasov-Maxwell system) are also available. Without con nement in the whole space, dispersion e ects dominate for large times and asymptotics are more or less understood 68,86] although the description of the asymptotic behaviour is not very precise 48]. For bounded domains with specular re ection boundary conditions or unbounded domains with con nement 46,78,79], the stability results do not provide so much information on the solutions (which are time-reversible at least for classical solutions). Injection or di use re ection boundary conditions introduce a source of irreversibility which is the scope of our paper. We also consider the case of compatible collision terms. By compatible, we mean that the stationary solution determined by the boundary conditions belongs to the kernel of the collision operator, if there is any. This is a severe restriction for some collision kernels like the classical Boltzmann collision operator (only maxwellian functions are allowed), a case which has been studied a long time ago, at a formal level, by Darroz es and Guiraud 31]. There are other cases where compatibility is not as much restrictive, like in the case of the elastic collision operator. In case of uncompatible boundary conditions, again very little is known. 
Some existence results of stationary solutions have been obtained by Arkeryd and Nouri 89,4,3,2], but as far as we know, uniqueness is mainly open, and some of our results are a rst step in that direction. Technically speaking, we are going to use weak or renormalized solutions and trace properties of these solutions which have recently been studied by Mischler 41,78,79,80], and entropy functionals which are very close to the ones which are used for nonlinear parabolic equations 27,18]. There are some deep connections between entropies for kinetic equations and for nonlinear di usions, which are out of the scope of this paper. However, to illustrate this point, we will derive a di usive limit, at a formal level (see 55,74,52,13,33,91,83] for rigorous results). Further references corresponding to more speci c aspects will be mentioned in the rest of the paper. We will not provide all details for each proof and will systematically refer to papers in which details or similar ideas can be found. Some of the results presented here have been announced in a note 12]. This paper is organized as follows. In Section 2, we will develop at a formal level a strategy to study the long time behaviour. Namely, we will prove an entropy inequality for the Vlasov-Boltzmann-Poisson system with incoming boundary conditions and state its consequences on the long time behaviour and the stationary solutions. Section 3 is devoted to the application of the strategy to the various examples cited above. For uncompatible boundary conditions, the uniqueness of the stationary solutions of the equation corresponding to a special BGK approximation of the Boltzmann collision operator, when there is no self-consistent potential, and a corresponding large time convergence result are proved in Section 4. In Section 5, we extend the relative entropy approach to other types of boundary conditions. Technical results (proof of Theorem 2.5, statements on the Bolza problem) and general considerations (nonlinear stability, di usive limits and relations between relative entropies for kinetic equations and for nonlinear parabolic equations) have been postponed to Appendices A-D. Strategy and results In this section we shall expose our strategy for the study of irreversibility and the large time asymptotics. One of the main di culties is the lack of uniform in time estimates. For instance, the total mass is not conserved since particles are continuously injected into the domain. By introducing a relative entropy, we shall obtain a priori estimates and then use them in order to pass to the limit. All computations are done at a formal level. Rigorous proofs corresponding to the various examples of Section 1 are postponed to Section 3. Theorem 2.1 Assume that f 0 2 L 1 \ L 1 is a nonnegative function such that f 0 jM] < +1. Let f be a smooth su ciently decaying solution of (1) and assume that and Q satisfy Assumptions Z ! D f](x; t) dx (6) where D f] is de ned in (2) and + is the boundary relative entropy ux given by Relative entropy and irreversibility Here, smooth means for instance C 1 and su ciently decaying means that all integrations by parts involved in the formal computation below can be done rigorously. Depending on Q, weaker conditions will be required for f: see Section 3. For weak or renormalized solutions, the equality in (6) will be replaced by an inequality. Proof. 
We rst deduce from the Vlasov-Boltzmann equation (1) (8) We recall that the above identity requires the use of mass conservation @ @t + div x j = 0 ; where (x; t) = R IR d f(x; v; t) dv and j(x; t) = R IR d vf(x; v; t) dv, which in turn gives d dt 1 Taking the sum of (7) (in which = ) and (8), and noticing that u t Since gjh] is always nonnegative, the above theorem provides a uniform in time control on f(t). Like in whole space problems 7, 46,39], the relative entropy f(t)jM] provides a Lyapunov functional for the study of the large time behaviour, which can also be used to study the nonlinear stability 92, 22, 23] (see Appendix C). An important di erence with whole space problems and with previous studies of boundary value problems 26,19] is that the total mass is not conserved (see Section 5 for boundary conditions preserving the mass). The large time limit Integrating the entropy dissipation inequality with respect to time provides the following inequality (9) (this is an equality for classical solutions). Since the left hand side is the sum of three nonnegative terms (under Assumption (H4)), each of them is bounded by the right hand side. In order to investigate the large time behaviour of the solution (f; ), we consider an arbitrary increasing and diverging sequence (t n ) of positive real numbers and de ne (f n (x; v; t); n (x; t)) = (f(x; v; t + t n ); (x; t + t n )) : (10) It is clear from the above estimates that The last inequality provides a uniform in time estimate for f n as well as a uniform H 1 bound for n . The remainder of the method consists in proving that 1. According to the Dunford-Pettis criterion, up to the extraction of a subsequence, (f n ; n ) weakly converges in L 1 loc (dt; L 1 ( )) L 1 loc (dt; H 1 0 (!)) towards a solution (f 1 ; 1 ) of (1), 2. The limit function f 1 satis es sup t2IR f 1 (t)jM] C and Z IR + f 1 (s)jM] ds = ? Z IR Z ! D f 1 (x; ; s)] dx ds = 0 : Depending on the a priori estimates, (1) will be satis ed by (f 1 ; 1 ) either as a weak solution or even in the sense of renormalized solutions. Item 2 above allows to show that f 1 = M on ? + and that Q(f 1 ) = 0 under Assumption (H5). Therefore (f 1 ; 1 ) is a solution of and (x; t) = 0 ; (x; t) 2 @! IR ; sup t2IR f(t)jM] C : (11) Notice that the time variable t lies in the whole real line and that the boundary conditions on f 1 are overdetermined, since f 1 is given on the whole boundary ? and not only on ? ? . A second source of overdetermination for the system (11) is the condition Q(f 1 ) = 0 (when Q is not identically vanishing). As we shall see in Section 3, this program can be completed for each of the examples of Section 1. When Q 0, Q = Q E or 6 = 0 in Examples 2, 3 and 5, if f 0 is bounded in L 1 , f(t) is also uniformly bounded in L 1 , and we may easily pass to the limit. The other examples (including the case = 0) require additional work (using for instance renormalized solutions). Up to this question which is a little bit delicate, the irreversibility result of Theorem 2.1 provides a characterization of the large time limit that we can summarize in the following formal result (it is formal in the sense that we assume the convergence of the collision term, which is a property that has to be proved case by case). Corollary 2.2 Assume that f 0 2 L 1 \L 1 is a nonnegative function such that f 0 jM] < +1. Under Assumptions (H1)-(H5), consider an unbounded increasing sequence (t n ) n2IN . 
If (f n ; n ) de ned by (10) weakly converges to some (f 1 ; 1 ) in L 1 loc (dt; L 1 ( )) L 1 loc (dt; H 1 0 (!)) and if Q(f n ) D 0 ! Q(f 1 ), then (f 1 ; 1 ) is a solution of (11) (which belongs to the kernel of Q for any (t; x) 2 IR ! and is such that f j? +(x; v; t) = (jvj 2 =2 + 0 (x)) for any t 2 IR + , (x; v) 2 ? + ). Proof. We have to prove the convergence of r x n r v f n to r x n r v f n as n ! +1. If f n is uniformly bounded in L 1 , by interpolation (see 73,67]) with the kinetic energy, n = R IR d f n dv is bounded in L 1 (dt; L q (IR d )) with q = 1 + 2=d. Using the compactness properties of r ?1 , it is easy to pass to the limit in the self-consistent term. Without uniform bounds, one uses renormalized solutions 41, 78, 79, 80] and (11) only holds in the renormalized sense (compactness for n is a consequence of averaging lemmas). u t 2.3 Are the solutions of the limit problem stationary ? In this paragraph, we provide some rigorous results ensuring the stationarity of the solutions of the limit problem (11). If f 1 2 Ker Q depends only on jvj 2 (examples 2, 3 and 4), we apply the following Lemma 2.3 Let f 2 L 1 loc be a solution of the Vlasov equation in the renormalized sense. If f is even (or odd) with respect to the v variable, then it does not depend on t. The proof is straightforward. The operator @ t conserves the v parity while v r x ? (r x + r x 0 ) r v transforms the v parity into its opposite. u t Corollary 2.4 Let f be a solution of (11) with Q = Q E , Q FP; , Q , Q ee; + Q E , Q ee; + Q , Q ee; + Q FP; or a linear combination of these operators (with nonnegative coe cients). Then f does not depend on t, and is nothing else than the function M de ned in (4) under the additional assumption that there are no closed characteristics if Q = Q E . The proof is an immediate application of Lemma 2.3. Indeed, using the H-Theorem, we deduce that the kernel of a (nonnegative) linear combination of the above collision operators is equal to the intersection of the kernels. Therefore, any function f satisfying Q(f) = 0 is even with respect to v. Step 1 : the electric eld is repulsive at x = 0. Along the characteristics, the total energy satis es @ @t 1 2 jV j 2 + (X; t) + 0 (X) = @ @t (X; t) : As a consequence, there exists v M > 0 depending on k + 0 k L 1 and k@ t k L 1 such that the following for all x 2 (0; 1) and s 2 IR, ?1 < T in (s; x; v) < s < T e (s; x; v) < +1, We claim that this ensures the existence of a constant C 1 > 0 such that (x; t) = R IR d f(x; v; t) dv C 1 , which implies, thanks to the Poisson equation, the existence of a positive constant C 2 > 0 such that for any t 2 IR, @ @x (0; t) C 2 : To prove our claim, we rst deduce from the Vlasov equation and the boundary condition that f(x; v; t) = f(0; V in (x; v; t); T in (x; v; t)) = 1 2 jV in (x; v; t)j 2 : In view of the above estimates on V in and due to the decay of , we get the estimate (x; t) Z +1 v M 1 2 jvj 2 + C M dv =: C 1 > 0 : The conclusion then holds with C 2 = 1 2 C 1 using Step 2 : Analysis of the characteristics in a neighborhood of (0; 0; t). Since the electric eld @ @x is (uniformly in t) positive in a neighborhood of x = 0 + , there exists x M 2 (0; 1) such that for every x 0 2 (0; x M ) and every t 0 2 IR ?1 < T in (t 0 ; x 0 ; 0) < t 0 < T e (t 0 ; x 0 ; 0) < +1 and X in (t 0 ; x 0 ; 0) = X e (t 0 ; x 0 ; 0) = 0 : Saying that f is constant along the characteristics means f(X in ; V in ; T in ) = f(X e ; V e ; T e ). 
Besides, we deduce from the boundary conditions (11) that f(X in ; V in ; T in ) = 1 Theorem 2.6 Assume that 0 and consider two nonnegative solutions f 1 and f 2 of (13) such that for any (x; v) 2 , f i (x; v) F D (x; v) = + e ( 1 2 jvj 2 + 0 (x)? )= ?1 (for i = 1; 2). Then f 1 = f 2 . Note here that we do not make any assumption on 0 saying for instance that there are no closed characteristics. The proof of Theorem 2.6 is deferred to Section 4. Let us denote by f s the unique stationary solution of (13). A computation similar to the one of Theorem 2.6 provides the following result on large time asymptotics. Theorem 2.7 Assume that 0. If (14) is satis ed, then any solution of with an intial data f 0 such that 0 f 0 F D -weakly converges in L 1 ( ), as time tends to +1, towards the unique stationary solution f s of (13). bounded, then f(t) is bounded as well according to sup jf(t)j max(sup jf 0 j; sup ). The L 1 bound will be useful for passing to the limit. Throughout this section, we shall assume that (H6) f 0 2 L 1 x;v \ L 1 x;v ; jvj 2 f 0 2 L 1 x;v and f 0 jM] < +1 : Moreover, we require that (H7) Z +1 0 s (d+1)=2 (s) ds < +1 ; so that R IR d jvj 3 (jvj 2 ) dv makes sense, and we assume that (H8) 0 is bounded on ?A; +1) for any A > 0 : We shall rst prove two preliminary results and then state a theorem which covers the results of Theorem 2.1 and Corollary 2.2 at once. Proof. Since ?( ?1 ) 0 (u) = ?( 0 ?1 (u)) ?1 and ? 0 (u) C according to (15), we have, for any u 2 0; A], ?( ?1 ) 0 (u) 1=C. Consequently, These estimates allow us to prove rigourously a result on the large time behaviour for Q E . The existence of solutions can be found in 11,1,78,79,80]. In these references, stability results are also proved for renormalized solutions. Theorem 3.3 Assume that (H6)-(H8) hold and that is symmetric, measurable, nonnegative. The Vlasov-Poisson-Boltzmann system (1) with Q = Q E or Q = 0 admits a weak solution f 2 L 1 (IR + ) such that kf( ; ; t)k L 1 max kf 0 k L 1; inf @! 0 : The sequence (f n ; n ) de ned by (10) converges up to the extraction of a subsequence, -weakly in L 1 (IR + ) L 1 loc (IR + ; H 1 0 ( )), towards a solution (f 1 ; 1 ) of (11). Proof. The L 1 estimate is straightforward in case Q = 0. For Q = Q E , we may use the fact that The existence proof of f goes as follows. For a given , we remark that f has to solve @ t f + v r x f ? (r x + r x 0 ) r v f + f = Q + (f) : The mapping f 7 ! g de ned by ( @ t g + v r x g ? (r x + r x 0 ) r v g + g = Q + (f) ; g jt=0 = f 0 ; g j ? = ( 1 2 jvj 2 + 0 ) : is contractive in L 1 ((0; T) ). It has a unique xed point which can be computed with an iteration scheme. Starting the iteration procedure from a nonnegative initial point, the Maximum Principle is satis ed at each step, which implies that the solution f is nonnegative if f 0 0 and 0. On the other hand, for any K 2 IR, K ? f also satis es the same equation with f 0 and ( 1 2 jvj 2 + 0 ) replaced by K ? f 0 and K ? ( 1 2 jvj 2 + 0 ) respectively, which proves the Maximum Principle for a solution of (1). Let (f n (x; v; t); n (x; t)) = (f(x; v; t + t n ); (x; t + t n )) with lim n!+1 t n = +1 and consider the limit as n ! +1. According to (9) where fjg] = R h f log f g ? f + g i dx dv + 1 2 jrU f ? g]j 2 dx. In 80], it is proved that any sequence of renormalized solutions of the Vlasov-Poisson-Fokker-Planck system which satis es the above entropy inequality has a subsequence which weakly converges in L 1 towards a renormalized solution of the same system. 
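For readability, the relative entropy invoked in this entropy inequality (printed above only as a garbled inline expression) can be transcribed from the surviving fragments as
\[
\sigma[f\,|\,g] \;=\; \iint \Big( f \log\frac{f}{g} - f + g \Big)\, dx\, dv \;+\; \frac{1}{2} \int \big| \nabla U[\rho_f - \rho_g] \big|^{2}\, dx ,
\]
where \(\rho_f = \int f\, dv\) and \(U[\rho]\) denotes the self-consistent potential attached to the density \(\rho\); this appears to be the Boltzmann-type relative entropy supplemented by the electrostatic energy of the density difference, matching the Lyapunov-functional role described for whole-space problems earlier in the section.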
We apply this stability result to a sequence f n (x; v; t) = f(x; v; t + t n ). Then, we have a uniform bound in L 1 for f n . Besides, a Cauchy Schwartz inequality leads to R +1 0 ( R jvfn+ rvfnjdvdx) 2 R fndvdx ds R +1 0 R 1 fn jvf n + r v f n j 2 dvdx ds = R +1 tn R 1 f jvf + r v fj 2 dvdx ds ! 0 as n ! +1 which proves that lim n!+1 kvf n ( ; s) + r v ( ; s)f n k 2 L 1 ( ) = 0. This shows that the weak limit f 1 of a converging subsequence is a Maxwellian: f 1 = (x; t)M (v). Exactly as in 21], f 1 is a renormalized solution of the Vlasov-Poisson-Fokker-Planck system whose unique Maxwellian solution is given by (4) with (u) = C e ?u= . Thus we obtain the Example 3: semiconductor BGK model In this paragraph, we are going to give detailed estimates which allow us to prove directly that the large time limit is in the kernel of the collision operator, without proving the convergence of Q (f n ) to Q (f 1 ) in D 0 . We deal either with the standard BGK model ( = 0) or with the BGK model for fermions ( > 0): where is such that there exists two positive constants 0 and 1 for which which implies that Q (f 1 ) = 0 : We have then proved that (f 1 ; 1 ) is a solution of the stationary Vlasov-Poisson system and f 1 is a Fermi-Dirac function. By Lemma 2.3, it is stationary. We deduce from Theorem 2.6 that f 1 is equal to M and there is therefore no need to extract a subsequence. u t Example 5: Boltzmann or Fokker-Planck-Landau collision operators The Boltzmannn equation has been extensively studied during the last 15 years, so we shall only brie y sketch how the case with a Poisson coupling and injection boundary conditions can be dealt with. The main di erence with standard approaches is that the total mass is not xed. Theorem 3.9 Let 0 2 L 1 ( ) be such that r 0 2 W 1;1 ( ) and consider a solution f of the Vlasov-Poisson-Boltzmannn system with a Boltzmann or a Fokker-Planck-Landau collision term such that, with the notations of Section 2, f( ; ; t)jM] f 0 jM] : Then t 7 ! M(t) := kf( ; ; t)k L 1 ( ) is uniformly bounded in L 1 (IR + ) and (f n ) n2IN de ned by (10) weakly converges in L 1 (IR + loc ) to (M; U M]). Note here that the condition on 0 is certainly not optimal: apart from regularity conditions which have to do with the de nition of the characteristics (see 42,72]), the right condition should be given in terms of the existence of a lower bound for the functional which de nes U M] or equivalently in terms of the existence of a lower bound for R (f) dxdv (see 46] for a discussion of the notion of con nement and 46,47] for the equivalence of these conditions). Proof. Consider rst the case > 0 (statistics of fermions) and assume that 0 0. Since 0 f 1 a.e., for almost all t > 0, the function f( ; ; t) is bounded in L 1 ( ) as soon as R f(x; v; t) jvj 2 dxdv is bounded uniformly with respect to t. Let us prove that f( ; ; t) is also relatively compact. This also proves that R jvj 2 f(x; v; t) dxdv is uniformly bounded and gives an upper bound for M(t): fjM] M(t) log (M(t)) ? C M(t) ; for some C 2 IR. The weak compactness in L 1 then follows by Dunford-Pettis' criterion (see 50]). u t Up to these preliminary estimates, the method is more or less standard and we will only refer to the existing literature. 
In case = 0, for the Boltzmann collision operator, one has to use the notion of In this Section, we consider the case without self-consistent potential (no Poisson coupling), when the collision operator is the BGK approximation of the Boltzmann collision operator for fermions given by The function is a bounded positive function. We claim that this implies that h = 0. Indeed, let (x 0 ; v 0 ) 2 with jv 0 j large enough in such a way that any characteristics with initial conditions in (x; v) 2 B r (x 0 ) B r (v 0 ) (with r > 0) is open. Let be a nonnegative smooth function which is strictly positive on B r (x 0 ) B r (v 0 ). The solution of v r x ? r x 0 r v ? = ; = 0 on ? ; (17) is nonnegative and does not vanish on B r (x 0 ) B r (v 0 ). Let us give a short proof of this fact. Assume that the characteristics which is given by @X @t = V ; @V @t = ?r x 0 (X) ; exists on a maximal interval (T in (x; v); T e (x; v)) 3 0. If such a characteristics is open, this means that either T in (x; v) > ?1 and X(x; v; T in (x; v)) 2 @!, or T e (x; v) < +1 and X(x; v; T e (x; v)) 2 @!. Note that since is a steady state, the problem is autonomous, so we dont need to introduce a speci c initial time (with the notations of (12), X(t; x; v; s) is replaced by X(t ? s; x; v; 0) = X(x; v; t ? s)). Let (x; v; t) = R t 0 (X(s); V (s)) ds. If The above method is also usefull for the study of large time asymptotics. It gives the convergence to the unique stationary solution and shows the connection with relative entropy formulations which have been extensively used throughout the rest of this paper. We denote by h the function f ? f s . Z Q 0 (h)H 0 h m dxdv = 0 : According to the same strategy as in Section 3, we de ne h n (x; v; t) = h(t + t n ; x; v) where t n is an arbitrary diverging sequence and deduce that up to the extraction of a subsequence, the sequence h n -weakly converges in L 1 ((0; T) ) towards a function h 1 such that Q 0 (h 1 ) = 0, @ t h 1 +v r x h 1 ? r x 0 r v h 1 = 0 and h 1 = 0 on @ . The rst identity implies that h 1 is a Maxwellian, which is even with respect to v and is therefore stationary in view of Lemma 2.3. Theorem 2.6 then implies that h 1 = 0. The nonlinear case. As in the proof of Theorem 2.6, a simple computation gives d dt Z jhj dxdv + Z ? + jhj d ? Z Q (f) ? Q (f s )] sgn(h) dxdv = 0 : The -weak limit h + 1 in L 1 ((0; T) ) of a subsequence of h + n = jh( ; ; + t n )j de ned as above satis es @ t h + 1 + v r x h + 1 ? r x 0 r v h + 1 + 1 h + 1 = Z IR d 1 j(h + 1 ) 0 j dv 0 where f 1 is, up to the extraction of a further subsequence, the limit of f( ; ; + t n ). The convergence in the collision term holds for the same reason as in the proof of Theorem 2.6. On the other hand h + 1 (x; v; t) = 0 for all t > 0, (x; v) 2 @ . As in the proof of Theorem 2.6 again, this implies that h + 1 = 0. But since jf 1 ? f s j h + 1 , we deduce that f 1 = f s . u t Other boundary conditions This section is devoted to further considerations on relative entropies corresponding to various types of boundary conditions. The case of di use re ection boundary conditions is studied with some details: after a de nition of such boundary conditions, which are such that the total mass is preserved, stationary solutions are found using a variational approach. These solutions are then used to de ne a relative entropy, which describes the irreversibility, and gives the uniqueness of the stationary solution when there is no closed characteristics, exactly like in the case of injection boundary conditions. 
Conditions preserving the energy and the mass are then introduced and further remarks are done concerning other types of possible boundary conditions. Di use re ection boundary conditions (DRBC) Here we consider as in Section 1 the full Vlasov-Poisson-Boltzmann system 8 > > > > > < > > > > > : and di use re ection boundary conditions for f. These conditions are de ned as follows. For any (x; t) 2 @! IR + , let f(x; v; t) v (x) dv : (19) Assuming that is de ned on IR, satis es (P) and is such that lim s!?1 (s) = +1, there exists a unique function : @! IR + ! IR for which + (x; t) = Z P ? (x) 1 2 jvj 2 + 0 (x) ? (x; t) jv (x)j dv 8 (x; t) 2 @! IR + : (20) With the notation m f (x; v; t) := 1 2 jvj 2 + 0 (x) ? (x; t) 8 (x; t) 2 @! IR d IR + ; we shall say that f is subject to di use re ection boundary conditions (DRBC) if and only if f(x; v; t) = m f (x; v; t) ; 8 t 2 IR + ; 8 (x; v) 2 ? ? : (21) Note that under this condition, the total mass is preserved: dv is a signed measure such that d~ = d on ? , with the notations of Section 1. From now on, we denote by M the L 1 -norm of f: M = kf( ; ; t)k L 1 ( ) 8 t 2 IR + : Under DRBC conditions, we shall now prove the existence of a stationary solution corresponding to any given mass by the mean of a variational approach. This solution then allows us to de ne a relative entropy, which we shall use to prove the uniqueness of the stationary solution. This relative entropy also describes the irreversibility and the large time asymptotics as in the case of injection boundary conditions. See the concluding remark of this section for further comments on the denomination: relative. (20) of (x; t). According to (19) (here we omit the dependence of in t since f does not depend on t either). In the following, we shall do as if and 0 were of class C 2 . For lower regularity, one has to take advantage of the uniqueness of the characteristics according to 42]. Consider two points x 1 and x 2 in @! such that the segment (x 1 ; x 2 ) is a subset of !: there exists a characteristics connecting x 1 to x 2 (Bolza problem in IR d : see 95] and Appendix B), i.e. a solution of dX dt = V ; dV dt = ?r( (X) + 0 (X)) ; X(0) = x 1 ; V (0) = v 1 such that for some t > 0, x(t) = x 2 , for some well chosen v 1 , with jv 1 j large enough. Since f is constant along the characteristics, f(x 2 ; v 2 ) = f(x 1 ; v 1 ). Because f only depends on the energy on the boundary (up to ) and since is strictly decreasing, we have: 1 2 jv 2 j 2 + 0 (x 2 ) ? (x 2 ) = 1 2 jv 1 j 2 + 0 (x 1 ) ? (x 1 ) : A variational formulation in But on the other hand, does not depend on t and the energy also is conserved along the characteristics: 1 2 jv 2 j 2 + 0 (x 2 ) = 1 2 jv 1 j 2 + 0 (x 1 ) : This is possible if and only if (x 2 ) = (x 1 ). See Lemma B.1 in Appendix B for more details on how to nd v. It remains to check that any two points of a C 1 connected domain in IR d can be connected by a nite number of segments in !, whose extremities are in @!. This is the purpose of Lemma B.2 in Appendix B. Thus (x) de ned by (19) does not depend on x and we can conclude by applying By an argument similar to the one of Corollary 5.3, it is then easy to prove that any stationary solution is necessarily of the form (25). Remarks on the boundary conditions To the boundary conditions for f correspond various well known situations of thermodynamics (see 8]). In the case of injection (resp. di use re ection) boundary conditions, the temperature and the chemical potential (resp. 
the temperature and the mass) are xed, so that the energy and the mass (resp. the energy and the chemical potential) of the system uctuate: this is the grand canonical (resp. canonical) framework and the relative entropy can be identi ed with a grand potential (resp. free energy) function. The stationary state is uniquely de ned in both cases. When the energy and the mass are xed (microcanonical framework), the relative entropy can be identi ed with an entropy function (in the usual sense of thermodynamics, up to a sign convention), but a di culty arises from the lack of uniqueness results of stationary states (see 38,17]). Other cases formally enter in our relative entropy formulation: for instance, if the volume is not xed, one could prescribe the pressure by requiring the equality of the incoming and outgoing uxes corresponding to the rst moment in the velocity. Remark 5.5 Why we use the denomination relative for the entropy arises from the following reason. In the three examples of boundary conditions studied in this paper (injection boundary conditions, diffuse re ection boundary conditions with xed temperature and di use re ection boundary conditions preserving mass and energy), the function is entirely de ned by , but we further impose that the minimum of is reached by the unique stationary solution corresponding to the boundary conditions. This in turn determines the Lagrange multipliers associated to the constraints. In that sense, the entropy is therefore relative to this stationary solution. The relative entropy functional can be interpreted { from a probabilistic point of view { as a conditional expectation, or simply as a measure of the distance to the stationary state (at least when it is unique). This notion of distance is also the one which appears when measuring the stability by the Casimir-energy method or in case of di usion equations with compatible nonlinearities (see 27,18]), as we shall see in Appendices C and D. A End of the proof of Theorem 2.5 Let : 0; 1] IR ! IR, (x; t) 7 ! (x; t) be an analytic function in x with C 1 in time coe cients. x (0; t 0 ), (iii) @ t @ n+1 x (0; t 0 ) = 0. Proof. We rst remark that (iii) is a direct consequence of (ii) and of the fact that e + 2n+1 (1) = e ? 2n+1 (1). In order to prove (i) and (ii), we insert the expansion of e " in (27) and identify the terms of the same power in ". From the zeroth order term, we obtain d dx e 0 = 2 @ x (0; t 0 ). The formulae for e 1 (i.e. (ii) with n = 0) follow from the order 1 term. For the higher order terms, we proceed by induction. Namely, let n 2 IN be given and assume that (i), (ii) and (iii) hold up to the order n. Let us prove that they hold for n + 1. Terms of order " 2n+2 in the right hand side of (27) are obtained by taking n + 1 derivatives of @ x with respect to x, n derivatives with respect to x and two with respect to t, n ? 1 derivatives with respect to x and 4 with respect to t, , 2n + 2 with respect to t. Noticing that (iii) holds up to n and for all times t 0 2 IR, we deduce that the only non vanishing term in this expansion is the rst one. This leads to (i) for the index n + 1. In order to prove (ii), we proceed analogously. The only non vanishing term of order 2n+3 is the one corresponding to n+1 derivatives with respect to x and one derivative with respect to t. All the other terms involve t derivatives of @ k x (0; t) with k n, and are therefore vanishing in view of (iii). This leads to (ii) (with n replaced by n + 1). 
u t A straightforward consequence is the following Corollary that we use in the proof of Theorem 2.5. B Two technical lemmata for the Bolza problem The Bolza problem is a standard question of mechanics. For a given potential and for any given pair (x 1 ; x 2 ) 2 ! 2 of points, does there exist a trajectory which connects x 1 to x 2 , for an appropriate initial velocity v 1 ? In this Appendix, we are going to prove two lemmata which are of interest for the proof of Corollary 5.3. We consider rst the Bolza problem for two points x 1 , x 2 2 @! such that the segment (x 1 ; x 2 ) is contained in !, and then prove that two arbitrary points of the boundary of ! can be connected by a nite number of such segments, under the assumption that ! is a connected and bounded domain. For simplicity, we assume that is of class C 2 , so that we deal with classical characteristics, but an extension based on the uniqueness of weaker notions of characteristics (see 42,72]) is easy to establish. Let x 1 ; x 2 2 @!. We shall say that (x 1 ; x 2 ) satisfy Property (S) if and only if (x i ) (x 2 ? x 1 ) 6 = 0, i = 1; 2, and (x 1 ; x 2 ) = ftx 1 + (1 ? t)x 2 : t 2 (0; 1)g !. Let u 0 = x 2 ?x 1 jx 2 ?x 1 j . We may notice that if (S) is satis ed, there exists an > 0 such that 8 u 2 S d?1 ; ju ? u 0 j < =) fx 1 + tu : t > 0g \ ! is a neighborhood of (x 1 ; x 2 ) in ! : Lemma B.1 Let be a bounded C 2 potential de ned on ! and consider x 1 ; x 2 2 @! such that (x 1 ; x 2 ) has Property (S). Then there exists an A > 0 such that 8 a > A 9 v 1 2 a jS d?1 j IR d ; for which the characteristics de ned in ! by d 2 X dt 2 = ?r (X) ; X(0) = x 1 ; dX dt (0) = v 1 ends at x 2 . Proof. For " > 0 and u 2 S d?1 , we denote by X ";u (t) the characteristics de ned by d 2 X ";u dt 2 = ?r (X ";u ) ; X ";u (0) = x 1 ; dX ";u dt (0) = 1 " u in IR d (we extend by a bounded C 2 function to IR d ). Consider the time rescaling: "s = t, and the rescaled characteristics Y ("; u; s) = X ";u ( s " ). @ 2 Y @s 2 = ?" 2 r (Y ) ; Y ("; u; 0) = x 1 ; @Y @s ("; u; 0) = u : It is straightforward to prove that @Y @s ("; u; s) ? u " q 2 k k L 1 (IR d ) : De ne S("; u) = inffs > 0 : Y ("; u; s) 2 @!g and Z("; u) = Y ("; u; S("; u)). By the above estimate, it is clear that, Z(" = 0; u 0 ) = x 2 . The function Z is of class C 2 on a neighborhood of (0; u 0 ) 2 IR + S d?1 , and it is easy to check that r u Z(0; u 0 ) is invertible. The conclusion holds by the implicit functions theorem. u t Lemma B.2 Let x; y 2 @!. Assume that ! is a C 1 bounded and connected domain in IR d and is of class C 2 on !. Then there exists a nite sequence of points x 1 = x, x 2 ,... x i , x i+1 ,... x n?1 , x n = y in @! such that (x i ; x i+1 ) has Property (S) for i = 1; 2; :::n ? 1. Proof. We shall rst prove an in nitesimal version of Lemma B.2. Let x; y 2 @! and denote by (x) and (y) the unit outgoing normals at x and y respectively. Because of the regularity of @!, for " > 0 small enough, if jx ? yj < ", there exists an > 0 such that fz 2 B(x; ") n fxg : z ? x jz ? xj (x) < ? g ! and fz 2 B(y; ") n fyg : z ? y jz ? yj (y) < ? g ! : Next, consider U = fu 2 S d?1 : u (x) + < 0 and u (y) + < 0g for " > 0 small enough so that U is not empty (for jx ? yj < " small enough, j (x) ? (y)j is as small as we want). Moreover, in the limit " ! 0, we can take arbitrarily small. For any u 2 U, we may therefore consider Z = fz(u) : u 2 Ug ; where z(u) = x+t(x; u)u and t(x; u) = infft > 0 : x+tu 2 ! c or (y; x+tu)\! c 6 = ;g. 
By Sard's theorem, there exists at most a countable number of points u in U for which either (z(u) ? x) (z(u)) = 0 or (z(u) ? x) (z(u)) = 0, which ends the proof: there exists a u 2 U such that both (x; z(u)) ! and (z(u); y) ! have Property S. By compactness of @!, if x and y are in the same connected component of @!, it is then easy to nd a nite sequence of points x 1 = x, x 2 ,...x i , x i+1 ,... x n?1 , x n = y in @! with jx i+1 ? x i j < " for " > 0 small enough such that Lemma B.2 holds. If x and y are in two di erent connected components of @!, the extension is straightforward and left to the reader.u t
2014-10-01T00:00:00.000Z
2003-08-01T00:00:00.000
{ "year": 2003, "sha1": "e9694984c821480f56016ce67d2f0f7dab83a1ba", "oa_license": "CCBYSA", "oa_url": "https://basepub.dauphine.psl.eu/bitstream/123456789/6274/4/relative_entropies.PDF", "oa_status": "GREEN", "pdf_src": "CiteSeerX", "pdf_hash": "1bcca0a55c179e91482a113fb0d1e07218d08465", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
246581201
pes2o/s2orc
v3-fos-license
New Cryptosystem Using Two Improved Vigenere Laps Separated by a Genetic Operator This document traces the development of a new cryptosystem using two rounds based on a deep improvement of the classical Vigenere technique, separated by a genetic operator. The new technique employs several dynamic substitution matrices attached to chaotic replacement functions, whose construction will be detailed. We first modify the seed pixels by an initial value calculated from the original image, which is infected through the chaotic map used to overcome the uniform-image problem, followed by the injection of the improved Vigenere technique. The output vector is then subdivided into sub-blocks for the later application of deeply improved genetic mutations, to better adapt to color and medical image encryption. The second round increases the complexity of any attack and improves the installed system. Simulations performed on a large number of images of different sizes and formats ensure that our approach is not exposed to known attacks. Article Highlights This new algorithm offers two rounds ensured by a deep improvement of Vigenere. We mention the most important changes made: • First Vigenere round • Genetic mutation applied • Second Vigenere round. I. Introduction The rapid development of chaos theory in mathematics provides researchers with opportunities to further improve some classical encryption systems. In the face of this great security concern, many techniques for color image encryption have flooded the digital world, mostly exploiting number theory and chaos. Others attempt to update these policies by improving some classical techniques, such as Hill, Caesar and Vigenere. 1) Vigenere's classical technique This technique is based on a static matrix defined by the following algorithm. Despite the knowledge of the substitution matrix, this method was able to withstand attack for more than three centuries. Let the plain text, the cipher text, the encryption key, the Vigenere matrix and the length of the clear text be given. Even though Vigenere's matrix was known, the encryption was able to withstand several centuries, and Babbage's cryptanalysis is not efficient without knowing the size of the encryption key. Several attempts to improve Vigenere's technique have invaded the digital world. In this work, the new structure of the substitution matrix and its attached replacement function will be described in detail. 2) Our contribution This work puts into practice a deeply modified genetic operator in a color image encryption system. This operator is surrounded by two improved Vigenere rounds. II. THE PROPOSED METHOD Based on chaos, this new technique acts at the pixel level through two Vigenere rounds provided by dynamic substitution matrices and replacement functions. These two rounds are separated by a deeply improved genetic operator for use in color image encryption. The following steps describe this algorithm, from the development of the chaotic sequences to the application of a chaotic permutation. At the end of this work, the follow-up operations of each encryption round will be described in detail to show the development of the system, and a detailed analysis of the performance of our methodology will be discussed and compared with other reference systems. Step 1: Chaotic Sequences Development All the encryption parameters required to successfully run our system come from the two most commonly used chaotic maps in the field of cryptography.
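The two maps referred to here are the logistic map and the Henon map, described in the next subsections; since their defining equations did not survive extraction, the sketch below illustrates, in Python, the standard forms of both iterations and a simple byte quantization of the kind typically used to build key-stream vectors. The parameter values, seeds and quantization rule are illustrative assumptions, not the authors' settings.

# Illustrative sketch only (not the authors' code): standard logistic and
# Henon iterations, quantized to bytes as is typical for image key streams.
# Parameter values, seeds and the quantization rule are assumptions.

def logistic_stream(n, x0=0.7, mu=3.99):
    """Logistic map x_{k+1} = mu * x_k * (1 - x_k); returns n values in (0, 1)."""
    values, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        values.append(x)
    return values

def henon_stream(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    """Henon map x_{k+1} = 1 - a*x_k**2 + y_k, y_{k+1} = b*x_k; returns the x track."""
    values, x, y = [], x0, y0
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        values.append(x)
    return values

def to_bytes(values):
    """Quantize chaotic reals onto 0..255 so they can act on 8-bit pixels."""
    return [int(abs(v) * 10**6) % 256 for v in values]

# Example: two key-stream vectors of length 8.
print(to_bytes(logistic_stream(8)))
print(to_bytes(henon_stream(8)))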
This choice is due to the simplicity of its development and its high sensitivity to the initial parameters. 1) The Logistics Map The logistic map is a recurrent sequence described by a simple polynomial of second degree defined by the following equation 2) HENON'S Map Henon's chaotic two-dimensional map was first discovered in 1978. It is described by equation below We can convert the two-dimensional map expression to a onedimensional map that is easy to implement in the encryption system. This formula is described by next equation 3) Chaotic used Vector design Our work requires the construction of three chaotic vectors , and , with a coefficient of , and the binary vector will be regarded as the control vector. This construct is seen by the following algorithm The binary vector is considered as a control vector the complexity of our algorithm. Axe 2: plain Image preparation After the three color channels extraction and their conversion into size vectors each, a concatenation is established to generate a vector of size . This operation is described by the following algorithm This step slightly reduces the high correlation between the pixels. 1) Initialization Value Design First, the initialization value must be recalculated to change the value of the starting pixel. Ultimately, the value is provided by the next algorithm The presence of the vector is to overcome the problem of the uniform image. Step3: Vigenere upgrade In the first stage, Vigenere's technology was greatly modified by integrating the new substitution matrix provided by the new powerful replacement function. 1) Vigenere's Advanced Methods This classical technique requires the generation of a substitution matrix and a replacement function a) Classic Vigenere function expression These two matrices will be used together in the encryption process and will be completely under vector control We remember to pass Vigenere's classic replacement function through the following formula key duplicated to the size of the text to be encrypted. b) New Vigenere function expression The following equation illustrates the effective expression of the image of the pixel through the new Hill technology. c) First-round spread function Expression The first round will be equipped with a powerful diffusion function to connect encrypted pixels with subsequent transparent pixels to increase the impact of the avalanche effect and protect the system from any differential attacks. The expression of this new diffusion function is given by the formula below 2) The first-round analysis This first round is defined by the following algorithm, The figure below shows the first round For a better follow-up of our algorithm, several reference images were tested by this first round, we quote At the end of the first round, the output vector (Y) will be treated as a clear image and subdivided into three sub-blocks of equal size for future submission to genetic mutation. 3) Genetic mutation Gene mutation is the exchange of sub-blocks between three blocks of the same size. This exchange is provided by two chaos constants and The first indicates the starting position of the sub-block to be swapped, and the second indicates the size of the sub-block. 
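The figure and formula announced here are missing from the extracted text; the following Python sketch gives one plausible reading of the mutation step, in which sub-blocks at a chaotically chosen position and of a chaotically chosen length are exchanged pairwise among the three equal-sized blocks. The way the start position and size are derived from the two chaotic constants is assumed for illustration only; the authors' exact definitions follow in the text.

# Illustrative sketch only: exchange sub-blocks between three equal blocks.
# The derivation of (start, size) from the two chaotic constants and the
# exact exchange pattern are assumptions; the paper's formula is not shown here.

def mutate(vector, c1, c2):
    """vector: flat pixel list whose length is divisible by 3.
    c1, c2: chaotic values in (0, 1) giving sub-block start and size."""
    n = len(vector) // 3
    b1, b2, b3 = vector[:n], vector[n:2 * n], vector[2 * n:]
    start = int(c1 * n) % n                 # assumed: position from the first constant
    size = max(2, int(c2 * (n - start)))    # assumed: length from the second constant
    mid, end = start + size // 2, start + size
    # Pairwise swaps on disjoint ranges, so the operation is its own inverse.
    b1[start:mid], b2[start:mid] = b2[start:mid], b1[start:mid]
    b2[mid:end], b3[mid:end] = b3[mid:end], b2[mid:end]
    return b1 + b2 + b3

Because the swaps act pairwise on disjoint index ranges, applying mutate twice with the same constants restores the original vector, which is consistent with the later remark that the mutation is an involutive operation reused during decryption.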
In our method, these two constants are defined as The mutation process between three blocks of size each is illustrated by the following figure This mutation process follows the following formula 1) Second round analysis For a better follow-up of our algorithm, several reference images were tested by this first round, we quote The generated vector will be submitted to a second round of Vigenere provided by two other substitution matrices. 1) Second Vigenere round At the end of the first round, the new initialization value will be calculated according to the following algorithm In the second round, by simply replacing the position of the replacement matrix, the output vector will be treated as a new image to be encrypted by the same method as the first round. a) Second round analysis The second round can also be ensured by using a different same matrix in the first round. The same mold will be used in the second round, but in a different way a) Second-round spread function Expression The second round will be equipped with the diffusion ensured by the replacement matrix generated. The expression of this function is defined by the following notation a) The Second-round analysis This second round is defined by the following algorithm The figure below shows the first round 2) Third round analysis For a better follow-up of our algorithm, several reference images were tested by this first round, we quote The output vector will be subjected to permutation (PH) to possibly suppress any correlation. Step 5: Decryption of encrypted images In the literature, the classic Vigenere method uses the same matrix in both processes. Our contribution in this work is that the matrix used in encryption is different from the matrix used in decryption. Therefore, the calculation of the decryption matrix is necessary. 2) Decryption matrix structure Each row of the encrypted S-box is a permutation in , so the decryption matrix will consist of reverse permutations. For this reason, two decrypted generations are given by the following algorithm 1) Decryption function By following the same logic of Vigenere's traditional technique, we obtain In the first round In the second round 2) Decrypt the encrypted image Our algorithm is a symmetric encryption system, so the same key will be used in the decryption process. In addition, installing the broadcast function requires us to start the decryption process from the last pixel to the first pixel, and recalculate the initialization value to get the exact value of the first pixel. In addition, decryption uses the countdown function of encryption. a) Reverse permutation The inverse permutation of is given by the following algorithm After vectorization of the image encrypted in vector an intervention of the permutation to recover the vector This operation is determined by the following algorithm b) The reciprocal of the Second lap function A recalculation of the initialization value will make it possible to retrieve the exact value of pixel c) The reciprocal of the First lap function A recalculation of the initialization value will make it possible to retrieve the exact value of pixel d) The reverse mutation In general, mutation is an involutive operation, therefore we have III. deep simulations We randomly selected 150 images from a chaotic vector that contained a database of thousands of color images in different sizes and formats. All these images were tested by our system. 
All experiments were performed in Matlab running under Windows 7, on a basic i7 personal computer, and we found the following results. 1) Key-space analysis The chaotic sequences used in our method ensure strong sensitivity to initial conditions and can protect the system from any brute-force attack. The secret key of our system, if we use single-precision real numbers, will have a total size that greatly exceeds what is required to avoid any brute-force attack. 2) Secret key sensitivity analysis Our encryption key has a high sensitivity, which means that a small degradation of a single parameter will automatically cause a large difference from the original image. The image below illustrates this confirmation. Figure 6: Encryption key sensitivity. This ensures that, in the absence of the real encryption key, the original image cannot be restored. 3) Statistical attack security a) Entropy analysis The entropy of an image is given by H = −Σ p(i) log2 p(i), where p(i) is the probability of occurrence of level i in the original image. The entropy values of the images tested by our method are represented graphically in the following figure. These values are largely sufficient to affirm that our cryptosystem is protected from known differential attacks. The study of the UACI revealed the following diagram. Figure 13: UACI of 150 images of varying sizes. All detected values are inside the confidence interval. These values are largely sufficient to affirm that our cryptosystem is protected from known differential attacks. d) Avalanche effect Our algorithm uses a strong link between encrypted pixels and subsequent clear pixels in the diffusion strategy. This leads to a gradual change in the value, which becomes more and more important as the data spreads through the structure of the algorithm. The avalanche effect is the number of bits that change if a single bit of the original image is changed; its mathematical expression is the proportion of output bits changed by flipping a single input bit. Our encryption keys are large, which ensures that the new system is protected from brute-force attacks. At the same time, the randomness of the operators described in the system makes it difficult to unlock any encrypted image, which increases the difficulty of statistical attacks. In addition, the high sensitivity to the initial parameters of our three chaotic maps and the diffusion installed in each round confirm the robustness of our encryption system. V. Conclusion Due to its high sensitivity to initial conditions, our algorithm can resist brute-force attacks. This new technique is based on two deeply improved Vigenere rounds, using dynamic substitution matrices attached to highly developed substitution functions. The two rounds are separated by genetic mutations suitable for color image encryption. The two calculated initialization values increase the complexity of the system. The monitoring of encryption over the rounds showed robustness and improved results from round to round. The global analysis of the system ensures that our algorithm can cope with any known attack. Conflict of Interest The authors declare that there is no conflict of interest, that no private or public organization or laboratory finances this research, and that the research involved no experiments on animals or human beings. Informed consent We all have the approval to write this article giving a new method of encryption of color images. Ethical approval We have respected the ethics of the journal.
2022-02-06T16:45:13.799Z
2022-02-03T00:00:00.000
{ "year": 2022, "sha1": "954fadc19d2aa77ade40915e533e17fc756948df", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-1035932/latest.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b0a9736fa0882609f8f7ceda939467d6126d47bf", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
237333318
pes2o/s2orc
v3-fos-license
Improving Learning Outcome for GSL (German as a Second Language) Students in a Blended Learning Cumulative Assessment Material Science Course First year students especially with migration background and language deficiencies rate material science in mechanical engineering as one of the fundamental courses with high work load and necessity of language skills due to the descriptive nature of the course. Therefore a blended learning course structure using based on inverted classroom teaching scenarios was established. Heart of the self-study period are visualizing peer-to-peer lecture films supported by micro-lectures along with various online teaching materials. Although students with migration background generally scored lower in tests due to the lack of language skills improved learning outcomes are demonstrated in high quality class discussions and in overall understanding. This paper introduces the learning structure and graded activities, evaluates the course and compares results of native German-speaking students to those of students with migration background. Introduction Diversity among engineering students is growing more and more acknowledgeable in higher educationespecially in first year classes where in applied universities students from many backgrounds form new heterogeneous classes. Differences in education are as common as various social aspects hindering full time studying. Students may enroll directly from high school, they may have had job training, or went for dual careers. Many students already founded families taking care of little children or supporting elderly family members. High school students generally have had good education in math, physics and chemistry whereas students with job histories mainly have advantages in applied subjects such as technical mechanics, design or material science. Because as a future maker of things students should investigate and learn with a strong practical motive and learn to match material properties in a design with the underlying physics, chemistry and material science background knowledge. Therefore, at HTW-Berlin, Germany Material Science for mechanical, automotive and economical engineering students is taught based on the "design-led" teaching approach in a blended learning course setting including inverted classroom teaching scenarios [1]- [6]. Although new evidence assigns success only for MINT courses neglecting progress for economic related teaching the "inverted classroom" method [2]- [9] enables students to discuss early and communicate in a scientific related course such as material science in equal measure. Students study the science on their own without time limit and then take time to raise questions and discuss details, solve hands-on problems, perform group work and master difficult problems in class. Reporting on student learning challenges educators especially in highly divertive classes. Only if the grading provides quality information about student learning, is carefully planned and excellently communicated it is successful [10]. Objective grading also requires an overriding concern for the well-being of students [10]. Generally criteria-based assessment approaches are known to be educationally effective in higher education. However, shifting the primary focus to standards and making criteria secondary could lead to substantial progress [11]. 
Today standards are widely and controversially discussed, but there is a lack of common understanding in practice especially with regards to learning styles, learning basis, teaching material and learning outcome. The quality of students` proficiency towards achieving well educationally benefits from standards-based assessment in contrary to the traditional score-based [12]. Standards-based assessments provide clear, meaningful, and personalized feedback for students related to learning objectives and help to identify students` weaknesses in the course [13] if the course objectives are well defined beforehand [14]. Time limited exams with strong focus on verbal expression not mathematically precise description in internationally acknowledged formulas penalize students with language difficulties. All tasks, such as homework, presentations, answering questions and group discussions are of disadvantage to these students and to those with outside of university duties. Therefore the blended learning environment of the Material Science course introduced earlier [2]- [4] offers a promising alternative. Course Setting Averagely 30-40% of the students in a first year mechanical or automotive engineering class at the applied university HTW Berlin enroll directly from high school, 10-20% after a certain time, 30-60% were educated on the job and then got work experience between 2 to 6 years. 5-10% achieved the German "Fachabitur" after grade 11 or 12 without a high school diploma. These students may enroll at applied universities after a minimum of 3 years of job training and 4 years of continuous employment on the job. Because of their personal history many students are in their mid-twenties and already founded families taking care of little children. A number of students support elderly family members or even take care of them. Because math, physics and chemistry are known and still remembered well high school students generally perform well in these science based courses. Students with job training are doing well in applied subjects such as technical mechanics, design or material science. These students relate science quickly to engineering problems and have a practical conception. They do well in groups and quickly organize themselves. Over time these student groups intermix with each other supporting with the missing skill to achieve well in the material science class. However, global changes and the 2015 refugee politics of the German government effected German applied high schools and the percentage of students with migration backgrounds changed tremendously. A large percentage of students with migration background enroll in mechanical engineering. These are mostly highly motivated students with engineering skills but severe lack of German language and also have to get used to social conventions as well as different teaching and learning approaches. In summer semester 2018 56 % of the first year material science students were non-native German speakers who gained their university entrance qualification outside of Europe. Material Science at HTW Berlin offers a balanced mixture of standard and score-based grading and has been introduced in detail earlier [2]- [5]. The blended-learning course setting is based on "inverted classroom" scenarios [2]. To meet the needs of the highly diverse first year mechanical engineering class main learning resources are scientific peer-to-peer lecture films [4], [6], [9], [15]. 
Analogous to the learning outcome and the face-to-face teaching, micro-module lectures strongly clustered by themes are provided via the content management system Moodle. A variety of teaching materials such as worksheets and worked solutions, mindmaps, glossary entries, memory sheets, online tests and web-based trainings (WBT) support the learning procedure [2]-[5], [16]. Learning materials were partly contributed by students during material science projects, meeting students' learning needs. This enables all students coming from different scientific, family, cultural and language backgrounds to study during online periods on a level playing field. In class there is time for explanations of difficult questions, hands-on exercises, discussions and group work. Peer instruction [17] is used to assess the learning progress prior to each class. The peer-to-peer approach [18] of participating in the production of teaching materials, such as micro lectures and lecture videos, and peer reviewing [18], [19] allows for high teaching standards [2], [4] even in a highly diverse class. Advantages of the course are the renunciation of a single final exam and the possibility to cumulatively gain grade points throughout the semester focusing on different skills (standards-based grading). In an engineering context the scientific background is the measure of the course and should overcome other problems. The course structure provides an extra degree of freedom to study and achieve acceptable grades even without perfect language skills [5] or a study-only private environment. The cumulative portfolio grading directly connects the course assessments to the course learning objectives and is not only a series of separate course assignments [15]. Parts of this study have been published before [2], but the latest data are shown here. Face-to-face time of the first year material science course is 4 hours/week. In alignment with the learning objective of the course, the assessment focuses on different skills and the learning progress rather than a one-time result. The decentralized course assessment cumulatively added activities over the 12 to 16 weeks of the semester with regard to the learning outcome (Fig. 1). Moodle provides an excellent basis to establish graded activities that follow each lecture or theme [2], [4], [5]. All semester activities (quizzes, tests, glossary entries, homework, group assignments, forum entries, graded lectures) were weighted appropriately and implemented as compulsory, summing to 60 possible points in total. Semester activities are worth 50 points; the final Moodle exam, based on tests during the semester, counts for 10 points (60 in sum). Progressing points were assigned 3 weeks before the final exam (60 points) or the final Moodle exam (10 points) to prevent students from stopping to study after they reached the necessary 30 points to pass the course [5]. Before final grading, students needed to sign the cumulative assessment and a non-disclosure agreement for the teaching materials throughout the course. Fig. 2. Results of the material science blended learning portfolio assessment SS2016 to 2018 [5], accounted for German as a second language; grey: WS15/16 (final exam), green: cumulative activities. Course Results Comparing data from 2015 until now, the average course score is between 36 (C = lowest) in SS2017 and 54 (A-) in WS2016/17.
The scores in WS2016/17, SS2017 and SS2018 are low but comparable (Fig. 2). The grade point average does not differ much from the results of former classes with traditional assessment. However, due to the special situation in Germany since 2015, refugees mainly from the Arabian peninsula, Syria and Lebanon, and partly from the Maghreb, Tunisia, Morocco and Egypt, are involved in asylum affairs. In summer semester 2018 these students were allowed to enter without the otherwise necessary DSL-2 language certificate. These students generally show a high engineering achievement potential, but due to the lack of language skills and culturally dependent learning abilities their learning progress and success are constrained severely (Fig. 2). Evaluation of Course Design in a GSL Environment Students in general prefer the cumulative assessment method, mainly because the studying time did not push towards the end of the semester, but was equally distributed in time throughout the course. Students coming from a language background other than German found lecture videos - even in German! - and micro modules as the main source of the "inverted classroom concept" appealing, because they offer study freedom [4] (Fig. 3). From the assessment point of view, weighted and summed micro-grading allows the lecturer to be less biased; therefore students' grades are more substantiated [17], [20]. Our results support this statement with the possibility in Moodle to grade homework, glossary entries and tests anonymously. As stated earlier, deeper learning outcomes were achieved [2], [4], [21], [22], because students were given more responsibility for their learning progress and critical thinking during the semester [17], [21]. At-risk students possibly failing the course were identified early and their further learning process was accompanied more closely [2], [4]. Fig. 3. [...] [5] and rating of the material science cumulative blended learning course (right) with regard to inverted classroom teaching and blended learning scenarios, SS2016 to SS2018. Material science is a descriptive subject covering models, intrinsic explanations and scientific descriptions more than internationally well understood formulas. The portfolio grading offers students the possibility to study at their own pace at home, comfortably with student mates offering help - if needed - immediately. Therefore, students with low language skills who may not follow a face-to-face class benefit from the self-studying periods, because they would not become frustrated with not understanding and participating in class. Still, approximately 30% of the students with low language skills showed up during face-to-face time of the inverted classroom scenarios because they appreciated the individuality of small group work. Most students were prepared and had a list of written questions they longed to be answered. They were able to solve even more complex material science problems because the pleasant atmosphere during the small group work enabled students to apply their knowledge without being discriminated against. They also scored well at these particular tests the evening after the face-to-face time. Also, during small-group
It was possible to explain details slowly, partly using translations and sketching most important issues. The teaching becomes more personal and individual. As stated earlier [2], [4], [5], students benefit and score higher achieving a better understanding of the theoretical background in Material Science than students who only studied for one single final exam. In the troublesome summer semester 2018 even fewer students failed the class when their background was non-native German speaking. Even if there no immediate success showing the final grade of the course students benefit from their common learning skills in subsequent classes as mentioned often in questionnaires. Still, the biggest advantage of the new grading system was found to be the individual reusable, time and place independent teaching materials and the transparent level of points throughout the semester which offers direct knowledge of the achieved grade [4], as testemonials of students coming from a different language background support: • "We have enough time to look up unknown words and repeat sentences as often as we need this." • "I have the possibility to talk to friends with the same language background and understand better" • "I do not have to catch everything I need to study in class, I can spend as much time as I need and do not only have a half page full of notes I do not understand at home." • "It is great that I know exactly how many points I need in the different activities to pass the course. It feels good to succeed during the semester and not only towards the end" From "problem based learning" courses it is known that students are often insecure on the scientific details to learn. Therefore guidance and transparency is very important in this course setting. The course setting has to be explained in detail, deadlines have to be given in advance, the learning-outcome needs to be clear for each theme of the course and students need to know why they learn these exact topics. Hands-on exercises and class projects have to be in alignment with the teaching material and content of the self-study period. Most important: the lecture videos have to be in alignment with the teaching material, the present lecture tasks and with the learning objective. Only then, students benefit from digital teaching material. To tell them "somebody has uploaded something good -work on this", will fail! This specific course setting is time consuming, regarding the preparation and maintenance of Moodle activities necessary to generate a stand-alone course. Also, the preparation of necessary lecture films and the individual addressing of students needs resources [2]. However, lecture films, inverted classroom scenarios and the portfolio standard based grading offers a solution to successfully teach GSL first year students. Conclusion The material science course at HTW-Berlin successfully implements the design driven teaching approach along with inverted classroom scenarios based on micro lectures and peer-to-peer lecture films. Visualizing difficult scientific background knowledge offers native German language students but especially non-native German students with severe language deficiencies an equal starting. All learning materials were established and provided via Moodle and therefore time and place independent giving students the chance to fulfil the courseś learning outcome during self-study periods. 
The course design is accompanied by a cumulative micro-grade assessment via multiple activities, such as tests, lectures, presentations, forum discussions, written homework and glossary entries. These grades are summed to obtain the overall course grade, improving students' grades during the semester because the final grade is not based on one 90-minute final exam but rather on multiple different assessments matching the semester course progress. This enables students to participate regularly in multiple activities during the semester that are summed to obtain the overall course grade. The majority of students agreed on enhanced study skills and freedom when forced to study throughout the entire semester instead of learning intensely towards its end. Especially students with migration background benefitted from this new approach because they did not have to struggle with language deficiencies in class but were able to study (with peers) according to their individual learning progress and velocity. The small-group hands-on lectures during face-to-face time also allow for answering questions according to individual needs. Although students with migration background generally scored lower in tests due to the lack of language skills, improved learning outcomes are demonstrated in high-quality class discussions and in overall understanding. Conflict of Interest "The authors declare no conflict of interest". Author Contributions The author conducted the research, analyzed the data and wrote the paper. The author has approved the final version.
2021-08-27T16:55:09.733Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "752ef8e7cebe8c1e045abf88d0d9edc0b2cc1abe", "oa_license": null, "oa_url": "https://doi.org/10.17706/ijeeee.2021.11.3.93-100", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0ce4b73b090a016103bfe4b7021ba69da38d092b", "s2fieldsofstudy": [ "Education", "Engineering" ], "extfieldsofstudy": [] }
245818209
pes2o/s2orc
v3-fos-license
African Cosmovision and Eco-Spirituality: Healing the Ecological Crisis in Africa In our age of huge ecological crisis and colossal consequences for humanity and the earth system, occasioned by man's ill-exploitation of the earth's resources, this paper argues that the root of our ecological crisis lies in the spiritual deficit of modern man, which informs man's attendant exploitative approach to nature. Thus, in the light of the contemporary search for a remedy through ecological spirituality, the paper calls for a spiritual renaissance of Africans through the re-sacralization of their worldview in line with traditional African eco-spirituality, to save the African continent from ecological collapse. For this aim, the paper explores the concept of African eco-spirituality and appeals to its valuable potential in addressing the present ecological crisis and ensuring sustainable ecological management and protection in Africa. The hermeneutical, speculative and prescriptive methods of research are adopted in the study. Introduction The need for ecological preservation and conservation is, more than ever, everywhere emphasized today. Of course, the pervasive effects of the global ecological crisis on all life on earth in our day have made this essential concern a basic necessity. Ranging from environmental degradation, climate change and global warming to the depletion of natural resources (food, water, and energy), the ecological crisis has not only created massive imbalance in the ecosystem, but ruefully threatens the survival of humans as well as the earth's biodiversity. Sequel to this is the global urgency for the conservation and preservation of the ecosystem, if life on earth must continue. Amidst this call, however, is the general recognition that unchecked human activity and ill-considered exploitation of nature are responsible for the problem. Among the oft-proposed mitigating measures are investment in cost-effective and sustainable energy technologies, elimination of distorting subsidies favouring fossil fuels at the expense of renewable alternatives, the development of climate-friendly markets (e.g., carbon trading), targets for concentrations of greenhouse gases, and rationalized consumption and production patterns (Melnick et al., 2005, p.28). However, the failure of these proposals to effectively address the challenge of our ecological crisis has not only awakened the consciousness of many to the limitations of the scientific and technological approaches to the problem, but also to the fundamentally moral, spiritual and religious nature of the problem as associated with man, who is exploiting the earth's resources irresponsibly. This awareness has, in recent times, encouraged the concern for environmentally based spirituality as an approach to tackling the problem. Thus, at the forefront of ecological debates today is the clarion call for a spiritual and moral response to our environmental crisis. Pope Francis, for instance, recognizes that "The ecological crisis is essentially a spiritual problem" (2015, para. 9), and urges "the need for a spiritual and moral response to these environmental crisis" (2015, para. 206). This concern is, thus, the basis of today's advocacy for spiritual ecology or eco-spirituality, based on the assumption that spirituality is an important dimension in contributing to how we value and care for our environment. However, as expressed in their traditional eco-spirituality, caring for the environment has been a part of the African traditional way of life.
African eco-spirituality, which takes its bearing from the African cosmovision or cosmology, embraces the awareness of the sacredness of the whole of reality (the whole "web of life"). This manifests itself in the deep sense of reverence and respect traditional Africans have for the natural world, and in the way they regulated their relationship with nature to ensure that nature and the environment were protected while at the same time serving human needs. Unfortunately, however, this African eco-spirituality appears lost in the continent today, due to the impacts of the materialistic, mechanistic, capitalist and consumerist world economy, which simply ravages the resources of the earth. Consequently, Africans now relate to nature and their environment from a capitalistic, manipulative and exploitative point of view in their attempt to accumulate wealth in terms of money. The net outcome is the present ecological disaster engulfing the continent. In the face of this challenge, this paper emphasizes the need for Africans to re-sacralize their worldview and re-invent their spiritual orientations of love, care and reverence for nature, as embodied in indigenous African eco-spirituality, in order to reliably address the challenge of ecological crisis and ensure sustainable environmental preservation and conservation in the continent. Ecosystem The word "ecosystem" deserves attention here, given its relation to our basic term, "eco-spirituality", which is derived from "ecosystem" and "spirituality". An ecosystem (or ecological system) consists of all the organisms and the physical environment with which they interact (Chapin, 2011, p. III). It consists of biotic and abiotic components that function together as a unit. The biotic components include all the living things, whereas the abiotic components are the non-living things. These biotic and abiotic components are linked together through nutrient cycles and energy flows (Odum, 1971, p.56). Thus, an ecosystem entails an ecological community consisting of different populations of organisms that live together in a particular habitat. Ecosystems are controlled by external factors, such as climate, the parent material which forms the soil, and topography, and by internal factors, such as decomposition, root competition, shading, disturbance, succession, and the types of species present. Ecosystems provide a variety of goods and services upon which humans depend for their survival. These include water, food, fuel, construction material and medicinal plants, the maintenance of hydrological cycles, the cleaning of air and water, the maintenance of oxygen in the atmosphere, crop pollination, and even things like beauty, inspiration and opportunities for research. Ecosystem processes are driven by the species in an ecosystem, and the net outcome of the actions of individual organisms as they interact with their environment is the balance this ensures in the ecosystem. Hence, biodiversity (the biological variety and variability of life on Earth) plays an important role in the proper functioning of the ecosystem (Schulze et al., 2005, p.449). Although humans exist and operate within ecosystems, much of human exploitation of nature has negatively impacted the ecosystem, resulting in a medley of ecological problems facing the world today.
For terrestrial ecosystems, the threats include environmental pollution, climate change, global warming, biodiversity loss, air pollution, water pollution, habitat fragmentation, soil degradation, and deforestation. For aquatic ecosystems, the threats include the unsustainable exploitation of marine resources (for example, overfishing), marine pollution, microplastics pollution, the effects of climate change on oceans (e.g., warming and acidification), and building on coastal areas (Alexander, 1999, p.14). Spirituality The other aspect of the term "eco-spirituality" is "spirituality", which denotes the "deepest spiritual values and meanings by which people live" (Sheldrake, 2001, p.1). It can also be conceived, as Anne Carr defines it, as "the whole of our deepest religious beliefs, convictions, and patterns of thought, emotion, and behaviour in respect to what is ultimate, to God" (1982, p.49). She adds that "spirituality is holistic, encompassing our relationships to all of creation -to others, to society, and nature, to work and recreation -in a fundamentally religious orientation" (1982, p.49). Christina Puchalski broadens the reach of the concept as she sees spirituality as "the aspect of humanity that refers to the way individuals seek and express meaning and purpose and the way they experience their connectedness to the moment, to self, to others, to nature, and to the significant or sacred" (2014, n.pg.). The essential element captured about the concept of spirituality in this range of definitions is that it is "a relationship with the supernatural or spiritual realm that provides meaning and a basis for personal and communal reflection, decisions and action" (VerBeek, 2000, p.32). Spirituality is commonly associated with religion because "people explain their spirituality through a religious perspective" (Wijk, 2010, p.7). Eco-Spirituality "Eco-spirituality" (or spiritual ecology), as is obvious from the definitions above, connects the ecosystem (ecology) with spirituality. In her paper, "Celebrating Earth Day through Eco-Spirituality", Olga Bonfiglio says, "eco-spirituality brings together religion and environmental activism" (2012, n.pg.). In the light of this, Valerie Lincoln (2000, p.227) writes that eco-spirituality is "a manifestation of the spiritual connection between human beings and the environment." As a field in religion, conservation, and academia, eco-spirituality rests on the conviction that "there are spiritual elements at the root of environmental issues" (Sponsel, 2014, p.1719), and that, to decisively address our distressing global environmental crisis, "there is a critical need to recognize and address the spiritual dynamics at their roots" (White, 1967, p.1203). Thus, it recognizes that there is a spiritual facet to all issues related to conservation, environmentalism, and earth stewardship (Sponsel, 2014, p.1718).
Reflecting on this basic concern of eco-spirituality, Virginia Jones, cited in Bonfiglio, says: "Eco-spirituality is about helping people experience 'the holy' in the natural world and to recognize their relationship as human beings to all creation" (2012, n.pg.). Historically, eco-spirituality emerged as a reaction to the Western world's materialism and consumerism (Delaney, 2009, p.32), as well as to the "mechanistic and capitalistic world view" (Schalkwyk, 2011, p.1), believed to be responsible for many intensive forms of environmental exploitation and degradation, leading to the global ecological and environmental crises as we have them today. For this reason, Annalet Van Schalkwyk is convinced that the ecological crisis is "man-made" (2011, p.2). Several mitigating measures have been proposed by environmental experts and by several international organizations and conferences on environmental protection with a view to addressing this frightening global ecological crisis. For instance, the United Nations Millennium Project's Task Force on Environmental Sustainability recommends a series of measures, including investment in cost-effective and sustainable energy technologies, elimination of distorting subsidies favouring fossil fuels at the expense of renewable alternatives, the development of climate-friendly markets (e.g., carbon trading), targets for concentrations of greenhouse gases, and rationalized consumption and production patterns (Melnick et al., 2005, p.28). Again, in the United Nations' 2030 Agenda for Sustainable Development, member states expressed their commitment to protect the planet from degradation and take urgent action on climate change (Tarusarira, 2017, p.398). However, given the seeming inability of these measures to halt the perilous slide of the ecological crisis and its consequences for man and the earth, many have rightly argued that these measures merely deal with the symptoms of the problem rather than tackling the fundamentally spiritual issues at its root: the modern man's defective worldview, which denies transcendence and secularizes and instrumentalizes nature, engenders in him an attitude of ill-exploitation and degradation of nature, leading to the present ecological crisis. In the light of this, James Speth submits that the top environmental problems are not only biodiversity loss, ecosystem collapse and climate change, but also, and more fundamentally, "human selfishness, greed and apathy, and to deal with these we need a cultural and spiritual transformation" (qtd. in Crockett, 2014, n.pg.). And Pope Francis, in his Encyclical Letter Laudato Si', recognizes that "the ecological crisis is essentially a spiritual problem" (2015, para. 9), requiring from man a spiritual and moral response (2015, para. 206). The Pope acknowledges the interconnectedness of human beings with nature, and maintains that "the issue of environmental degradation challenges us to examine our lifestyle" (2015, para. 206), which requires that we "look for solutions not only in technology but in a change of humanity; otherwise, we would be dealing merely with symptoms" (2015, para. 9). The vanguards of eco-spirituality are thereby united in the conviction that the present ecological crisis "needs to be understood as requiring a new way of life, not just a few adjustments here and there" (Ruether, 1992, p.15).
Particularly, the crisis is deemed to "necessitate the search for an ecological re-sacralized worldview of which an 'ecological' understanding of religion and spirituality is part, and for an awareness of the sacredness of the whole reality (or the whole 'web of life'); the cosmos, the earth system and its ecosystem, and of humanity as part of this whole and not separate from it" (Schalkwyk, 2011, p.3). Thus, to resolve the present global ecological crisis, there exists a serious need for an ecological spirituality, or environmentally based religion and spirituality, by which humans must re-examine and re-assess their underlying attitudes and beliefs about the earth, and their spiritual responsibilities toward it. Eco-spirituality is, therefore, "the direct consciousness and experience of the sacred in the ecology which may serve as a sustained source for communities' and individuals' practical struggle for the healing of the earth's ecology and for humanity's sustainable living from the earth's resources" (Schalkwyk, 2011, p.6). It is the consciousness and experience of the physical-spiritual interconnections between ourselves and the environment. In the words of Suganthi (2019), eco-spirituality is simply "having a reverential attitude toward the environment in taking care of it while dwelling within its premises" (n.pg.). It is on the strength of this emerging view that proponents of eco-spirituality emphasize the importance of including spiritual elements in contemporary debates on environmental conservation and preservation, as well as the awareness and engagement of contemporary religion and spirituality in ecological issues. The emphasis, then, is that ecological renewal and sustainability necessarily depend upon the spiritual awareness of humans and an attitude of responsibility towards the ecosystem. This includes, on the one hand, a rejection of the attitude of seeing no other meaning in the natural environment than what serves immediate use and consumption and, on the other hand, a recognition of the sacredness of nature and the adoption of behaviours that reflect that recognition in the utilization of the earth's resources. Lincoln (2000, p.227) identifies five principles of eco-spiritual consciousness: tending, dwelling, reverence, connectedness, and sentience (cited in Suganthi, 2019, n.pg.). Eco-spirituality includes a vast array of people and their traditional practices that intertwine spiritual and environmental experience and understanding. In the case of African traditional eco-spirituality, which is our major concern here, nature is "sacred, imbued by intrinsic spiritual value, and worthy of reverent care" (Taylor, 2009, p.xi). This conditions the attitudes of traditional Africans and their way of relating with nature in a manner that manifests reverence and stewardship, to ensure their sustainable living from the earth's resources and their effective conservation of the earth's ecology. African Cosmovision An understanding of the African cosmovision is crucial for our discussion of African eco-spirituality, because the latter is an offshoot of the former. The African cosmovision or cosmology defines the traditional African worldview, the African concept of the universe and what there is in the universe, which serves as a major determinant of how Africans perceive, interpret and relate with the universe (Ojong, 2008, p.201).
Notwithstanding the presence of a variety of subcultures in the African continent, there exist some basic assumptions across borders which define the African cosmovision, itself rooted in African ontology, the African concept of reality. From the ontological perspective, it is impossible to separate the life of Africans from their religion, as they maintain a densely religious or spiritual notion of reality. The religious awareness of the African people is not an abstraction but a living component of their way of life. Kofi Busia and John Mbiti affirm this about traditional African societies. Busia (1967, p.34) remarks that the African is "intensely and pervasively religious … in traditional African communities it was not possible to distinguish between religious and non-religious areas of life. All life is religious". Mbiti also asserts that "Africans are notoriously religious" (1969, p.72). For B. E. Idowu, "in all things [Africans] are religious... for the African to be is to be religious" (1967, p.3). Given the background influence of this religious and spiritual ontological framework, the African cosmovision or worldview is also densely spiritual. Africans believe that the entire cosmos is the product of God, and that "there are three intimately related cosmological modalities, which encompass a continuum of realities" (Ijiomah, 2014, p.97). These cosmological domains include the sky, where God, the major deities and angels reside; the earth, where humans, animals, natural resources and physically observable realities abide; and the underworld, where ancestors and some bad spirits live (Elemi, 1980, p.54). Irrespective of these different categories of cosmological domains, Africans believe that they are not separate but interrelate and interact with each other by vital force (Tempels, 1959, p.5). And although all the realities belonging to these different domains of the sky, the earth and the underworld are categorized into two, the physical and the spiritual, yet "they all relate with each other" (Maurier, 1985, p.65). This relationship or interaction is made possible by the fact that every existent (including the environment) has a spirit or force inhabiting it. These forces, which must be acknowledged and treated with reverence through the reverential handling of every existent, relate as contraries and yearn for each other (Ijiomah, 2014, p.99). This belief accounts for Africans' sense of the sacredness of nature as well as their veneration of things, which prompted the early European explorers to the African continent to describe African religion as animistic. Thus, in the cosmovision of the Africans, every reality in the universe is not only a product of God; each has both physical and spiritual elements, and all relate with each other through vital force, making them yearn for each other. This creates for Africans a strong notion and conviction that life is a unity consisting of three integrated domains, namely the natural world, the human world and the spiritual world. Mbiti captures this African integrated worldview thus: "The spiritual universe is in unity with the physical universe, and these two intermingle so much that it is not easy, or even necessary at times to draw the distinction or separate them" (1969, p.72).
In other words, the African world, which exists in two spheres (the visible, tangible and concrete world of humans, animals, vegetation and other natural elements, and the invisible world of the spirits, ancestors, divinities and the supreme deity), is one world, indivisible, with one sphere touching on the other. Commenting on the implications of this African unitary and integrated cosmovision, T. Okeke submits that, for the Africans, the visible and the invisible are perceived as one, interrelated, interacting system, where agency and causality form a gigantic network of reciprocity, which translates into several acts of what we call religion, respect for nature, sacrifice, divination and communalism, which mark the relations between spirits and ancestors on the one hand, and men on the other hand (2005, p.3). It is this integrated and densely religious and spiritual cosmovision that gives meaning, motivation and direction to African eco-spirituality. African Eco-Spirituality As noted above, spirituality may be defined as "a relationship with the supernatural or spiritual realm that provides meaning and a basis for personal and communal reflection, decisions and action" (VerBeek, 2000, p.32). African eco-spirituality defines the African's direct consciousness and experience of the sacred in the ecology, which serves as a motivation for the responsible management of the earth's resources while at the same time living sustainably from them. African eco-spirituality not only presumes the sacredness of the ecosystem; it also considers the nature of each being and of its mutual connection in an ordered system of spiritual connectedness. Features of this African eco-spirituality include: the belief in the earth as the sacred possession of God (Lang, 2018, p.61), which has vital force and forms part of the continuum with the other spheres in the cosmological modalities; man as the steward, not master, of the earth; life as a continual act of prayer and thanksgiving to God the Creator for the gift of life and of the earth for man's sustenance; knowledge of and a symbiotic relationship with the earth; and awareness of the impacts of one's actions in the use of the environment on the present and future generations. Such a spiritual orientation towards the ecosystem necessarily implies mutuality and reciprocity between man, the earth and the cosmos. It is also particularly rooted in the belief that humans communicate with the spiritual world (God, deities, ancestors) via the natural world (the earth). Hence, to keep this channel of communication between the human and spiritual worlds open, it is important to conserve the earth by creating a favourable environment where flora and fauna can have their habitats. This means that conservation of the environment is key to a fruitful spiritual connection between man and the spiritual world. It also means that, for the Africans, nature and the environment are part and parcel of the sacred reality of life, or one with man, since there is no separation. To destroy nature and the environment is to destroy oneself. Living in harmony with the natural world translates to living in harmony with the spiritual world, as they are interconnected and co-dependent (Tarusarira, 2017, n.pg.). It is believed that punishment takes place, in the form of drought, diseases and conflicts, when the rules and norms that protect the environment are violated (Gonese, 1999, p.9). Africans indeed believe that reciprocity between land, plants and humans makes life on earth possible.
This shows up in the efficient land use and management of traditional Africans, which prevented land degradation even as land was utilized for agriculture, the extraction of natural resources and other land-based activities considered fundamental to livelihoods, food security, incomes and employment. It is for this reason that, in Africa, the entire relationship between humans and nature, including activities such as the use of the environment, has a sense of the sacred, with deep religious and spiritual underpinnings. Writing about African eco-spirituality, McDonnell says that "relationships between nature and humans, spirit and nature are not dichotomized or compartmentalized, but are integrated into an interdependent system of existence that is tied together through spiritual interactions" (2014, p.98). Turaki (2006, p.95) observes that African eco-spirituality is steeped in a "profound respect and reverence without exploitation for nature". This has had immense positive benefits for the traditional African management of the ecosystem. This eco-spirituality automatically ensures that nature and the environment are protected. It abhors all forms of environmental degradation, while encouraging environmental conservation and preservation in diverse ways. Furthermore, this eco-spirituality conserves biodiversity, as animals are also considered a part of a larger spiritual system and are respected and not killed except in self-defence or to provide immediate sustenance or sacrifice (Tarusarira, 2017, n.pg.). For this reason, in certain cases some animals may be regarded as sacred to the devotees of a particular divinity (hence not to be killed or endangered), or natural phenomena such as trees, hills or rivers may be deified and hence not to be degraded or polluted. Moreover, non-living elements, such as rain, are also deemed sacred and powerful spirits, as they are needed to sustain life. Human beings are, therefore, seen as being spiritually connected to all that happens within the greater frameworks of nature, which must be respected, conserved and used with care, rather than seen as a given to be exploited and abused through unchecked human activity. Hence, land is considered sanctified by its possession by God and the ancestral spirits: "land does not belong to humans, but that it belongs to ancestors or a God" (Workineh, 2005, p.17). Humans are the custodians of the land. They have to take care of it so that they can pass it on to the yet unborn generation (Wijk, 2010, p.12). In this vision, nature is seen as a living being that works together with mankind. The earth is therefore not seen as a property that can be exploited in the way humans simply desire, but as something that has to be taken care of in a way that benefits the whole community (including the unborn). Good care of the land can secure health and survival through responsible farming practices, such as shifting cultivation, which allows the land to regain its lost nutrients after a period of cultivation, and through agricultural cycles shaped by the seasons and by religious observances covering the entire year. The entire farming cycle was marked by ritual practices, which included sacrifice to and appeasement of the spirits or God, prayer, and requests for communal intercession. In every community there existed traditional religious specialists whose roles were connected with agriculture.
They carried out religious observances throughout the year in an annual cycle of rituals intended to promote agriculture and ensure environmental protection (Lang, 2018, p.62). Knowledge of traditional agricultural practices was built up through years of experience in coping with environmental conditions and was sustained by the strong notion of the interrelationship of the human, natural and spiritual realms. Loss of African Eco-Spirituality and Consequences Today, Africans, like the rest of humanity, face a mounting environmental crisis. Part of the reason for this, with regard to the African continent, is the loss of African eco-spirituality, due to the influence of Western materialism, consumerism and a secularized worldview. The origin of it all is colonialism. When, in the nineteenth century, most of Africa was colonized by various European powers, it was ostensibly to bring 'enlightenment' to the 'dark continent'. However, "colonialism eventually became synonymous with material exploitation, cultural expropriation and anthropological impoverishment" (George Ehusani, 1997, p.18). Citing Ivan Sertima, Ngugi wa Thiong'o says of the state of emergency all over Africa occasioned by colonialism: "No human disaster… can equal in dimension of destructiveness the cataclysm that shook Africa… the thread of cultural and historical continuity was so savagely torn asunder that henceforth one would have to think of two Africas: the one before and the one after the Holocaust" (1983, p.86). Among the swarming consequences of colonialism in Africa was the loss of the African sense of eco-spirituality, because with colonization the indigenous sense of the sacredness of nature was historically replaced by an imposed Western colonial belief that land and the environment are commodities to be used and exploited, with the exploitation of natural resources in the name of socio-economic evolution. This perspective "remove[d] any spiritual value of the land, with regard only given for economic value, and this served to further distance communities from intimate relationships with their environments" (Ritskes, 2012, p.45), with devastating consequences for the people and their environment. Wijk writes that where the ancient agricultural systems were based on a relation between human, nature and spirituality, the Western colonial world separated this triad (2010, p.15). In consequence, today in Africa the sense of sacredness for the natural world is lost, as land, for instance, is seen merely as acreage to be exploited, bought and sold. Ehusani confirms that "today the characteristic African humanness, personalism, hospitality, wholesome relations, and the overwhelming sense of the sacred, have been infested and obscured by the cankerworm of Western materialism and individualism" (1997, p.20). Another factor responsible for this loss of African eco-spirituality is the secularist philosophy of our age and its materialism, which reject the sacred value Africans attach to land and empty the land of all its spiritual roots. These spirits of secularism and materialism have long been embraced by Africans at the peril of their environmental life, as shows up in the irresponsible use of land and other natural resources, with lethal consequences for the ecosystem in the continent.
The influence of this Western secularism and materialism in Africa manifests itself in the ethno-religious conflicts and civil wars, which have become more vicious in the continent, with massive loss of human and natural resources, for Africans now possess lethal weapons of war to maim and kill their fellow men, since life has lost its sacred value. Also, in the last two decades, armed robbers, hired assassins, terrorists and bandits have multiplied their ranks and laid siege to the continent, killing and maiming their victims with reckless abandon, because life has lost its value with the loss of the sense of the sacred, inspired by materialism. With the same loss of the sense of the sacredness of nature, herdsmen are not only frequently terrorizing human communities but are also massively destroying the terrestrial and aquatic environments. Globalization has also resulted in the loss of African eco-spirituality through the exposure of Africans to the capitalistic economic system of thinking, based on the idea that one can use whatever natural resources one desires and that it is necessary to accumulate wealth in terms of money (Rolston, 2006, p.308). Reflecting on this, Schalkwyk observes that "the present ecological disaster is a result of human exploitation of the earth's natural resources due to a capitalist and consumerist world economy, which disadvantages the larger majority of the world's population, but most of all, which ravages the bounty of the earth in the name of using 'natural resources' in a productive economy" (2011, p.2). This creates a difference in approach in Africans' relationship with nature and the ecosystem, whereby they now adopt a capitalistic and manipulative approach in dealing with nature in an attempt to accumulate wealth in monetary terms. This exploitation creates crises such as the extinction of species and loss of biodiversity; the destruction of the habitats in which species need to survive; pollution of the water, air and environment in which humans, animals and plants have to exist; the depletion of mineral resources, forests and fisheries; changing climate patterns; global warming; and so forth (Schalkwyk, 2011, p.2). Thus, the spiritual elements of justice, mutual trust and respect for fellow human beings and nature have all disappeared because of the tendency to exploit the land and relationships for personal benefit (Wijk, 2010, p.12). The intrinsic motivation of Africans to take good care of the land (good stewardship), induced by indigenous African eco-spirituality, is now absent because of the expansion of this capitalistic thinking. Extrinsic motivations, like the pressure to behave according to the new economic principles, have become the new norm. The sense of the sacred and the conviction to take care of the ecosystem, for spiritual equilibrium with the Supreme Being and the ancestral spirits as well as for future generations, have disappeared because of the exploitation of the land and environment for monetary benefits. The communal cohesion and reciprocity established by African eco-spirituality have also consequently disappeared. Besides, the internationalization of food production has made modern African farmers focus more and more on the economic profitability of their production at the expense of the ecosystem. As Price puts it, "pressure from markets and cash undermines what farmers know as the right thing to do" (Price, 2007, p.30).
This has resulted in the over-cultivation and exploitation of natural resources to fulfil production needs (Chapin et al., 2009, p.242). In this way, the environment is sacrificed for development and economic benefit. Under these influences, Africans have been challenged to regard their cropping calendar, land ritual practices and festivals, which ensured effective land management, as wrong, just because the practices of the new religions in the continent are not in line with African traditions. The strong link between spiritual values and environmental management, which had positive effects on the ecosystem, has now disappeared. In general, Africans have become disconnected from the natural world, to which they once attached significant spiritual value. The consequence of this is the ecological crisis facing the continent today. Development organizations and environmental agencies neglect the importance of spirituality, focusing mainly on the same narrow capitalistic and materialistic ends. This is at the base of the environmental degradation caused by multinational companies operating in Africa today. These companies, which are overly concerned with profit maximization, show little or no concern for the damaging effects of their resource exploitation on humans and on the ecosystem. Ken Saro-Wiwa, cited in Kekong Bisong, observes that, as a result of exploitation in the Niger Delta region of Nigeria, "the once fertile African farmland has been laid waste by constant oil spills and acid rain. Puddles of ooze, the size of football fields dot the landscape, and fish and wildlife have vanished" (2005, p.36). The Wildlife Fund for Nature has calculated that Shell's gas-flaring activities in Nigeria are a major contributor to global warming (qtd. in Kekong, 2005, p.36). And according to the United Nations Conference on Environment and Development, "due to the nearly four decades of oil extraction, the Niger-Delta coastal rainforest and mangrove habitats is the most endangered river delta in the world" (cited in Kekong, 2005, p.36). Recommendations Certainly, the environmental crisis in the African continent, and by extension the world, has created a need for environmentally based religion and spirituality. African eco-spirituality is necessary in this process, as it frees man from consumeristic and materialistic approaches to nature and imposes a sense of understanding of the interrelatedness of cosmic realities and the sacredness of nature. This ultimately encourages the protection of the environment, as it creates an understanding of the earth as a total community of living beings, in which humans have to immerse themselves in nature rather than dominate or objectify it. This immersion leads one to find one's connection with nature, towards respecting and loving nature. In line with the current search for eco-spirituality for ecological conservation, African eco-spirituality, in its valuing of nature, thus presents a solution to the ecological crisis facing the African continent today, occasioned by Western materialistic, consumerist, capitalistic and secularized worldviews. Not only is it possible to counteract these degrading forces on our environment with African eco-spirituality; we can also restore the environment ruined by these forces through certain recommendations in line with the spirit of this concept. In view of this, we recommend that our ecological crisis needs to be seen not just as a crisis in the health of nonhuman ecosystems.
Rather, we need to see the connection between the impoverishment of the earth and the impoverishment of the human spirit through these defective worldviews. In other words, our ecological crisis has its root in the spiritual disconnection of many Africans from their sense of the sacredness of nature, which must be rediscovered by reliving the belief in the interrelatedness of cosmic realities, the divine ownership of the earth, and the sacredness of nature, which African eco-spirituality represents. Again, we identify with the assertion that "a healed ecosystem -humans, animals, land, air, and water together -needs to be understood as requiring a new way of life, not just a few adjustments here and there" (Ruether, 1992, p.15). Hence, we recommend the re-animation of our obligation for a reverent and responsible use of nature's resources, so as to be in harmony with the spiritual world and so ensure the conservation and preservation of the ecosystem for the present and future generations, as African eco-spirituality enjoins. Again, we recommend a new ecological vision and communal ethic that can knit Africans together across religious and ethnic divides for the good of our environment, in which the elements of justice, mutual trust, respect for fellow human beings and commitment to protect the environment are seen as marks of authentic existence and Africanness. To preserve the earth is to be eco-spiritual; and to be eco-spiritual is to be an authentic African. Conclusion The environmental crisis that Africa, as well as the rest of the world, faces today, as is evident in our discussion above, is closely connected with a dysfunctional worldview and a lopsided concept of reality, namely materialism, consumerism and capitalism. This dangerous triad is manifested everywhere today in the denial of transcendence, the denial of the interrelatedness of cosmic realities, and the illusion of man's absolute control and of the unlimited power of scientific knowledge. These are at the root of today's explosive wave of industrial and technological civilization and are all by-products of the spiritual poverty of the modern man. This worldview has eroded from the hearts of many today the sense of the sacredness of nature and the requisite attitude of stewardship towards nature. With such a defective worldview, starved of spirituality, humans have today disrupted their harmony with nature and dislocated the complex coherence within reality, resulting in the ecological crisis swamping the entire world. Ensnared in this web, Africans have lost their indigenous sense of eco-spirituality, by means of which they had regulated their relationship with the environment to ensure its conservation and preservation. This calls for the reinvention of African eco-spirituality, which has the potential not only to save Africans from the spiritual bug caused by Western materialism, consumerism and capitalism, which is at the root of our ecological crisis, but also to heal our ecosystem from the degrading forces of this bug. In African eco-spirituality we therefore see a credible alternative to the despicable forces of Western materialism, consumerism and capitalism, which are destroying our ecosystem. In their place, African eco-spirituality presents a vision of nature as a sacred living being that works together with mankind, and of the earth not as a property that can be exploited in the way humans want, but as a sacred sphere to be taken care of in a way that benefits the whole community (including the unborn).
Earthworm (Eisenia andrei)-Mediated Degradation of Commercial Compostable Bags and Potential Toxic Effects The availability of compostable plastic bags has increased greatly in the past few years, as it is perceived that this type of bag will be degraded after disposal. However, there are some knowledge gaps regarding the potential effects on soil ecosystems. We assessed the rate of degradation of samples of four different types of commercial compostable bags in vermicomposting systems with the earthworm species Eisenia andrei. We also evaluated the biological response of E. andrei (survival and reproduction) to microplastics (MPs) from fragments of the plastic bags (<2000 µm) and assessed seedling emergence in common garden cress (Lepidium sativum L.) exposed to micronized plastic (<250 µm) and the respective leachate, following OECD and ISO guidelines, respectively. The rate of degradation differed significantly depending on the type of plastic rather than on the substrate in the vermicomposting system. This finding suggests that the degradation process is more dependent on the microbial community colonizing the different plastic types than on earthworm activity. Regarding the biological response of the soil system, L. sativum seedling emergence was not significantly affected; however, earthworm reproduction was affected, suggesting that, although compostable, some of the formulations may potentially be toxic to soil fauna. Introduction Plastic pollution is a worldwide issue that has resulted from consumption patterns in daily life. The disposal of single-use plastic items, including bags, is causing the rapid and ever-increasing accumulation of plastic debris in aquatic and terrestrial ecosystems, including soil [1]. Plastic manufacturers have followed two different routes with the aim of tackling this problem [2]. One is the introduction of reusable plastic bags, which decreases the frequency of disposal and reduces the influx of plastic debris into soil. Another is the development of biodegradable or compostable bags, which can be rapidly fragmented and degraded, as they are made from polymers such as poly(butylene adipate-co-terephthalate) (PBAT) or polylactic acid (PLA) that undergo weathering via hydrolysis and mechanical and enzymatic activity, leading to their eventual disappearance [3]. As the weathering process leads to the polymer chains being broken down into oligomers and monomers that are more easily transformed by microbes [4], this appears to be the best option [5]. Therefore, many manufacturers have introduced bags made of these polymers into the market and labeled them with national and European certificates of compostability as a commercial advantage for their products [6]. However, the fragmentation and degradation of plastic items result in the formation of microplastics (MPs) of different shapes and sizes [7]. In addition, the weathering of plastics leads to the release of chemical additives and degradation products through leaching [8]. Biodegradable plastics are quite complex in terms of their chemical composition [9,10], and despite the exponential increase in the number of new products of this type, their ecological effects remain widely unknown, which raises concern about the environmental safety of the by-products of discarded compostable bags [11].
Furthermore, these compostable bags may reach liquid and solid waste treatment facilities and undergo degradation in other matrices, such as sewage sludge (SS) or organic solid wastes, which, upon disposal, eventually end up in the soil [12]. This leads to scenarios in which compostable bags are subjected to abiotic degradation processes before reaching the soil. As such, studies have been performed in soil [13,14] and in compost with a thermophilic phase [15,16], showing a clear effect on the plastic surface [13], a significant decrease in weight [16], and the presence of oligomers and changes in particle size [15]. However, these studies have not linked abiotic and microbial degradation with what occurs once these materials reach terrestrial ecosystems, where plastics are further processed by soil-dwelling organisms such as earthworms [7,17]. Earthworms are known to be able to transform organic matter (OM) and contribute to soil microbial turnover, as they can process different types of organic waste, e.g., SS, and non-recyclable solid waste, such as spent coffee grounds (SCGs), into material that can potentially be used as fertilizer [18,19]. The role of earthworms in transporting and transforming plastic particles has also been demonstrated, and the abundance of MPs has been found to be altered in vermicomposting systems, suggesting that earthworms can alter the availability of MPs and plastic adjuvants in soil systems [20-22]. There is, therefore, a need to investigate the earthworm-mediated degradation of plastics in intermediate matrices prior to disposal in the soil system and also to assess the potential toxicity of plastic degradation products to soil fauna and flora. Several studies published in the last decade have used toxicity tests following ISO and OECD guidelines to assess the environmental impact of plastic particles using model species. These include studies of the response of Eisenia fetida/andrei, as representative earthworms, to conventional polyethylene (oxidative stress and internal lesions) and polyester microfibers, as well as to PLA-based bioplastics (effects on reproduction) [23-25]. Lepidium sativum L., as a representative terrestrial plant, has also been studied, with growth inhibition and physiological parameters shown to be altered in response to conventional polyethylene terephthalate (PET) and polycarbonate (PC) [26-28] but also to PLA-based bioplastics [29]. Nonetheless, to our knowledge, few or no studies have used both types of model species to assess the toxicity of plastics or, specifically, of compostable polymers. Another factor for consideration is the potential source of toxicity. To date, most studies conducted in terrestrial ecosystems have overlooked the leaching from plastic structures due to weathering, which results in the release of compounds such as phthalates and flame retardants that may be toxic to soil organisms [30,31]. The aim of the present study was to assess how samples of different types of certified compostable plastic bags are degraded in vermicomposting systems with two distinct substrates (SS and SCGs). The study also aimed to assess the toxicity of the degradation products, i.e., MPs of different sizes and plastic leachates, to the model plant species L. sativum and the model earthworm species E. andrei, following ISO 18763 and OECD 222 standard guidelines [32,33], respectively.
We hypothesized that the combination of the composition of the substrate and the composition of the plastic bags would have a role in the degradation rate of the latter [34,35]. We also hypothesized that the presence of polylactic acid (PLA) in the plastic bag composition can potentiate negative effects on E. andrei survival and reproduction [23] and that higher degradation can result in higher toxicity due to the release of plasticizers and other unknown by-products of plastic bag degradation into the soil [36]. The data obtained were examined by multivariate analysis to identify key chemical components of compostable bags that should be carefully considered during manufacture because of their potential effects on soil fauna and flora during the decomposition of the compostable plastics. Source, Preparation and Characterization of Commercial Compostable Plastic Bags Plastic bags displaying compostability certifications on their labels were bought from online suppliers and local markets and were given identification numbers for inclusion in the LABPLAS project sample database: 069_LPB-Bag_Pat-GW (hereafter 069), 070_LBP_BagBioTuf_PHA (070), 072_LBP_Bagbrown (072) and 073_LBP_BagBio100 (073). The plastic composition was provided by the supplier and also checked by Fourier Transform Infrared Spectroscopy coupled with Attenuated Total Reflectance (FTIR-ATR) using a Nicolet 6700 FT-IR spectrometer fitted with a Smart Orbit diamond accessory (Thermo Electron Corporation, Waltham, MA, USA). Further characterization was performed for phthalates through Gas Chromatography-Mass Spectrometry (GC-MS) using an Agilent 5975C TAD series chromatograph (Agilent Technologies, Santa Clara, CA, USA) (methodology published by Abril et al. [37]). All assessments were performed at the Centro de Apoio Científico-Tecnolóxico á Investigación (CACTI, UVigo). Prior to the different tests (degradation or toxicity), the plastic bags were prepared and fragmented according to the test purpose. For the degradation tests, each plastic bag was cut with scissors into square pieces of approximately 25 cm² in area, and the dry weight was then determined. For the earthworm toxicity test, the samples were further fragmented into pieces in which the largest dimension was less than 2000 µm. For the seedling emergence test, the plastic fragments were micronized in an Ultracentrifugal Mill (ZM 200; Retsch Verder Scientific, Haan, Germany) at 16,000 rpm into particles of less than 250 µm in size to ensure interaction with the organism. Dry ice was added during the micronization process to prevent heating. To distinguish the potential toxicity of the plastic particles themselves from that of their leachates, <250 µm fragments were added to distilled water at a concentration of 10 g/L, and the suspensions were thoroughly mixed to extract water-soluble components, as previously described [38].
Degradation Test To assess how the plastics degrade under vermicomposting conditions, previously weighed and identified samples were placed within the surface layer of two ongoing active vermicomposting systems, one containing SS obtained from a local wastewater treatment plant (WWTP) (Moaña, Pontevedra, Spain) and the other containing SCGs obtained from a local cafeteria (Faculty of Biology, University of Vigo). The location of each plastic sheet within the vermicomposting system was marked for its sampling time, and the plastic bag ID was marked with a plastic stick. The main characteristics of each substrate are presented in Table 1 (Table 1. Main characteristics of the initial substrates in each vermicomposting system used to assess the degradation of the plastic samples; EC: electrical conductivity; OM: organic matter). Each vermicomposting system comprised 680 L containers with a surface area of 1 m² and a depth of 50 cm, with a bottom layer of vermicompost obtained from the respective original material serving as a bed prior to the addition of fresh SS or SCGs, similar to other studies performed by the group [39]. Each system contained approximately 6000 individuals/m² (5990 ± 53 for SS and 6212 ± 100 for SCGs) of E. andrei at the start of the experiment. Earthworm activity was sustained by adding fresh substrate material (SS or SCGs) every two weeks, which also promoted upward mobility and interaction with the deposited plastic sheets, and moisture was maintained by spraying the material with water three times a week. After 15, 30, 60, 90 and 120 days, each vermicomposting system was sampled locally at the marked locations for the corresponding ID and time by retrieving a vertical sample of the system, minimizing the mixing of the vermicomposting layers. For each sampling time, 4 pieces of each type of plastic were retrieved, washed and dried prior to weighing. The plastic material was examined for tears, deformation and softening, and the weight loss, calculated as the difference between the initial and final weights, was recorded as an indicator of degradation.
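The weight-loss endpoint above reduces to simple arithmetic; as a minimal illustration (in Python, with invented example weights rather than measurements from this study), percent weight loss per retrieved piece and the replicate mean could be computed as follows.

```python
def percent_weight_loss(initial_mg: float, final_mg: float) -> float:
    """Percent weight loss of a plastic piece, used as a proxy for degradation."""
    if initial_mg <= 0:
        raise ValueError("initial weight must be positive")
    return 100.0 * (initial_mg - final_mg) / initial_mg

# Invented example: four replicate pieces of one bag type retrieved after 120 days.
initial = [250.0, 248.5, 251.2, 249.8]  # mg, dry weight before placement in the system
final = [190.1, 188.0, 192.3, 189.5]    # mg, dry weight after washing and drying

losses = [percent_weight_loss(i, f) for i, f in zip(initial, final)]
print(f"mean degradation: {sum(losses) / len(losses):.1f}% (n = {len(losses)})")
```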
Lepidium sativum Seedling Emergence Test The L. sativum seedling emergence test was conducted in glass Petri dishes (ø = 80 mm) lined with Whatman #1 filter paper, in an adaptation of the ISO guidelines [32]. Before the start of the test, powdered plastic was added directly to Petri dishes, followed by 1 mL of distilled water and 3 mL of a LUFA 2.2 soil extract (1:5 weight/volume). Soil extracts were used to simulate the existing interstitial soil-water interface for root development and to predict plastic-plant interactions more realistically. Other Petri dishes were spiked with leachate solutions by adding 1 mL of serial dilutions of the original 10 g/L leachate solution and 3 mL of soil extract. In addition, control dishes were prepared with 1 mL of distilled water and 3 mL of soil extract. The resulting concentration ranges were 0.5, 1.25, 2.5 and 12.5 g/L for powdered plastic and 0.25, 0.5, 1.25 and 2.5 g/L for plastic leachate. In each replicate (n = 3), groups of 30 L. sativum seeds were added to all dishes, which were then held in the dark at room temperature for 7 days. The germinated seeds were counted on days 1, 2, 3 and 7, and the root and shoot lengths of each germinated seed were measured after 7 days with a caliper (metric scale). Seeds were considered to have germinated when the shoot was longer than 1 mm. The test was considered valid when, in the control, seedling emergence at the end of the test was higher than 70% and the average root length was greater than 30 mm. The germination index (GI) was calculated after 7 days based on the relative seed germination (RSG), i.e., the ratio of germinated seeds under test and control conditions, and the relative root growth (RRG), i.e., the ratio of the average root length (in mm) of germinated seeds under test and control conditions (Equation (1)). The relative shoot growth (RShG), i.e., the ratio of the average shoot length (in mm) of germinated seeds under test and control conditions, and the root-shoot ratio (RSR), i.e., the proportion between the average root length (in mm) and shoot length (in mm) under a given condition (test or control), were also calculated. GI = RRG × RSG (1) That is, the germination index (GI) was calculated from the relative root growth (RRG) and the relative seed germination (RSG).
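As a worked illustration of Equation (1) and the related indices, the short Python sketch below computes RSG, RRG, RShG, RSR and GI for a hypothetical dish; the counts and lengths are invented for illustration and are not data from this study.

```python
def seedling_indices(n_germ_test, n_germ_ctrl,
                     root_test_mm, root_ctrl_mm,
                     shoot_test_mm, shoot_ctrl_mm):
    """Seedling-test indices as defined in the text: RSG, RRG, RShG,
    RSR (for the test condition) and GI = RRG * RSG (Equation (1))."""
    rsg = n_germ_test / n_germ_ctrl        # relative seed germination
    rrg = root_test_mm / root_ctrl_mm      # relative root growth
    rshg = shoot_test_mm / shoot_ctrl_mm   # relative shoot growth
    rsr = root_test_mm / shoot_test_mm     # root-shoot ratio of the test condition
    return {"RSG": rsg, "RRG": rrg, "RShG": rshg, "RSR": rsr, "GI": rrg * rsg}

# Invented example: 24/30 seeds germinated in a test dish vs. 26/30 in the control,
# mean root lengths 45 vs. 60 mm, mean shoot lengths 20 vs. 22 mm.
print(seedling_indices(24, 26, 45.0, 60.0, 20.0, 22.0))
```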
Chronic Toxicity Test with Eisenia andrei The response of E. andrei to soil spiked with the different plastics was assessed following the OECD standard guidelines [33], with some modifications. Prior to the start of the test, LUFA 2.2 soil was spiked by adding plastic fragments to dry soil, mixing thoroughly and adding distilled water to at least 50% of the water-holding capacity of the soil. Two concentrations of plastics were tested: 2 and 10 g/kg dry weight (dw). Soil without added plastic was used as a control. For the toxicity test, E. andrei specimens were retrieved from culture systems in which they had been fed continuously with SCGs, and they were acclimatized to a batch of fresh LUFA 2.2 soil for one week. At the start of the test, groups of 10 mature specimens of E. andrei, each with a well-developed clitellum and weighing 304 ± 5 mg (average ± standard error), were thoroughly washed and placed in replicate glass vials (n = 3) containing at least 300 g of soil. The tests were carried out at 20 ± 2 °C under a photoperiod of 16:8 h light-darkness for 8 weeks. Pre-moistened non-spiked SCGs (7 g) were spread across the soil surface of the test system as a food source, and water was replenished weekly. After 4 weeks, the surviving adult earthworms were removed, counted, washed and weighed to determine any change in body mass. After 8 weeks, the juveniles and cocoons were sorted by spreading the soil over a white tray, picked out with tweezers and counted with the aid of a magnifying glass. Following the OECD guideline, the test was considered valid when adult earthworm mortality in the control boxes was less than 10% after 4 weeks and when, after 8 weeks, the coefficient of variation of reproduction was less than 30% and more than 30 juveniles were produced per replicate. Statistical Analysis The Shapiro-Wilk test was used to check the normality of the data. For the degradation test data, for each vermicomposting condition, two-way ANOVA was used to detect any significant differences (p < 0.05) between sampling times and plastic types, as well as between sampling times and the substrate used for each polymer, while three-way ANOVA was performed to additionally assess the role of the substrate in plastic degradation, together with Tukey's post hoc test. For the ecotoxicological assessment, one-way ANOVA, together with Dunnett's post hoc test, was used to detect significant differences (p < 0.05) between the control and test conditions. Principal component analysis (PCA) was used to correlate the E. andrei toxicity data with the plastic degradation data and to identify key factors explaining the toxic effects on the organisms. All analyses were performed using SigmaPlot, version 14.0. Characterization of Commercial Plastic Bags The plastic bag material characterization provided by the supplier is presented in Table 2, together with the data acquired through FTIR-ATR. The FTIR-ATR analysis identified all the compostable materials as polyesters containing a terephthalate group. This is compatible with the PBAT composition declared by the several brands analyzed. In addition, other components were also detected, namely talc in 070 and other ester groups in 070, 072 and 073, compatible with the aliphatic polyesters PLA and PHA. Through GC-MS, phthalates were identified in extracts obtained from the materials, with the highest concentration (in µmol/g) being observed in 070 (Table 2). Degradation of Commercial Plastic Bags The maximum degradation recorded in the vermicomposting systems was 24.8 ± 0.5% (mean ± standard error), corresponding to bag 073 after 120 days in SCGs (Figure 1). While a significant weight loss corresponding to plastic bag degradation was recorded in all systems after 90 days, in direct contrast, the degradation of bag 070 was almost non-existent (<2%). The degradation of bag 069 reached a maximum after 90 days in SCGs (17.3 ± 0.6%), while for bag 072, maximum degradation was reached after 120 days in SS (16.2 ± 2.9%). A comparison of the two vermicomposting systems (SCGs and SS) revealed small but significant differences (p < 0.05) in the degradation of the materials over time: for 070 after 120 days and for 072 and 073 after 15 days (SS > SCGs), and for 073 after 90 and 120 days (SCGs > SS) (detailed information on the significant differences observed in the three- and two-way ANOVAs is given in Tables S2 and S3). In addition, the degradation of bag 069 in SCGs reached a plateau between 90 and 120 days (Figure 1). (Table 2 notes: the bags are certified EN-13432 and "home" compostable by TÜV (Austria) [40]; PLA: polylactic acid; PBAT: polybutylene adipate-co-terephthalate; PHA: polyhydroxyalkanoates.) Response of Lepidium sativum to Microplastics and Leachates from Commercial Plastic Bags The L. sativum seedling test fulfilled the performance criteria outlined in the ISO guidelines, with percentage germination reaching 81.2 ± 0.8% (mean ± standard error) after 7 days under control conditions and the average root length reaching 60.4 ± 1.5 mm (mean ± standard error) in the control replicates. The results contradict the hypothesis that the degradation of compostable plastic could lead to the release of toxic compounds to plants.
No overall effect on seedling emergence after 7 days was observed when the L. sativum seeds were exposed to either powdered microplastics (<250 µm) or leachates from the four types of plastic bags, even at a concentration of 12.5 g/L. No significant differences in RSG were observed after 7 days (Figure S2). However, a significant decrease (p < 0.05) in RRG was observed for exposure to 0.25 and 0.5 g/L of bag 073 leachate (Figure 2), and a significant decrease in GI was observed for the same material at 0.25 mg/L (Figure 3) (detailed information in Table S4). Regarding RRG, there was a positive tendency with the increase in concentration from 0.25 to 2.5 g/L, while the GI values at the higher concentration (2.5 g/L) were as high as in the control. Earthworm Response to Microplastics from Commercial Plastic Bags The chronic toxicity test fulfilled the validity criteria outlined in the OECD standard guideline [33], with adult mortality of 6.7% in the controls, the number of juveniles produced reaching 35.3 ± 1.8 in the controls, and a coefficient of variation of reproduction of 8.6%. As expected, the survival of E. andrei adults was not affected after 28 days. However, after 56 days there was a significant decrease in the numbers of juveniles and cocoons produced (p < 0.05) relative to the controls (detailed information in Table S5). For exposure to the highest concentration of fragments of all plastics, the number of juveniles produced was significantly lower than in the controls. This was also observed for exposure to bags 070 and 072 at a concentration of 2 g/kg. The number of cocoons produced was also significantly lower for exposure to bags 069 and 073 at a concentration of 10 g/kg (Figure 4). Multivariate Analysis The PCA identified two principal components (eigenvalue > 1), which together represented 77.1% of the variance (PC1: 54.5%; PC2: 22.6%). For PC1, the main factors (loadings ≥ |0.7|) were bag characterization (terephthalate group and other ester groups), earthworm response (juveniles) and degradation in SCGs and SS, whilst for PC2, bag characterization, in particular the use of additives (talc and the phthalate sum), was the most important factor. Regarding the bag characterization, the presence of the terephthalate group was correlated with degradation in SS after 120 days. On the other hand, the same factor, together with the other esters and the sum of phthalates, was negatively correlated with the reproduction of E. andrei, as was the plastic concentration. In addition, the degradation results at all timepoints in the different vermicomposting systems were highly correlated (Figure 5, Table S1).
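The multivariate step described in the Statistical Analysis section, and summarized above, can be outlined with standard tools. The sketch below (Python with scikit-learn, using placeholder variable names and random data rather than the study's dataset) applies the same retention rules, keeping components with eigenvalue > 1 and flagging variables with loadings ≥ |0.7| as main factors; the original analyses were run in SigmaPlot, so this is only an illustrative re-creation of the logic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder matrix: rows = observations (bag x treatment), columns = variables such as
# terephthalate/ester presence, additive content, degradation at each timepoint and
# earthworm reproduction endpoints. None of these numbers come from the study.
X = rng.normal(size=(16, 8))
variables = [f"var_{i}" for i in range(X.shape[1])]

Xs = StandardScaler().fit_transform(X)   # standardize variables before PCA
pca = PCA().fit(Xs)

# Keep components with eigenvalue > 1, as described in the text.
eigenvalues = pca.explained_variance_
kept = [i for i, ev in enumerate(eigenvalues) if ev > 1]

# Loadings = eigenvectors scaled by sqrt(eigenvalue); treat |loading| >= 0.7 as a "main factor".
loadings = pca.components_.T * np.sqrt(eigenvalues)
for i in kept:
    main = [v for v, ld in zip(variables, loadings[:, i]) if abs(ld) >= 0.7]
    share = 100 * pca.explained_variance_ratio_[i]
    print(f"PC{i + 1}: {share:.1f}% of variance; main factors: {main}")
```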
Discussion The aims of the present study were to examine whether the composition of compostable plastic bags and the composting substrate determine how the bags degrade and to assess the potential toxicity of plastic and its degradation products to soil organisms.When SS or SCGs were used as vermicomposting substrate, the rate of degradation of the plastic bag samples after 120 days (measured as weight loss) reached approximately 20%, with the highest value being observed for 073 in SCGs (25%).However, one of the plastics (070) was not significantly degraded even after 120 days (Figure 1).This could be attributed to the composition of the plastic.According to the manufacturer, bag 070 is made of the biodegradable polymer PHA, but the FTIR analysis also detected talc as a major component.None of the other bags tested in this study contained talc (Table 2).Mineral talc is used in plastics as a functional filler to improve the rigidity, impact strength, flexural modulus and thermal stability of the final product [43].Biodegradable polymers in particular need additives to improve their mechanical properties, and talc is frequently added, contributing to lower rates of degradation [44,45]. On the other hand, the highest rate of degradation was observed in the plastic containing potato starch, PBAT (according to the manufacturer), with additional ester groups (determined by FTIR), which was highly correlated in the PCA.The addition of potato starch to biodegradable polymers such as PBAT has been shown to increase the degradation rate [46], and the degree of degradability increases as the percentage of starch increases [47,48]. Weight loss has been used as an endpoint in previous studies intended to assess degradability of biopolymers in aquatic (e.g., [49][50][51]) and terrestrial [52] environmental compartments.Another study examining the presence and abundance of MPs in vermicomposted SS reported an increase in MPs in a similar range to the weight loss values obtained in the present study (approximately 20%) [53]. Furthermore, few differences between the degradation rates according to the substrates employed were observed, despite clear differences in their physical-chemical characteri-zation, namely, in in electrical conductivity and more so in the percentage of organic content (much higher in SCGs).This indicates that the impact of earthworm activity, given its high density (6000 individuals per m 2 ) in a continuous vermicomposting system can override the influence of the used substrate.A significant level of degradation was observed after 15 days in SS spiked with bags 072 and 073, while in SCGs, significant differences were only observed in later stages.As both materials are aliphatic-aromatic co-polyesters, carboxylesterase activity may differ significantly in each substrate in this early stage (15 days).Previous studies have shown that carboxylesterase activity in earthworms (Eisenia fetida) is enhanced by the presence of plastics [35] and can also alter the microbial communities of organic waste [39,54].This is important, given that the role of bacteria in the degradation of MPs in different substrates has become the focus of several studies [55].Indeed, the incubation of different types of MPs in WWTP effluents has been shown to enhance the selection of some bacterial strains [56]. Regarding the potential toxicity of degradation products, no dose-response effects were observed for L. 
sativum after exposure to <250 µm MPs (Figures 2 and 3).In contrast, polycarbonate and polystyrene MPs inhibited seedling germination, with a particular effect on root and seedling length after 7 days under similar conditions [26,27].In addition, significant effects have also been observed in seed germination tests performed in soil spiked with PET [28,57].However, exposure to PLA did not induce changes in seed germination but affected root growth [29,58]; short-term exposure (3 days) to leachates from plastic bags only induced changes in morphology but did not affect germination [59].This corroborates our observations at lower concentrations of leachates for 073 of a decrease in root growth but not in seedling emergence.On the other hand, no effects were observed when exposing poly(3-caprolactone) with adipate-modified starch in rice plantlets grown in a soil microcosm over 14 days and for L. sativum exposed to PLA-based (similar to 072) plastics in a soil mesocosm system for 28 days [14,60].The different levels of response in the aforementioned studies suggest that the type of exposure matrix (leachate vs. powder) has a relevant role in the toxicity of plastics due to the release and availability of additives used in their manufacturing. Regarding the response of E. andrei exposed to plastic bag fragments, no effects on survival were observed (Figure 4A).This was expected given the demonstrated resilience of earthworms in plastic-polluted environments, e.g., including SS [61], and the low mortality observed in standard toxicity tests in soil [62].However, reproduction performance (measured as the number of juveniles and cocoons produced after 56 days) was affected by plastic concentration, as revealed by the PCA (Figures 4B,C and 5 and Table S2).While the number of juveniles was significantly affected at the highest concentration tested (10 g/kg, equivalent to 1% weight/weight), this was only true for bags 070 and 072 at 2 g/kg (0.2% w/w).This indicates that the use of additives, such as PHA or talc (in 070), as well as other esters (in 070 and 072) or phthalates (070 and 072-see Table 1) may affect the reproductive process [63].The mechanisms underlying plastic toxicity remain unknown, given the uncertainty regarding the source of toxicity (particle size, leaching of adjuvants and vector for other contaminants) [64]. Interesting enough is the fact that exposure to 072, which is Mater-Bi and starch, induced effects on reproduction, while another study did not show significant differences in this parameter when using soil in which Mater-Bi-based plastics were already biodegraded [65].Differences in using "fresh" particles versus degraded plastics may be key to explaining the observed toxicity. Nonetheless, it appears that earthworms can ingest plastic particles [66], thus potentially affecting their internal functioning, e.g., through lesions or induction of stress at the cellular level [24,67], the latter as a result of the release of additives (phthalates) or even other plasticizers, such as Bisphenol A [68,69], and potentially affecting their reproductive capacity [70,71].Earthworms may produce non-viable cocoons in response to exposure to plastics or their chemical additives, not covalently bond to the polymer chains.This would explain why the number of cocoons produced by E. 
andrei exposed to bag 070 was not significantly lower than in the control worms, even though fewer were produced.Previous studies have shown that PLA-based plastics do not induce mortality but can significantly decrease the number of juveniles produced [23].On the other hand, the lower toxicity to E. andrei of starch-and PBAT-based bags is consistent with the findings of a study in which no significant effects were observed on survival and growth of Lumbricus terrestris exposed to PBAT microplastics [72]. The least degradable bag (070) was also considered one of the most toxic to earthworms, although seedling emergence in L. sativum was not affected.Given that it was shown that these biodegradable plastics contain phthalates and other substances as additives, it could be hypothesized that their low degradation rate can lead to a small but continuous release of more available hazardous substances over time, thus having a larger effect on longterm reproduction. This indicates that the production of truly biodegradable plastics demands that not only the polymeric matrix (e.g., PLA) but also major chemical additives (talc, other ester and phthalates) be susceptible to microbial and enzymatic degradation in a safe manner. Conclusions This study addressed the lack of information regarding how the composition of compostable plastic bags affects the degradation of plastic under vermicomposting conditions and the potential toxicity of the fragmentation and degradation products to soil organisms.It was observed that additives, more than the biodegradable polymeric matrix, can modulate the degradation rate of compostable bags in vermicomposting systems. As for the potential impact of the degradation of compostable bags on soil biota, it should be highlighted that negative effects were observable even at lower concentrations when organisms were exposed to plastics with a low degradation rate.As the low degradation rate should indicate a low release rate of additives (with or without toxic potential) to the soil system, this represents an issue that should attended to.Although the biological mechanisms underlying the effects on E. andrei reproduction in response to plastic exposure remain unclear, a closer look at the additive composition of compostable bags could be a way to explain and explore the benefits and mitigate the hazardous potential of these products to soil systems. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/microplastics3020020/s1, Figure S1: Representation of the degradation of the 4 types of plastic films, represented as % weight loss, in two vermicomposting systems, one containing spent coffee grounds and another containing sewage sludge, Figure S2: Representation of the Relative Seed Germination (RSG) in L. 
sativum after exposure to powdered microplastics (<250 µm) and leachates of 4 types of plastic bags for 7 days. Values are expressed as the number of germinated seeds relative to the control after 7 days (mean ± standard error). Table S1: Descriptors of the principal component analysis model (component loadings, explained variance and component fitted correlation); component loadings and correlations > |0.7| marked in bold. Table S2: Descriptors of the three-way ANOVA performed on the plastic weight loss in vermicomposting systems (DF: degrees of freedom; SS: sum of squares; MS: mean of squares); p < 0.05 values highlighted in bold. Table S3: Descriptors of the two-way ANOVAs performed on the plastic weight loss in vermicomposting systems for each plastic type (069, 070, 072, 073) (DF: degrees of freedom; SS: sum of squares; MS: mean of squares); p < 0.05 values highlighted in bold. Table S4: Descriptors of the one-way ANOVAs performed on the Lepidium sativum germination test parameters (relative seed germination, relative root growth, relative shoot growth, root-to-shoot ratio, germination index) after exposure to plastic bags as powder and as leachate, for each plastic type (069, 070, 072, 073); p < 0.05 values highlighted in bold. Table S5: Descriptors of the one-way ANOVAs performed on the Eisenia andrei survival and reproduction test parameters (number of surviving adults, number of juveniles and number of cocoons) after exposure to plastic bags as powder and as leachate, for each plastic type (069, 070, 072, 073); p < 0.05 values highlighted in bold.

Figure 1. (A) Degradation of the 4 types of plastic films in vermicomposting systems, represented as % weight loss. Significant differences compared with the initial timepoint (T = 0) are indicated by *, while differences between vermicomposting systems are identified by #. (B) Visual appearance of sheets from each of the 4 types of plastics sampled after 120 days in a vermicomposting system with sewage sludge, after the washing and drying steps, from left to right: 069, 070, 072 and 073.

Figure 2. Representation of relative root growth (RRG) in L. sativum after exposure for 7 days to powdered microplastics (<250 µm) and to leachates from 4 types of plastic bags. Values are expressed as the ratio between the mean root length of seedlings growing under test conditions and the mean root length of control seedlings after 7 days (mean ± standard error). Significant differences relative to the control (p < 0.05) are indicated by *.

Figure 3. Representation of the germination index (GI) of L. sativum after 7 days of exposure to powdered microplastics (<250 µm) and to leachates from 4 types of plastic bags. Values are expressed as the germination index, i.e., relative seed germination (RSG) multiplied by relative root growth (RRG), after 7 days (mean ± standard error). Significant differences relative to the control (p < 0.05) are indicated by *.

Figure 4. (A) E. andrei survival, represented by the number of adults after 28 days, (B) E. andrei reproduction, represented by the number of juveniles after 56 days, and (C) the number of cocoons after 56 days of exposure to microplastics (<2000 µm) from 4 types of plastic bags. Significant differences relative to the control (p < 0.05) are indicated by *.

Figure 5.
Representation of the component scores and loadings of the two principal component axes (eigenvalue > 1) obtained from the principal component analysis of E. andrei reproduction data, exposure concentration, plastic composition (according to FTIR-ATR and GC-MS analyses) and degradation data at 30, 60 and 120 days.

Table 2. Commercial plastic bag identification (ID) codes, physical appearance and composition provided by the supplier and checked by FTIR-ATR analysis, with phthalate concentrations complemented by GC-MS analysis.
Observation of an electrically tunable band gap in trilayer graphene A striking feature of bilayer graphene is the induction of a significant band gap in the electronic states by the application of a perpendicular electric field. Thicker graphene layers are also highly attractive materials. The ability to produce a band gap in these systems is of great fundamental and practical interest. Both experimental and theoretical investigations of graphene trilayers with the typical ABA layer stacking have, however, revealed the lack of any appreciable induced gap. Here we contrast this behavior with that exhibited by graphene trilayers with ABC crystallographic stacking. The symmetry of this structure is similar to that of AB stacked graphene bilayers and, as shown by infrared conductivity measurements, permits a large band gap to be formed by an applied electric field. Our results demonstrate the critical and hitherto neglected role of the crystallographic stacking sequence on the induction of a band gap in few-layer graphene. Producing a controlled and tunable band gap in graphene is a topic of central importance 1-6, 14,[21][22][23][24][25] . In addition to the intrinsic interest of altering the fundamental electronic properties of materials, the availability of an adjustable band gap opens up the possibility of a much wider range of applications for graphene in electronics and photonics. Both single-and few-layer graphene in their unperturbed state lack a band gap 13,16 . However, few-layer graphene materials under the application of a symmetry-lowering perpendicular electric field may exhibit an induced gap 1,2,4-6, 14,[21][22][23][24] . In this regard, trilayer graphene is an attractive material system. Unlike bilayer graphene, however, trilayers, which typically exhibit Bernal (ABA) stacking order 19 and the associated mirror symmetry (figure 1a), have been shown both theoretically 14,21,22 and experimentally 10 not to support the induction of a significant band gap when subjected to a perpendicular electric field. As discussed below, this behavior follows from the mirror symmetry of the unperturbed ABA trilayer 14 . Recent research 26 has, however, reported the existence of a new type of trilayer graphene, one with ABC (rhombohedral) stacking order between the graphene sheets 13,14,18,24 (figure 1b). This crystal structure, like that of the bilayer possesses inversion symmetry, but lacks mirror symmetry (figure 1b). The electronic structure of the ABC trilayer 16,24 is accordingly more similar to that of the ABstacked bilayer graphene. In particular, the undoped ABC trilayer has only two-fold degeneracy 16 at the Fermi energy, like the graphene bilayer, rather than the four-fold degeneracy found in the ABA trilayer 16,19 . The two-fold degeneracy in the ABC trilayer band structure can be readily lifted by imposing different potentials on the top and bottom graphene layers by an applied electric field, which leads to the opening of a band gap 14,18,23,24 . While theory has predicted the induction of a large band gap for ABC trilayer graphene, experimental confirmation has been lacking. In this paper, we report an experimental and theoretical study of the electronic response of trilayer graphene, both of ABA and ABC stacking order, to perpendicular electric fields as strong as ~0.2 V/nm. Our results show direct spectroscopic signatures of the induction of a tunable band gap of as much as ~120 meV in ABC trilayer graphene. 
Such a band gap is not observable in ABA trilayers under the same electric field. We analyze these results by considering the implications of the different crystal structure and interlayer coupling in ABA and ABC stacked trilayers. We investigated graphene trilayer samples exfoliated from kish graphite on SiO 2 /Si substrates. The sample thickness and stacking order were first determined by infrared 8,9,27 and Raman 26 spectroscopy. In our measurements, we made use of an electrolyte top gate 4,28 (figure 2a) to induce high doping densities and electric fields in the samples. The resultant change of the band structure is probed by infrared conductivity measurements 4,8,9,27 (see Methods). We have measured IR sheet conductivity σ(ħω) of ABA and ABC trilayer graphene samples at different gate voltages V g (figure 2). At the charge neutrality point (V g = V CN ), the ABC spectrum shows a single absorption peak at ħω = 0.35 eV (figure 2b), and the ABA spectrum exhibits two peaks at 0.52 and 0.585 eV (figure 2f). These transitions reflect the distinct nature of the interlayer interactions and low-energy band structure for the two types of crystal structures (figure 2e,i). The energies of the absorption peaks in ABC and ABA trilayer correspond approximately to γ 1 and √2γ 1 , respectively, where γ 1~0 .37 eV is the nearest-neighbor interlayer coupling strength. The factor of √2 arises from the mirror symmetry in ABA trilayer 8, 16 , where the atoms in the middle layer are coupled symmetrically with atoms in both the bottom and top layers (figure 1a,c). We note that the two slightly different transition energies of 0.52 and 0.585 eV in ABA trilayer correspond, respectively, to hole and electron transitions 29 . (See Supplementary Information for more detailed analysis of the electron-hole asymmetry in ABA trilayer). As we increase the gate bias for the ABC trilayer, the main peak splits into two distinct features (P1 and P2 in figure 2b) that shift in opposite directions and broaden. This behavior is a clear signature of the induction of a band gap. Corresponding effects are also observed when a negative gate voltage is applied to produce hole doping (as described in the Supplementary Information). Figure 2e shows the evolution of the electronic structure of ABC trilayer graphene under an applied electric field according to a tight-binding (TB) calculation that includes the dominant intralayer γ 0 and interlayer γ 1 couplings. The unperturbed ABC trilayer (green line) has three valence and conduction bands near the K-point in the Brillouin zone. The two low-energy bands touch one another at the K-point, while the other bands are separated by γ 1~3 70 meV. With the application of a strong electric field, a gap develops between the low-energy valence band and conduction band (red line). The observed absorption peaks P1 and P2 are readily understood as arising from the transitions indicated as 1 and 2 in the modified band structure. The difference between P1 and P2 hence reflects the size of the band gap, which reaches ~120 meV at the largest applied gate voltage of 1.2V. For the ABA trilayer, as we increase the gate bias, the amplitude of the transition at 0.585 eV grows and the peak position red shifts, while the low-energy peak at 0.520 eV disappears (figure 2f). A similar effect was observed for negative gate biases and hole doping (see Supplementary Information). 
Apart from state-filling effects that reflect the increase of Fermi level under gating, there is no evidence of the emergence of additional peaks associated with the creation of a band gap. We estimate from the broadening of the absorption peak that an induced band gap, if it exists, should not exceed 30 meV at the highest gating voltage of 0.9 V. The above observations can be understood within a framework of the TB description, with a self-consistent scheme 22 to take into account the gate-induced electric field across the graphene layers (see Methods). For the ABC case, we consider only the dominant coupling terms of γ 0 and γ 1 . Carrying out TB calculation with a full set of coupling parameters did not yield significantly different predictions. To obtain the best fit to the data, we used a value for the interlayer coupling of γ 1 = 377 meV and assumed a capacitance of the electrolyte top gate of C g = 1.3 μF cm -2 . The predicted band gap, E g , and the energy gap at K-point, ΔE k , agree well with the band gap extracted from experiment (figure 3). For more detailed and direct comparison, we calculated the expected IR conductivity spectra by means of Kubo formula (figure 2c). These simulations clearly reproduce the main features of the experimental spectra (figure 2b). We also show for comparison the predicted conductivity under the neglect of any induced modification of the electronic structure or band gap opening (figure 2d), including only the effect of state filling on the optical transitions. The resulting behavior is completely inconsistent with experiment. In the case of the ABA trilayer structure, we include in the TB simulations parameters that describe the observed electron-hole asymmetry. In particular, we use δ = 37 meV as the average on-site energy difference between atomic sites A1, B2, A3 and B1, A2, B3 (figure 1c) and v 4 γ 4 /γ 0 = 0.05 to describe the next-nearest-neighbor interlayer coupling strength. We found reasonable agreement between the experiment and the simulated σ(ħω) spectra by the Kubo formula (figure 2g) with similar values for the other parameters in the model as for ABC stacking (γ 1 =371 meV and C g = 0.8 μFcm -2 ). For comparison, we also show the predicted σ(ħω) spectra under the neglect of any induced modification of the band structure (figure 2d). The resultant spectra are rather similar to the previous simulations (figure 2c). This conclusion is consistent with a predicted band structure for the ABA trilayer that changes little under the applied electric field (figure 2i). As this analysis shows, the induction of a gap in graphene trilayers is completely different for ABA-and ABC-stacked materials. For applied electric fields of similar strength, the ABC trilayer shows a sizable band gap of ~120 meV, while the ABA trilayer does not exhibit any signature of band-gap opening. The different behaviors can be understood within a TB model using just the dominant intra-and interlayer parameters of γ 0 and γ 1 (Figure 1c,d). At the K-point of the Brillouin zone, the effective intralayer coupling vanishes 14 . The states of ABC trilayer can hence be represented by two dimers with finite energies (±γ 1 ) and two monomers with zero energy (blue and yellow atoms in figure 1d, respectively). The application of a perpendicular electric field induces different potentials at the bottom and top layers. This lifts the degeneracy of the two corresponding monomer states (A1 and B3) and induces a band gap 14 . 
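The K-point picture just described for the ABC trilayer can be checked with a few lines of linear algebra. The sketch below builds the minimal six-band Hamiltonian at the K point, keeping only the γ1 dimer bonds and an assumed linear potential profile across the layers; it is a toy model for illustration, not the self-consistent calculation used for the fits, and the splitting it gives at the K point is only an upper bound on the true band gap, which forms slightly away from K. A companion sketch for the ABA case appears at the end of the Supplementary discussion.

```python
import numpy as np

gamma1 = 0.377      # eV, interlayer dimer coupling (value from the fit in the text)
delta_U = 0.12      # eV, assumed potential difference between top and bottom layers

def H_ABC_Kpoint(dU):
    """Six-band ABC-trilayer Hamiltonian at the K point.
    Basis: A1, B1, A2, B2, A3, B3; the intralayer coupling vanishes at K,
    so only the gamma1 dimer bonds B1-A2 and B2-A3 are kept."""
    U = [+dU / 2, 0.0, -dU / 2]           # simple linear potential profile (assumption)
    H = np.diag([U[0], U[0], U[1], U[1], U[2], U[2]])
    H[1, 2] = H[2, 1] = gamma1            # B1-A2 dimer
    H[3, 4] = H[4, 3] = gamma1            # B2-A3 dimer
    return H

for dU in (0.0, delta_U):
    E = np.linalg.eigvalsh(H_ABC_Kpoint(dU))
    print(f"dU = {dU:.2f} eV -> K-point levels (eV): {np.round(E, 3)}")
# With dU = 0 the two monomer states (A1, B3) sit at zero energy; a finite dU
# splits them by ~dU, i.e. a gap opens at the K point for ABC stacking.
```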
On the other hand, the electronic states at the K-point in the ABA trilayer system are represented by a trimer and three monomers (blue and yellow atoms in figure 1c, respectively). The trimer has a nonbonding state that forms a four-fold degenerate zero-energy level with the monomers. While a vertical electric field can lift the degeneracy of the two monomer states on the bottom and top layers (A1 and A3), it has no appreciable influence on the monomer state on the middle layer (B2) and the non-bonding trimer state. The presence of this remaining degeneracy precludes the induction of a band gap in ABA trilayers 14 . It is informative to compare our results with the behavior found in bilayer graphene under the influence of an applied electric field 3,4 . For the ABC-stacked trilayer, we have observed an induced band gap of 120 meV for an applied electric field of ~0.2 V/nm. Induction of a comparable gap in bilayer graphene is achieved for an applied field of 0.4 -0.5 V/nm 3,4 . The increased sensitivity to the applied field for the trilayer sample is expected because the size of the induced band gap for a given field increases with layer thickness. In particular, for the same (moderate) applied field, the band gap in the thicker ABC trilayer should be approximately twice as large as in the (AB) bilayer 18 , in agreement with our observations. In addition, for applications involving a material with tunable infrared properties, we found that the infrared peaks in ABC trilayers are much sharper than those observed in bilayers 4 because of the higher-order van Hove singularity in ABC trilayer band structure 16 . These better defined features favor applications of trilayers for applications requiring a tunable change in IR absorption. More generally, our work suggests that a tunable band gap can be induced in thicker graphene samples with ABC (rhombohedral) stacking order 9,16,17,20 , thus providing a still broader class of materials with a tunable band gap. Sample preparation and characterization Graphene trilayer samples were prepared by mechanical exfoliation of kish graphite (Toshiba) on silicon substrates coated with a 300-nm oxide layer. The sample thickness and stacking order are characterized by means of infrared spectroscopy 9,26 . These measurements were performed using the National Synchrotron Light Source at Brookhaven National Laboratory (U12IR beam line). For a more detailed analysis of the spatial variation of the sample, we relied on scanning Raman spectroscopy 26 . Using the signature of the stacking-order in the 2D Raman feature, we could visualize the spatial distribution of the ABA and ABC stacking domains in trilayer samples. We found ~60% of trilayer samples were of purely ABA stacking order, while the rest exhibited mixed ABA-ABC stacking orders. For our investigations, we chose for device fabrications those samples showing either pure ABA stacking or large (>200 µm 2 ) homogeneous domains of ABC stacking. Determination of the optical conductivity We measured the infrared transmission spectrum of the gated trilayers by normalizing the sample spectrum with that from the bare substrate. We then extracted the real part of the optical sheet conductivity (σ) in the spectral range of 0.2-1.0 eV from the transmission spectra by solving the optical problem for a thin film on the SiO 2 (300nm)/Si substrate. In our calculation, we neglect the interference from the sample/PEO interface and consider only the much stronger reflection from SiO 2 /Si interface. 
We also neglect the contribution of the imaginary part of the optical conductivity. The above simplifications are estimated to induce 10% errors in σ, mainly in the spectral range below 0.3 eV, and have negligible influence on the spectral positions of the peaks in σ. Device Fabrication Theoretical simulation of the optical conductivity We use the self-consistent approach of Avetisyan et al 22 to calculate the charge density at each layer of the graphene trilayers for different total charge density. In the calculation, we consider only the dominant γ 0 and γ 1 couplings in ABC trilayer TB Hamiltonian and γ 0 , γ 1 , γ 3, and δ in ABA trilayer TB Hamiltonian. We use the dielectric constant of bulk graphite (κ=2.4) in the calculation. With the self-consistent charge distribution, we simulate the optical conductivity by using the Kubo formula with a broadening parameter of 10 meV. (c) shows the predictions of TB model for the electronic structure described in the text, while (d) is a reference calculation in which the band structure is assumed to remain unaltered with gating and only the induced population changes are taken into account. In (b-d), the individual spectra are displaced by 2 units. The gate voltages V g and the condition of charge neutrality (V g =V CN = -0.65 V) are denoted on the spectra. e, The band structure of ABC trilayer graphene with (red) and without (green) the presence of a perpendicular electric field as calculated within the TB model described in the text. Transitions 1 and 2 are the strongest optical transitions near the K point for electron doping. f-h, Results corresponding to (b-d) for ABA-stacked trilayer graphene samples. The different spectra, from top to bottom, were obtained for gate voltages V g = 0.9, 0.7, 0. 5, 0.3, 0.1, -0.1, -0.3, -0.4, -0.5, -0.6, -0.65(CN) V and are displaced from one another by 0.4 units. i, Band structure of ABA trilayer graphene with (red) and without (green) the presence of a perpendicular electric field as calculated within the TB model described in the text. The arrow indicates the transition responsible for the main absorption peak in 0.5-0.6 eV. Figure S1 shows the conductivity σ(ħω) of ABC-stacked graphene trilayer at gate biases corresponding to induced hole doping. The results are qualitatively the similar to those for the electron doping described in the main text. For hole doping, we observe an enhancement and splitting of the main transition peak at ħω = 0.35 eV. Just as for electron doping, this behavior is the result of the induction of the band gap. The high-energy component of the split peaks is broadened and reduced in strength at the highest gate bias voltages. We attribute this behavior to the lateral inhomogeneity in the electric-field of the polymer electrolyte top gate rather than to the inherent material response of the trilayer graphene. Electron-hole asymmetry in ABA-stacked graphene trilayer In contrast to the result for the ABC-stacked trilayer structure, the ABA trilayer displays a clear difference in its optical conductivity σ(ħω) for applied fields corresponding to electron and hole doping ( Figure S2a). At the charge neutrality point V CN = -0.65 V, σ(ħω) exhibits two peaks, one at ħω = 0.520 eV and one at 0.585 eV. As we increase the bias V g (electron side), the amplitude of the higher-energy transition (0.585 eV) grows and the peak position red shifts, while the lower-energy transition (0.520 eV) subsides and disappears. 
As we decrease V g (hole side), the low-energy peak grows and the high-energy peak subsides and disappears. The evolution of the energy of the absorption peak with gate bias is summarized in figure S2d. The observed behavior in the ABA-stacked trilayer can be understood within the framework of a tightbinding (TB) model that includes not only the dominant intralayer (γ 0 ) and interlayer (γ 1 ) couplings, but also the on-site energy difference δ and the parameter v 4 =γ 4 /γ 0 describing the next-nearest neighbor interlayer coupling. The observed electron-hole asymmetry is attributed to the band structure asymmetry between the valence and conduction bands, as previously discussed for graphene bilayer [S1-3]. According to the TB analysis of ABA trilayer (Figure 1c in the paper), sites A1, B2, A3 and B1, A2, B3 possess different energies (δ) because of the different crystal field environment. This leads to different transition energies for the electron and hole sides. The next-nearest-neighbor interlayer coupling parameter v 4 produces different dispersion properties for the conduction and valence bands. This parameter is thus responsible for the different evolution of the electron and hole transition peaks with the gate voltage. The calculated band structure ( Figure S2e) clearly shows the role of coupling parameters δ and v 4 . For a quantitative understanding on the ABA trilayer data, we have simulated the ABA trilayer IR conductivity by means of the Kubo formula with a 20-meV phenomenological broadening parameter. The conductivity at different induced charge densities n is calculated in a self-consistent scheme that takes into account the different potentials at individual graphene layers resulted from uneven charge distribution in the sample (see Methods in the main text). We find that the main features of the experimental conductivity spectra and the dependence of absorption peak on the gate voltage are reproduced with γ 1 =371 meV, δ=37 meV and v 4 =0.05 ( Figure S2b and blue solid line in figure S2d). We have used a top-gate capacitance C = 0.8 μFcm -2 to obtain the best description of the data. For comparison, we show the predicted σ(ħω) spectra for a TB model of the same parameters, but under the neglect of any induced modification of the band structure ( Figure S2c). The calculated spectra are quite similar to those found when the change of the band structure is considered ( Figure S2b). The predicted absorption peaks are also in reasonable agreement with the experimental data (green dashed lines in Figure S2d). We conclude therefore that the gate-induced modification of band structure is not needed to explain our experimental data. The extracted on-site energy difference (δ = 37 meV) in ABA-stacked trilayer is much larger than the corresponding value in the AB-stacked bilayer (δ = 18-25 meV) [S1-3]. The value is also much larger than the limit of δ < 22 meV that we have estimated for the ABC stacked trilayer by considering the 44-meV width of the optical transition for V g = V CN (figure 2b in the main text). The distinct behavior in the two cases is related to the difference in the crystal structure. Since δ arises from the change of local crystal field by the interlayer coupling, a trimer with two interlayer bonds is expected to have a larger value of δ than a dimer with only one interlayer bond. ABA trilayers, which feature trimers, should therefore exhibit a larger electron-hole asymmetry than do bilayers or ABC trilayers, which only have dimers.
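As a companion to the ABC sketch above, the same minimal K-point construction for ABA stacking (omitting δ, v4 and the other small couplings discussed in this section) shows why a perpendicular field fails to open a gap there: two levels, the B2 monomer and the non-bonding state of the B1-A2-B3 trimer, remain pinned at zero energy. The parameter values and potential profile are assumptions for illustration only.

```python
import numpy as np

gamma1 = 0.371      # eV, interlayer coupling used for the ABA fit in the text
delta_U = 0.12      # eV, assumed top-bottom potential difference, as in the ABC sketch

def H_ABA_Kpoint(dU):
    """Six-band ABA-trilayer Hamiltonian at the K point (gamma0, gamma4 and delta omitted).
    Basis: A1, B1, A2, B2, A3, B3; the trimer is B1-A2-B3, the monomers are A1, B2, A3."""
    U = [+dU / 2, 0.0, -dU / 2]
    H = np.diag([U[0], U[0], U[1], U[1], U[2], U[2]])
    H[1, 2] = H[2, 1] = gamma1            # B1-A2 bond of the trimer
    H[2, 5] = H[5, 2] = gamma1            # A2-B3 bond of the trimer
    return H

for dU in (0.0, delta_U):
    E = np.linalg.eigvalsh(H_ABA_Kpoint(dU))
    print(f"dU = {dU:.2f} eV -> K-point levels (eV): {np.round(E, 3)}")
# Even at finite dU two levels remain at zero energy (the B2 monomer and the
# non-bonding trimer state), so no gap opens, in contrast to the ABC case.
```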
High concentration of iron ions contributes to ferroptosis-mediated testis injury In order to explore the effect of different concentrations of iron ions on ferroptosis in mouse testes, Kunming mice were randomly divided into control group (normal saline), low iron concentration group (25mg/kg), high iron concentration group (70mg/kg) and deferoxamine group (40mg/kg). The mice were injected continuously for 7 days and their body weight was measured. At the end of the experiment, the organ weight, sperm count, and malformation rate were measured. Testicular tissue, the pathological and ultrastructural changes in spermatogenic tubules were also observed by using hematoxylin eosin (HE) staining and transmission electron microscopy. The changes in transcription levels of related genes and serum biochemical indicators were measured in mouse testicular tissues. The results showed that higher iron concentration may inhibit the growth of mice, reduce the organ coe�cients of testis, heart, and liver, and increase the rate of sperm malformation and mortality. Supplementation of iron ion in high concentration can negatively affect the male reproductive system by reducing the sperm count and causing malformation and structural damage in seminiferous tubules and sperm cells. In addition, the iron concentration also affected the immune response and blood coagulation ability by in�uencing the red blood cells, white blood cells and platelets. The results showed that iron ions may affect mice testicular tissue and induce ferroptosis by altering the expression of ferroptosis related genes. Though, the degree of effect was different for the different concentrations of iron ions. The study also revealed the potential role of deferoxamine to inhibit the occurrence of ferroptosis. Though, the damages caused to the testis by deferoxamine supplementation suggests the need for further researches in this direction. Introduction Male infertility is a common disease, affecting around 70 million people worldwide.As per the World Health Organization, approximately 9% of couples worldwide have fertility problems and 50% of these couples are affected by male infertility.There are many reasons for male infertility ranging from gene mutation to lifestyle selection, to medical disease or drug treatment [1] .Its pathogenesis can be roughly divided into three categories: (1) secondary hypogonadism caused by hypothalamic-pituitary diseases; (2) obstruction of semen out ow (commonly referred to as obstructive azoospermia); (3) testicular dysfunction (possibly related to primary hypogonadism) [2] . Testis is an important part of male reproductive system.Spermatogenic cells in testis are responsible for the production of spermatozoa and maintaining their normal physiological structure and function. Research shows that electromagnetic radiation, high temperature, viruses, and environmental endocrine disruptors such as polycyclic aromatic hydrocarbon compound, heavy metals (lead) and metalloids (arsenic) can all cause damage to the male reproductive system and affect male fertility [3,4] .Similarly, iron ions can also affect the testicular tissue with dense vascular distribution by damage its structure and potential functions [5] . 
Ferroptosis is a form of regulatory cell death characterized by iron-dependent lipid peroxidation which was rst proposed in 2012 [6] .Morphologically, cells undergoing ferroptosis have typical characteristics of necrosis with small and deformed mitochondria, reduced crista, concentrated membranes, ruptured outer membranes, and no apoptotic characteristics [7,8] .Different from the apoptosis in immune silence, ferroptosis has immunogenicity as the affected cells release damage-related molecular patterns and alarmins that may amplify cell death and promote a series of responses associated with in ammation [9] .Endoplasmic reticulum-related oxidative stress, Golgi stress-related lipid peroxidation, and lysosomal dysfunction all contribute to the induction of ferroptosis.In the presence of the antioxidant glutathione (GSH), glutathione peroxidase 4 (GPX4) inhibits lipid peroxidation and protects the cells from ferroptosis.Since this discovery, the complex interaction between iron, cysteine and lipid metabolism has become an important regulatory factor for ferroptosis [9] .Some studies have shown that in order to promote growth, cancer cells have an increased iron requirement as compared with normal non-cancer cells [10] .In addition to cancer, ferroptosis has been found to be associated with degenerative diseases, ischemiareperfusion injuries, and cardiovascular diseases [11][12][13][14] .Similarly, male reproductive diseases are also found to be induced by ferroptosis and have received extensive attention.It has been reported that oxygen-glucose deprivation and reoxygenation injury of sertoli cells in testis can cause ferroptosis [15] .In addition, arsenite can cause ferroptosis in testicular cells by inducing oxidative stress [16] .It is implied that busulfan-induced ferroptosis might be mediated via inhibition of Nrf2-GPX4 (FPN1) signaling pathway, and highlight that targeting ferroptosis serves as a potential strategy for prevention of busulfan-induced damage and male infertility [17] .Furthermore, iron overload has previously been demonstrated to induce oxidative damage in the testes in a number of animal and human studies [18][19][20][21] .Iron toxicity results in morphological changes in the seminiferous tubules, epididymes and sertoli cells [22] . As an iron supplement, ferrous sulfate is often used to treat iron de ciency anemia.The effect of excessive iron supplementation on testis and ferroptosis has been rarely studied.Therefore, in this study, after intraperitoneal injection of different concentrations of ferrous sulfate into mice, the transcription levels of ferroptosis-related genes including GPX4, ferritin heavy chain polypeptide 1 (FTH1), Heme oxygenase 1 (HO-1) were detected by uorescent quantitative polymerase chain reaction (PCR) technology, pathological tissue damage, and ultrastructural changes were observed by hematoxylin eosin (HE) staining and transmission electron microscopy.Various biochemical indicators for the blood and sperm quality were detected at the same time to lay a theoretical foundation for in-depth research on ferroptosis. 
2 Experimental materials and methods Animals Twelve healthy male Kunming mice weighing 25-30 g were purchased from Experimental Animal Center of Zhengzhou University and free to eat and drink.Mice were randomly divided into control Group (normal saline, C Group), high iron concentration group (75 mg/kg, H Group, n = 3), low iron concentration group (25 mg/kg, L Group, n = 3) and deferoxamine group (40 mg/kg, F Group, n = 3).Continuous intraperitoneal injections were performed, and testicular tissue of mice were collected 7 days later. Animal experimental studies conducted in this manuscript were performed in accordance with the laid down ethical standards and approved by the Major science and technology special committee of Henan Province and Huanghuai University. Sample collection At the end of the experiment, the mice were anesthetized and blood samples were taken from the eyeballs before sacri cing them by cervical dislocation.Testicular tissue and epididymis were quickly removed by laparotomy and sperm were counted followed by motility test using a sperm analyzer.The left testis of each mouse was taken out and xed with 10% formalin and pathological sections were made to observe the histological changes.The other testis was rapidly frozen in liquid nitrogen and stored in an ultra-low temperature refrigerator at -80℃ for later use. Sperm count and vitality test The bilateral epididymis of mice was removed, weighed, and placed into a 1.5 ml centrifuge tube containing 1 ml warm phosphate-buffered saline.The epididymis was longitudinally cut and mixed with ophthalmic scissors to make sperm suspension.10 µl of the sperm suspension was collected by pipetting gun and examined on the cell counting plate using high-power microscope.Finally, the sperm were counted by a sperm counter and the sperm malformation rate was analyzed by taking photos. Determination of serum reduced GSH content Mouse blood was collected and stored in the refrigerator overnight at 4°C.Supernatant was taken out and centrifuged at 3000 rpm for 10 min at 4°C.After centrifugation, the supernatant was removed and transferred into a new eppendorf (EP) tube.According to the instructions of the GSH kit, the OD value of the sample was measured by microplate reader(Biotek, ELX800) at 405 nm and the GSH content in the sample was calculated. Pathological examination of testicular tissue Testicular tissue blocks were xed in formalin followed by dehydration with different concentrations of ethanol and subsequent treatment with xylene for transparency.The testicular tissue was moved into the melted para n and then taken out and put into the embedding frame lled with melted para n for embedding.Finally, it was cut into slices with a thickness of 5 µm.The tissue sections were baked in an oven at 60℃ for 60 min.Afterwards, the tissue sections were dewaxed and subjected to HE staining after a thorough rinse.The sections were stained with hematoxylin and eosin dye solution in turns followed by treatment with hydrochloric acid alcohol as a differentiating reagent.The stained tissue sections were then observed and photographed under the optical microscope. 
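To illustrate the GSH quantification step described above, the sketch below fits a linear standard curve to absorbance readings at 405 nm and converts sample absorbances back to concentrations. The standard series, OD values and dilution factor are invented placeholders; the actual calculation should follow the kit instructions.

```python
import numpy as np

# Hypothetical standard series (µmol/L) and their OD405 readings (placeholders).
std_conc = np.array([0, 5, 10, 20, 40, 80], dtype=float)
std_od   = np.array([0.05, 0.11, 0.18, 0.33, 0.62, 1.20])

slope, intercept = np.polyfit(std_conc, std_od, 1)   # linear standard curve: OD = a*c + b
r = np.corrcoef(std_conc, std_od)[0, 1]
print(f"standard curve: OD = {slope:.4f}*c + {intercept:.4f}  (r = {r:.3f})")

# Blank-corrected sample ODs; convert back to concentration with the fitted curve.
sample_od = np.array([0.27, 0.45, 0.52])
sample_conc = (sample_od - intercept) / slope
dilution = 2.0                                        # assumed dilution factor during preparation
print("GSH (µmol/L):", np.round(sample_conc * dilution, 2))
```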
Observation of ultrastructural changes of testicular tissue After anesthesia, the mice were sacri ced by cervical dislocation.After collecting the testicular tissue, a small piece of tissue was taken by dissecting scissors and placed on a clean cardboard.A drop of cooled 2.5% glutaraldehyde solution was dropped on the tissue for xation.After xation, the tissue was cut into small pieces of around 1 mm wide and 2 ~ 3 mm long by using a new, oil-free sharp blade and then they were nally cut into smaller pieces of 1 mm 3 .These small pieces were put into glutaraldehyde xation solution at 4℃ one by one, and then sent to Wuhan Servicebio Technology Co., Ltd. for ultrastructural observation. Detection of biochemical index The obtained blood samples were stored in anticoagulant tube at 4℃ overnight and then sent to Wuhan Servicebio Technology Co., Ltd. for detection of blood related indicators such as lymph content, hemoglobin (HGB), platelets (PLT), red blood cells (RBC) etc. Genes Primer Sequence(5'-3') 2.9 Statistical analysis SPSS 26.0 software was used for statistical analysis and Least Signi cant Difference (LSD) method was used for one-way analysis of variance.The data were expressed as mean ± standard error.GraphPad Prism 6.0 software was used for plotting, and the signi cant difference was expressed as extremely different (P < 0.01) and signi cantly different (P < 0.05). 3 Results and Analysis Effect of different concentrations of iron ions on body weight of mice During the experiment, mice were weighed once a day and the change in weight was recorded as shown in Fig. 1.The body weight of mice in the control group gradually increased with time and the body weight of mice in the low iron concentration group was slightly higher than that in the control group after 2 days.The weight growth rate of the high iron concentration group was observed to be higher than that of the other groups from day 1 to 4. However, it began to decrease signi cantly from 5th day onwards.The weight growth rate of the deferoxamine group was found to be the slowest and signi cantly lower than that of the other three groups.The results indicated that the low concentration of iron ions may increase the blood content and weight of other tissues and organs, while excessive supplementation of iron ions and iron de ciency have obvious inhibitory effects on the body weight of mice. Weight changes of testis and other organs in mice The mouse organs were weighed and the organ coe cients were calculated during dissection.As compared to the control group, the testis weight of the three treatment groups were observed to be lower.The low concentration of iron ions had a signi cant effect on the testis of mice which was signi cantly different as compared to the control group (P < 0.01), to the high iron ion group(P < 0.05) and the deferoxamine group (P < 0.05) (Fig. 2).In addition, the growth and development of heart and liver in the low iron concentration group were signi cantly inhibited (P < 0.05), while no signi cant effect were observed for kidney, spleen, and lung. Analysis of sperm count in mice After the epididymis was fully cut, the sperms of control group, low iron concentration group, high iron concentration group, and deferoxamine group were observed by sperm analyzer (Mailang, China).According to the results (Fig. 
3), the malformation rate and death rate of mouse sperm in the experimental group were signi cantly higher than those in the control group.The malformation rate and death rate of the high iron concentration group were extremely signi cantly higher than those in the low iron concentration group(P < 0.01) and the deferoxamine group (P < 0.01).These results showed that the excessive concentration of iron ions could signi cantly affect the spermatogenesis process and the male reproductive health. HE staining of testis HE staining was used to observe the effect of different concentrations of iron ions on the tissue injury of mice testis (Fig. 4).In the control group, the spermatogenic epithelium of the testis was observed to be thick, the seminiferous tubules were round or oval with complete and full structure, and the supporting cells and cells at all levels in the tubules were arranged regularly (Fig. 4A).In comparison to the control group, the seminiferous tubules of the deferoxamine group were arranged loosely, with a few malformations including fragmented or diminished interstitial cells, enlarged seminiferous tubule space with some seminiferous epithelial cells shed into the lumen (Fig. 4B).On the other hand, the spermatogenic epithelium in the low iron concentration group was thinner and the seminiferous tubules were deformed and arranged irregularly with the pathological damage similar to that in the deferoxamine group (Fig. 4C).In the high iron concentration group, the spermatogenic epithelium was observed even thinner than the other groups and a large amount of spermatogenic epithelium fell off into the lumen with deformed seminiferous tubules.The rate and degree of deformity for the seminiferous tubules were signi cantly higher and the arrangement was irregular in the high iron concentration group, while the number of sperm cells in the lumen was signi cantly less than that in the control group.There were almost no formed sperm cells and the interstitial cells were completely fragmented or disappeared.Most of the cell membranes were ruptured and the cell contents were discharged, resulting in serious pathological damage (Fig. 4D). Ultrastructural observation Transmission electron microscopy was used to observe the pathology of mice testis after different treatments.The results showed that there was no obvious damage to the cell structure in the control group and the structure of mitochondria (as shown by the red arrow) and nucleus (as shown by the asterisk) were normal (Figs.5A and 5a).However, in the high iron concentration and low iron concentration groups, there was obvious ferroptosis, cell membrane rupture and blebbing, mitochondrial atrophy, mitochondrial crista reduction or even disappearance (as shown by the red arrow), increased membrane density, and normal nuclear morphology (as shown by the asterisk) (Fig. 5B, Fig. 5b, Fig. 5C, Fig. 
5c).In the deferoxamine group, intracellular mitochondrial deformation higher electron density (as shown by the red arrow), rupture and disappearance of nuclear and cell membrane (as shown by the blue and yellow arrows), and swelling and dilation of endoplasmic reticulum (as shown by the white arrows) were observed through electron microscopy (Figs.5D and 5d).The lesions in the high iron concentration group were signi cantly higher than those in the other three groups, indicating that the iron concentration played a decisive role in the occurrence of ferroptosis and the degree of cell damage.The higher the concentration, the more serious the damage. Whole blood analysis of mice After collection from the eyeball, the blood was transferred to the anticoagulant tube and stored at 4℃.Subsequently, the blood was sent to Wuhan Servicebio Technology Co., Ltd. to detect the changes in the biochemical indicators of blood.The results showed that in comparison to the other three groups, the number of white blood cells (Fig. 6M), lymphocytes (Fig. 6A), monocytes (Fig. 6B) and neutrophils (Fig. 6C) were extremely higher in the high iron concentration group (P < 0.01).The percentage of monocytes and neutrophils in the three experimental groups were higher than those in the control group, while the percentage of lymphocytes was observed to be lower.The number of RBCs, HBG, average RBC, HGB content in RBC and average HGB concentration in RBC for the high iron concentration group were signi cantly (P < 0.05) or extremely (P < 0.01) higher than those in the control group.In addition, the PLT count and mean PLT volume for high iron concentration group were signi cantly lower than those of control group (P < 0.01). Effect of different concentrations of iron ions on serum reduced GSH content in mice After the standard curve for GSH determination was constructed according to the instructions, samples were analyzed and the results were calculated according to the standard curve.The results showed that the reduced GSH content for high iron concentration group and low iron concentration group were extremely (P < 0.01) and signi cantly (P < 0.05) lower than the control group.The reduced GSH content in the deferoxamine group was also higher than the control group, but the difference was not signi cant (P > 0.05). Transcription level of ferroptosis related genes As shown in Fig. 
8, the mRNA transcription level of GPX4 gene in the high iron concentration group was signi cantly lower than the control group (P < 0.01), while it was extremely higher in the deferoxamine group (P < 0.01).The transcription level of GPX4 gene in deferoxamine group was also extremely higher than the high iron concentration group (P < 0.01).As compared to the control group, the mRNA transcription level of divalent metal transporter 1 (DMT1) gene in the high iron concentration group was signi cantly higher (P < 0.01).After the mice were treated with deferoxamine mesylate, the expression of DMT1 gene was found to be decreased in the deferoxamine group and it was extremely lower than that in the high iron concentration group (P < 0.01).The transcription level of DMT1 in the high iron concentration group signi cantly increased (P < 0.01) in comparison to the low iron concentration group (P < 0.01).Similarly, the transcription level of HO-1 gene was observed to be increased in both the low and high iron concentration treatment groups and the difference was extremely signi cant as compared to the control group (P < 0.01).The expression level of HO-1 gene in high iron concentration group was signi cantly higher than the low iron concentration group (P < 0.05).After the mice were treated with deferoxamine, the expression level of HO-1 gene was still observed to be increased, but not signi cantly. Compared with the control group, the expression level of FTH1 in the deferoxamine group signi cantly increased (P < 0.01) and the transcription level of FTH1 in the groups treated with different concentrations of iron ions was signi cantly higher (P < 0.01).The transcription of FTH1 in the high iron concentration group was signi cantly lower than the low iron concentration group.These results showed that the high concentration of iron ions can signi cantly promote ferroptosis. Discussion Iron is a crucial micronutrient for almost all living things.It can combine with different ligands and conduct electron transfer which is very important for the optimal functioning of living things.However, the excess iron can react with H 2 O 2 in the form of ferrous ions in the redox cycle.This reaction that is called Fenton reaction generates hydroxyl radical (•OH) and increases malondialdehyde (MDA) content [23] causing harmful oxidative damage to DNA, protein and membrane lipid which in turn induce lipid peroxidation in the testicular tissue [24] .Iron ion intervention can lead to the vacuolization of mouse testicular tissue structure, the shedding of germ cells, the impairment of endocrine function, and the reduction of the number of mature sperm [25] .One study has shown that inhibition of ferroptosis by ferrostatin-1 or deferoxamine can partially alleviate mouse oligozoospermia induced by Busulfan [17] . However, the study on the effect of different iron concentration on mouse spermatogenesis and ferroptosis has not been reported.In this study, different concentrations of ferrous sulfate were injected intraperitoneally to study its effects on spermatogenesis and ferroptosis in mice which is not only of great signi cance for iron supplementation and its rational use as a drug.The study also lays a foundation for the mechanism research of iron ions affecting male reproductive health. 
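Returning briefly to the transcription-level comparisons reported above: the normalization scheme used for the fluorescent quantitative PCR is not detailed here, so the sketch below simply illustrates the widely used 2^-ΔΔCt calculation against a reference gene, with invented Ct values. It is an assumption about the workflow, not the authors' exact analysis.

```python
# Hypothetical mean Ct values for one target gene (e.g. GPX4) and a reference gene
# in control vs. high-iron testis samples; the numbers are placeholders only.
ct = {
    "control":   {"target": 24.8, "reference": 18.2},
    "high_iron": {"target": 27.1, "reference": 18.4},
}

def relative_expression(sample, calibrator="control"):
    """Fold change by the 2^-ddCt method (assumes ~100% primer efficiency)."""
    d_ct_sample = ct[sample]["target"] - ct[sample]["reference"]
    d_ct_calib = ct[calibrator]["target"] - ct[calibrator]["reference"]
    return 2.0 ** -(d_ct_sample - d_ct_calib)

print("GPX4 fold change (high iron vs control):",
      round(relative_expression("high_iron"), 2))
```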
The results of this study show that high concentration of iron ion can inhibit weight growth of mice and reduce the viscera coe cient of testis, heart and liver.The presence of excess iron can increase the sperm deformity and death rate, and can also cause the malformation of seminiferous tubules and decrease the number of sperm cells.The structural characterization of sperm cells revealed that the high iron supply can cause the rupture or disappearance of mesenchymal cells, cell membrane rupture and owing out of the cell contents.The results of ultrastructure observation showed that the high iron concentration and low iron concentration groups showed obvious ferroptosis due to the observation of cell membrane rupture and blebbing, mitochondria atrophy, mitochondrial crista reduction or even disappearance and increased membrane density in the cells of mice testicle tissue.These structural damages in cells were also observed for the group of mice injected with deferoxamine.These results show that too high or too low iron concentration is unfavorable for the growth and development of testicular tissue. The content of iron ions in the body has signi cant effect on the level of blood oxygen transport, as well as the RBCs and HBG [26] .Different kinds of white blood cells participate in the body's defense response in different ways [27][28][29] .The results showed that the HGB content in high-dose iron ion mice increased signi cantly, while the lymphocytes and white blood cells increased rather extremely along with the decrease in PLT volume.These ndings indicate that the increase of iron ion content could increase the e ciency of blood oxygen transport.However, it can also cause a strong immune response in the testicular tissue of mice and can reduce the coagulation function.Domenico Girelli et al. had a research on iron metabolism in COVID-19 infections,which showed that iron metabolism also has implications on the functionality of cells of the immune system.Once primed by the contact with antigen presenting cells, lymphocytes need iron to sustain the metabolic burst required for mounting an effective cellular and humoral response [30] .These results suggest that the lymphocytes of mice in the state of high iron ions may secrete a large number of antibodies which can inhibit or promote the occurrence of various diseases.This result is similar to that of Xiaofei Gao's research [31] .From this perspective, it may be more effective to study the relationship between ferroptosis and the disease caused by increase of lymphocytes. 
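The group comparisons discussed above (body weight, organ coefficients, blood indicators) were made by one-way ANOVA with the LSD method in SPSS; as a rough illustration of that workflow, the sketch below runs the same procedure on hypothetical hemoglobin values for the four groups. All numbers are placeholders for demonstration only.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical hemoglobin values (g/L), three mice per group; placeholders only.
groups = {
    "control":      [148, 152, 150],
    "low_iron":     [155, 158, 153],
    "high_iron":    [170, 168, 173],
    "deferoxamine": [142, 145, 140],
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# LSD post hoc: pairwise t-tests sharing the pooled within-group variance.
n_total = sum(len(v) for v in groups.values())
df_within = n_total - len(groups)
ss_within = sum(((np.asarray(v) - np.mean(v)) ** 2).sum() for v in groups.values())
mse = ss_within / df_within

for a, b in combinations(groups, 2):
    diff = np.mean(groups[a]) - np.mean(groups[b])
    se = np.sqrt(mse * (1 / len(groups[a]) + 1 / len(groups[b])))
    p = 2 * stats.t.sf(abs(diff) / se, df_within)
    print(f"{a:>12s} vs {b:<12s}: diff = {diff:+6.1f}, p = {p:.4f}")
```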
When cystine transport proteins are inhibited, intracellular GSH is depleted, eventually leading to the inactivation of glutathione peroxidase 4 (GPX4) and the accumulation of lipid peroxidation products, which can induce ferroptosis to a certain extent. In this process, Fe2+ ions increase lipid peroxidation in testicular microsomes in a concentration-dependent manner, and the role of Fe2+ is far greater than that of Fe3+ [32]. Studies have shown that GPX4-deficient Treg cells increase the production of mitochondrial superoxide and IL-1β, and that Gpx4 prevents lipid peroxidation and ferroptosis of Treg cells in the regulation of immune homeostasis and anti-tumor immunity, which provides a potential therapeutic strategy for improving cancer treatment [33,34]. In addition, FTH1, one of the cellular antioxidants, can inhibit the oxidation of divalent iron ions, thus protecting cells from oxidative damage caused by excessive iron. HO-1 is a key regulator of ferroptosis in cells. Magnesium isoglycyrrhizinate can up-regulate the expression of HO-1 and further promote the overexpression of transferrin, transferrin receptor, and FTH1, and can cause low expression of the iron efflux pump (iron membrane transporter), which may lead to intracellular iron deposition, accumulation of lipid peroxides, and ferroptosis in hepatic stellate cells [35]. In the process of ferroptosis, circulating iron binds to transferrin in the form of Fe3+ and then enters the cell through transferrin receptor 1 (TFR1). Fe3+ is reduced to Fe2+ by the ferrireductase six-transmembrane epithelial antigen of the prostate 3 (STEAP3). Finally, Fe2+ is released from the DMT1-mediated endosome into the labile iron pool in the cytoplasm. In this study, the GSH content in the high iron group decreased, with a significant reduction in the mRNA expression of the ferroptosis-related genes GPX4 and FTH1. On the other hand, the transcription levels of DMT1 and HO-1 increased significantly, which promoted the occurrence of testicular ferroptosis.

The above results indicate that a high concentration of iron ions plays a crucial role in the growth and development of testicular tissue and in spermatogenesis by regulating ferroptosis-related genes and the oxidative damage of mitochondria. Therefore, the amount of iron that people receive from their diet or other sources is crucial for male reproductive health and development. In this study, the ferroptosis-related indicators and the expression levels of ferroptosis regulatory genes in mouse testis were measured and analyzed to preliminarily investigate the possible involvement of iron ions in causing male infertility.

At present, research on ferroptosis is still at an initial stage, and many inducing factors and mechanisms of ferroptosis need to be studied further. Although ferroptosis can damage normal cells, it can also eliminate cells in a pathological state, thus maintaining the stability of the body. Further research in this direction may provide new ideas and research prospects for animal disease prevention and treatment.

Conclusion

Different concentrations of iron ions influence testicular tissue to different degrees. High concentrations of iron ions clearly promote ferroptosis, which in turn causes more damaging effects on the structure of testicular tissue. Although deferoxamine can inhibit the occurrence of ferroptosis, it can still cause pathological changes in the testicles.
Declarations

Conflicts of interest: The authors declare no conflicts of interest.

Figure 2: Testicular organ coefficient of mice.

Table 2: Organ coefficients of mice in different groups. * refers to a significant difference compared with the control group (P < 0.05); ** refers to an extremely significant difference compared with the control group (P < 0.01).
2024-05-22T06:17:49.540Z
2024-05-21T00:00:00.000
{ "year": 2024, "sha1": "2c2469805cd9a2b498777bfeba0b7c416cf20a62", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-3598329/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "ad28ae0c53a777238174300d1f8a118cebb9f38f", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
244954522
pes2o/s2orc
v3-fos-license
A Kerr/CFT Correspondence in Twistor Space Ingoing and outgoing principal null geodesics in Kerr spacetimes are characterized as part of parametrized families of strings in complex Kerr geometry and are associated with holomorphic curves in twistor space with help of the Kerr theorem. They are defined on 2-dimensional twistor manifolds, one for outgoing and one for ingoing principal null congruences, and are solutions of free twistor string models on these 2-dimensional twistor manifolds. Such a twistor string model implies a conformal field theory and, assuming the applicability of the Cardy formula, agreement with the Bekenstein-Hawking area law can be achieved depending on the effective central charge and temperature. A couple of (ambi)twistor string candidate models are examined. Introduction The use of complex world-lines (which can be interpreted as string worldsheets) in complex Kerr spacetimes has a long history, see [1,2] and references therein, and also the contribution of Penrose to the book [3] based on his paper [4]. In particular, Burinskii [2] devoted many articles to expose string-like structures in complex Kerr-Schild geometry. A completely different line of investigation was to find holographic duals of Kerr spacetimes, stimulated by the AdS/CFT correspondence and termed Kerr/CFT correspondence. It started with extreme Kerr black holes [5] and was generalized to non-extreme cases [6,7]. These results generated considerable interest. In this paper it is argued that twistor strings similar to ones of Burinskii can be regarded as special holomorphic solutions of free twistor string models on the 2-dimensional twistor manifolds [8,9] that correspond to the two principal null congruences of the Kerr spacetime. Such a twistor string model implies a 2-dimensional conformal field theory (CFT), which is the reason for calling it a Kerr/CFT correspondence, although it is not based on holographic duality, but still might spawn some interest. Dealing with a CFT one can ask whether the Cardy formula [10] applies and agrees with the Bekenstein-Hawking area law. Obviously this depends on the specifics of how the twistor string is gauged. We examine the 4-dimensional ambitwistor string of Geyer, Lipstein, and Mason [11] and an anomaly-free 4-dimensional twistor string found by the author [12,13]. Both models show agreement with Bekenstein-Hawking, but the ambitwistor string is related to non-minimal conformal gravity whereas the other twistor string has more potential to actually describe the Kerr spacetime. In section 2 we review the twistor string approach of Burinskii and show that strings similar to ones he considered are solutions of free twistor string models defined on a pair of holomorphic 2-dimensional twistor manifolds that belong to the principal null congruences through the Kerr theorem [8,9]. In section 3 we check whether the 4-dimensional ambitwistor model [11] agrees with Bekenstein-Hawking using the Cardy formula and come to an affirmative answer if the central charge is made zero with help of the additional current algebra. On the other hand we argue that this model is not a good candidate because it is inherently conformal and cannot describe the Kerr spacetime. In section 4 we examine the twistor string of [12,13]. It is shown that the model agrees with Bekenstein-Hawking like the previous ambitwistor string. On the other hand, because of the gauging of the translation group it has more potential to actually describe the Kerr spacetime. 
Further, the model leads to the correct tree-level gravitational scattering amplitudes exactly because of the presence of ghost fields that come from the gauging of the translations. The last section 5 contains the summary and discussion.

Twistor Strings in Kerr Spacetime

Any analytic shear-free null congruence can be characterized as a complex 2-parameter holomorphic family of α-surfaces (see the paragraph after the twistor form of the Kerr theorem, 7.4.14 in [8]). Applying this to the two principal null congruences in complexified Kerr spacetime leads to two 2-dimensional twistor manifolds endowed with a holomorphic structure, and similarly in dual twistor space for β-surfaces [8,9]. More specifically, the Kerr metric can be written in terms of a tetrad in null coordinates as in [15], where k is a principal null congruence with Y being a solution of equation (2.2) of the corresponding Kerr theorem, and the multiplicative coefficient h is given by (2.3). The two roots Y_1,2 of the equation are given in [15], with r a root of (2.4). Equation (2.2) can also be written in terms of a twistor Z^a = (µ^α̇, λ_α) as a quadratic equation in Z^a, Eq. (2.5). This form of the equation was used by Penrose in [8,3]. The solutions for Z^a, up to a multiplicative factor, determine the two 2-dimensional twistor manifolds. Equivalently, they are given by Y_1,2 together with the coordinates. The complex conjugate values of Y_1,2 determine the corresponding manifolds in the dual twistor space. Y_1,2 can be represented in a more suggestive manner in terms of Kerr coordinates (u, r, θ, φ), where r still satisfies (2.4) but the coordinate u here is different from the null coordinate u used above. In these coordinates Y_1,2 take a particularly simple form; they describe, for constant φ and θ, outgoing and ingoing principal null geodesics, with affine parameter ±r. In complex spacetime these coordinates become complex and independent. The principal null geodesics can be represented as families of twistor curves or strings that are holomorphic in w with Re(w) = ±r, parametrized by φ and θ and projected onto the space PN of null twistors (∼ real spacetime). Alternatively, bundling up some of these parameters leads to two families of closed twistor strings holomorphic in w, one of circular form with Re(w) = φ ∈ [−π, π] and θ constant, and one roughly perpendicular to it with Re(w) = θ ∈ (−π, π) and φ constant. The first string family is periodic and the latter antiperiodic, reminiscent of Ramond (R) and Neveu-Schwarz (NS) type strings in superstring theory. The presence of an NS sector is a reflection of the ring singularity in Kerr spacetime and its expansion to the extended Kerr spacetime with 2 sheets, one for r > 0 and one for r < 0 [16]. Burinskii [17,2] looked at similar (although not identical) types of strings, mainly in the context of the Kerr spinning particle. Of course, many more holomorphic twistor strings are possible on these 2-dimensional twistor manifolds. All these holomorphic strings, and the analogous ones in dual twistor space, can be viewed as special solutions of a twistor string model on the cross product of each of the two 2-dimensional twistor manifolds in twistor space with the corresponding one in dual twistor space, with the action (2.6), where Z denotes a twistor and W a dual twistor. The presence of R and NS type strings means that the twistors should be worldsheet spinors. This model defines a Virasoro algebra with a central charge that depends on which symmetries are gauged.
One can ask whether the Cardy formula [10] for this model agrees with the Bekenstein-Hawking area law. In the Cardy formula, T is the temperature and c_eff = c − 24∆_0 is the effective central charge, with c the central charge and ∆_0 the lowest eigenvalue of the L_0 Virasoro operator. Concerning the temperature, the question arises about its value. As the principal null geodesics generate the event horizon, we can take clues from the holographic Kerr/CFT correspondence [6,7], where the ingoing and outgoing null congruences are considered to provide the left and right temperatures, respectively. If we could show c = 12 in units of J and ∆_0 = 0, as in the holographic Kerr/CFT correspondence, then S_Cardy = 2πM r_+ = S_BH, and the Cardy formula would indeed agree with Bekenstein-Hawking. On the other hand, in the following sections we consider a couple of anomaly-free twistor string models with zero central charge, where the agreement between Cardy and Bekenstein-Hawking is no longer obvious but still achievable, as we will see. Another question is whether such a twistor string defined on the 2-dimensional twistor and dual twistor manifolds determines the Kerr spacetime, i.e. whether the mapping between spacetime and twistor string model is bidirectional. The action (2.6) by itself is clearly not sufficient because it can describe a spacetime only up to a conformal factor, i.e. such a spacetime will generally not satisfy the Einstein equations. The conformal scaling invariance needs to be broken in a specific manner to satisfy these equations.

Ambitwistor String

The action for the 4-dimensional ambitwistor string with no supersymmetry is given in [11]; there, the scaling symmetry GL(1) of the twistor string model in section 2 has been gauged, forcing the twistors and dual twistors to be ambitwistor pairs with zero GL(1) charge, and an action for a worldsheet current algebra has been added, chosen such that the model is anomaly-free with zero central charge. In contrast to the Berkovits-Witten string [18], it is assumed that the twistor fields are worldsheet spinors, as required in section 2. To get an idea about the spectrum one can look at the vertex operators [11,19], which imply that the lowest L_0 eigenvalue is ∆_0 = 1 in units of J, such that |c_eff| = 24·J. Therefore, the Cardy formula gives twice the value of the Bekenstein-Hawking entropy. But one can argue that the model, originally defined on the full twistor space [11], only has one Virasoro algebra when viewed as restricted to the 2-dimensional twistor manifolds, not a left and a right one, such that the temperature should be averaged, and we end up with agreement between Cardy and Bekenstein-Hawking. Unfortunately, this ambitwistor model cannot be relevant for the Kerr spacetime because the MHV tree-level gravitational scattering amplitudes describe conformal gravity, similar to the Berkovits-Witten model [19,18].

Alternate Anomaly-free Twistor String

The second model we consider has the action of [12,13,20], where for i = 1, 2 the Z_i are twistors, the W_i dual twistors, the Ψ_i fermionic bi-spinors, and the Θ_i fermionic dual bi-spinors, and where again the strings are considered to be worldsheet spinors, partitioning them into an NS sector and an R sector. One unusual aspect of this model is the presence of two twistors instead of just one.
Assuming that they represent two intersecting α-curves, due to the incidence relations µ_i^α̇ = x^{αα̇} λ_{iα}, they are like two rays intersecting the celestial sphere over x and are determined only up to a complex SU(2) symmetry operation. This is the reason for the gauged SU(2) symmetry between the two twistors in (4.1), with Lagrange multiplier field c [12]. Further, Isenberg & Yasskin [21] showed that the twistor space and its dual are in natural correspondence with teleparallel spacetimes, which typically are modeled by gauging the translation group, suggesting the gauging of the translations on the worldsheet as well, with Lagrange multiplier field b_ij [12]. Finally, fermionic spinors have been added to the model, with worldsheet supersymmetries based on the gauging of supertranslations, with Lagrange multipliers a_1ij and a_2ij [12]. After BRST quantization, this model is anomaly-free (the BRST charge is nilpotent) in a self-contained fashion, without the need for an additional worldsheet current algebra [12]. Analysis of the spectrum [13] shows that the gauging of the translation symmetry reduces the number of complex degrees of freedom for each twistor from three to two. Therefore, we can choose a gauge of the SU(2) symmetry that assigns each twistor to one of the two 2-dimensional twistor manifolds in a consistent manner, and the dual twistors in an analogous way. This choice will reveal itself as very convenient for going back to the original spacetime. As in the case of the ambitwistor string, ∆_0 = 1, and to make the Cardy formula agree with Bekenstein-Hawking it needs to be adjusted to half the value. This makes sense, considering that the single Virasoro algebra has two contributions, one from each of the two twistors, but gauged in such a way that they are to be considered a single contribution. Can this model actually represent the Kerr spacetime? It has been shown that it provides the expected tree-level gravitational scattering amplitudes in the NS sector [12,13], and that this happens precisely because the contractions between ghost and antighost fields arising from the gauging of the translation group ensure that only connected trees (or equivalently trees without loops) are allowed amongst the contractions in the worldsheet correlator of gravitational scattering [22]. Also, by being able to associate each twistor of the model with a particular one of the two 2-dimensional twistor manifolds, we can keep track of these manifolds separately. For Kerr spacetimes, twistors on these manifolds fulfill a homogeneous quadratic equation of the form (2.5), which can be evaluated and solved for Y_1,2 and allows one to calculate the conformal factor (2.3), up to the mass factor. So, indeed, we get back the original Kerr spacetime, up to the mass, which, of course, is nowhere to be found in twistor space. This might look like cheating, inserting the knowledge that the spacetime was Kerr to begin with. On the other hand, one should note that the whole construction of the two 2-dimensional twistor manifolds can be generalized to any Petrov type D spacetime with two gravitational shear-free principal null congruences, the main difference to Kerr being that Z^a Q_ab Z^b in (2.5) is replaced by a general holomorphic homogeneous function of Z^a.
Knowing the two twistor manifolds, Araneda [9] showed that, under certain assumptions that are satisfied for instance by Kerr-(A)dS and Kerr-Newman-(A)dS spacetimes, the original Petrov type D spacetime can be recovered via a conformally Kähler structure [23], but only up to a conformal factor which would need to be fixed to ensure the validity of the Einstein equations with or without cosmological constant. Whether this twistor string model, with the help of the gauged translation symmetry, determines the correct conformal factor in this more general case is an open issue. For more discussion on this topic, in particular on applying the Isenberg & Yasskin programme [21] for general spacetimes to this model, see [20].

Summary and Discussion

In this paper we presented a non-holographic correspondence between the Kerr spacetime and a CFT in twistor space together with its dual, based on twistor string models that are defined on a couple of holomorphic 2-dimensional twistor manifolds, one for each principal null congruence, and have holomorphic strings of principal null geodesics as classical solutions. We examined a couple of gauged twistor string models to see whether the Cardy formula leads to the same entropy as the Bekenstein-Hawking area law and whether they are candidates for representing the Kerr spacetime. Both models could be made to agree between Cardy and Bekenstein-Hawking, but the first model, the 4-dimensional ambitwistor string [11], relates to non-minimal conformal gravity, whereas the second twistor string model [12,13] provides the correct Einstein gravitational scattering amplitudes by gauging the translation symmetry on the worldsheet and actually describes the Kerr spacetime with the help of the congruence equations (2.5) and (2.3). In order for this model to go from a Kerr spacetime to more general Petrov type D spacetimes, the effect of the various worldsheet gauge symmetries, especially of the translation symmetry, needs to be investigated more thoroughly. If the second model is actually a viable theory, it makes some interesting predictions. There are quite a few exotic particles in the spectrum never seen before [13]. The spin-2 and spin-3/2 excitations in the R sector can easily become massive at low energies by picking up corresponding spin-0, spin-1, and spin-1/2 excitations, but in the NS sector there are no obvious lower-spin excitations available to make the graviton, gravitinos, and vector particles massive. This would mean that the NS sector only contains gravitationally interacting massless particles, which should be detectable in gravitational waves (theoretically, but experimentally with extreme difficulty), and that the spin-1/2 matter content is exclusively delegated to the R sector, providing an explanation of why semi-classical quantum field theory on curved spaces can be so successful [24] and why gravitation has such a different and weaker strength than the other interactions. And if the low-energy limit of the theory exists, it is a modified gravity model with both hot and cold dark matter. A lot of details would need to be worked out.
2021-12-09T02:15:57.971Z
2021-12-07T00:00:00.000
{ "year": 2021, "sha1": "8003688da327d0c727a94fd41c9927b9730bbfe8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8003688da327d0c727a94fd41c9927b9730bbfe8", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
246744589
pes2o/s2orc
v3-fos-license
Differentiable measures for speech spectral modeling

Autoregressive models for the envelope of speech power spectral densities (PSDs) are refined by the self-supervised spectral learning machine (S3LM) provided with differentiable spectral objective functions, including the Itakura-Saito divergence (ISD), the Kullback-Leibler divergence (KLD), the reverse KLD (RKLD) and the log spectral distortion (LSD), which yield more significant results. However, in order to assess the models more perceptually, a method is proposed based upon perturbations around perfect reconstruction analysis-synthesis configurations. In the cross-excitation analysis-synthesis assessment (CEASA) method, the residual signals generated by the analysis filters of the spectral models are injected as excitation into the synthesis filters derived from the same and other models, in order to be evaluated by the perceptual evaluation of speech quality (PESQ) and the Itakura divergence (ID), which are averaged over the set of models obtained using the objective functions mentioned above. The results show a superior performance when the RKLD is used as the loss function for the estimation of the spectral models, with the ISD ranking close behind. The focus of these divergences on the spectral peaks is argued to be the most important factor for this behavior. Specifically, using the PESQ scores obtained with CEASA, the RKLD loss is found to improve the performance by 1.0%, 4.0% and 19.3% with respect to the open-loop analysis, the KLD and the LSD models, respectively, while the corresponding improvements for the ISD loss are 0.1%, 3.0% and 18.2%, and the RKLD models exceed the ISD models by 1.0% on average. Even though the spectral measures alone are not able to unequivocally distinguish the better of the two, CEASA is shown to have enough sensitivity to distinguish their performances. In summary, the learning machine S3LM fits models for the short-term spectral envelope of speech and, for the evaluation of its performance under several differentiable loss functions, the CEASA assessment tool has been developed. In addition, CEASA may be used for other assessments connected with speech analysis and synthesis.

I. INTRODUCTION

Models for the envelope of speech spectra [1] are important for various tasks that require speech analysis, such as speech coding, speech synthesis, automatic speech recognition and speech enhancement. Autoregressive models for the speech power spectral density S(e^{jω}) may be obtained by applying the Wiener-Khinchin theorem to get the autocorrelation function [2], R(m) = (1/2π) ∫_{-π}^{π} S(e^{jω}) e^{jωm} dω (1), for m = 0, 1, ..., p, in order to determine an autoregressive model of order p. This model may be obtained by the autocorrelation method of linear prediction, proposed by F. Itakura [3], [4]. The model may be represented by linear prediction coefficients [5] or by other transformed parameters. For instance, the analysis may require formant estimation and tracking [2], [6]. Despite the successful wide use of autoregressive analysis in speech applications [7], it has some shortcomings, such as inaccuracies in modeling the discrete spectra arising in harmonic segments of speech [8], [9]. An interesting approach to harmonic spectral envelope estimation is true-envelope linear predictive coding (TE-LPC), which is an iterative cepstral technique based on a band-limited interpolation of the reference sub-sampled spectral envelope [10].
This work also proposes a residual spectral peak flatness measure for discrete spectra. The shortcomings of straightforward autoregressive analysis of harmonic speech segments, among other reasons, motivate the improvement of autoregressive spectral estimation by means of machine learning methods. For instance, Cui et al. [9] show that adaptive changes performed by a deep neural network (DNN) on the spectra to be analyzed improve the quality of supervised spectral models. Also, models for speech spectral envelopes play a significant role in speech synthesis, where a major problem is the oversmoothing of the reconstructed spectral envelopes [11]. In order to ameliorate this effect, restricted Boltzmann machines and deep belief networks have been proposed for modeling spectral envelopes [12]. It is also important to note that spectral envelope features can be efficiently detected by means of unsupervised methods [6]. Spectral envelopes may also be obtained by means of cepstral coefficients, as in an application of machine learning to speech emotion recognition [13]. In addition, mel frequency cepstral coefficients (MFCCs) are also reported to be used in emotional speech synthesis [14]. In the performance evaluation of diverse speech solutions or applications [15], speech quality assessment is widely adopted. For instance, in [16] a complex spectral mapping based on a DNN is proposed, and its results were evaluated using the algorithm described in ITU-T Rec. P.862 [17], [18], commonly known as PESQ. Another speech quality metric is the Virtual Speech Quality Objective Listener, known as ViSQOL [19], which uses spectral and temporal parameters to determine a listening quality objective (LQO) score on the 5-point quality scale. In connection with these applications, we propose an analysis-synthesis assessment method for the spectral models which is more suitable for evaluating their performance in action. In this context, this work intends to improve the open-loop analytical (OLA) model using a machine learning algorithm in conjunction with several differentiable loss functions that are applied to the reference and reconstructed power spectral densities. The differentiable losses implemented in the S3LM architecture and used in the experimental tests were the ISD [3], [20], the KLD, the reverse KLD, and the LSD. For each loss function, S3LM produces a distinctive spectral envelope model. The cross-excitation analysis-synthesis assessment (CEASA) was used to assess the fidelity of each spectral envelope model considered in the tests, and it made it possible to obtain different synthesized speech signals. In summary, each spectral model is used as two filters, namely an analysis filter and a synthesis filter, which are associated in cascade. Further, a reference speech signal is put into the analysis filter and the synthesized signal comes out of the synthesis filter. Finally, in order to perform a better quality analysis, the synthesized signal is compared with the reference signal using both the PESQ and the ID [20], [21] algorithms. This procedure is carried out for all combinations of analysis and synthesis filters, for all spectral model pairs generated with different losses for the spectrum of the same reference signal.
In addition, two different window sizes for the speech signal are used to obtain spectral models. Beyond allowing one to analyze the impact of window length on the spectral fitting measures for the spectral models, this also underlines the need for a nonspectral assessment tool such as CEASA. This independent assessment is necessary because CEASA tends to amplify the distinction between different models and also dismisses seeming static spectral fit improvements brought about by window length changes, which turn out to be illusory. Nowadays, different solutions based on both signal processing methods and machine learning algorithms are applied in several research areas [22], [23]. In the present work, we use signal processing techniques such as autoregressive models, prediction, and perfect reconstruction in analysis-synthesis systems, which are integrated with machine learning structures to come up with tied spectral weighting layers (TSWLs). These techniques are used both in the proposed learning machine, for the layers and the losses, and in the CEASA diagnostic tool, which includes analysis-synthesis techniques based on perfect reconstruction. It is noted that the CEASA assessment tool is intended to be used with rather high quality spectral models, since it should cause its analysis-synthesis system to operate around the perfect reconstruction condition. Further, under these conditions, PESQ-LQO is a trustworthy quality score, whose results are also corroborated by those given by the ViSQOL metric. Addressing the issues raised above, this article presents the proposed S3LM in Section II, the most important measures for speech spectral analysis in Section III, the spectral measures used as loss functions and the comparison of the spectral models they lead to in Section IV, and the description of the CEASA method along with the results of its application to the speech spectral models in Section V. Finally, the major results in this article are connected in the conclusion in Section VI.

II. THE SELF-SUPERVISED SPECTRAL LEARNING MACHINE

As previously stated, we propose a learning machine that inputs a spectrogram as a sequence of one-sided log PSDs with K samples up to the Nyquist frequency for an F_s = 16 kHz sampling rate. The network architecture of the proposed S3LM is composed of three tied spectral weighting layers (TSWLs), as shown in Fig. 1, with tied weight vector w_0 and tied bias vector b_0, both of the size K of the PSDs, which are extended over the spectrogram for each training epoch. The S3LM architecture performs spectral pre-processing using the TSWLs. The structure consists of artificial neurons applied to each spectral component. Rather than a fully connected network, the proposed model has a singly connected architecture with two hidden layers and the weights shared between the layers. This structure concentrates attention on each spectral bin for closer convergence within the same number of epochs while, at the same time, the strategy also brings about a reduction in the number of parameters and training time. We represent a single log PSD by its K samples, k = 0, 1, ..., K − 1, which forward propagate through the three weighting layers, where ϕ(·) is the rectified linear unit (ReLU) activation function, ◦ represents the Hadamard or elementwise product, and h_0, h_1, and h_2 are the outputs of each weighting layer. So the modified log PSD is h_2, resulting in the modified PSD obtained as P_2(k) = 10^{h_2(k)/10} (4), for k = 0, 1, ..., K − 1.
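Since the displayed layer equations did not survive the text extraction, the following PyTorch fragment is only a minimal sketch of this TSWL pre-processing stage, assuming that each of the three tied layers applies the shared weight and bias elementwise followed by the ReLU; the class and variable names are ours, not the paper's.

```python
import torch
import torch.nn as nn

class TiedSpectralWeighting(nn.Module):
    """Sketch of the S3LM pre-processing: three tied spectral weighting layers
    sharing a single weight vector w0 and bias vector b0, applied elementwise
    (Hadamard product) to a K-sample log PSD."""

    def __init__(self, K: int = 1025):
        super().__init__()
        self.w0 = nn.Parameter(torch.ones(K))          # weights initialized to ones
        self.b0 = nn.Parameter(1e-4 * torch.randn(K))  # biases ~ N(0, sigma = 1e-4)

    def forward(self, log_psd: torch.Tensor) -> torch.Tensor:
        h = log_psd
        for _ in range(3):                             # three layers with tied parameters
            h = torch.relu(self.w0 * h + self.b0)      # assumed per-layer update
        return h                                       # modified log PSD h2, in dB

# The modified PSD then follows Eq. (4):
model = TiedSpectralWeighting(K=1025)
log_psd = 10.0 * torch.log10(torch.rand(1025) + 1e-6)
h2 = model(log_psd)
P2 = 10.0 ** (h2 / 10.0)
```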
The modified autocorrelation function is then obtained by using the Wiener-Khinchin theorem [24] (Eq. (5)), for m = 0, 1, ..., p. From these autocorrelation coefficients a prediction analysis is performed, leading to the prediction coefficient vector a = (1, a_1, ..., a_p)^T. For a general prediction coefficient vector α, the square prediction error is the quadratic form ε = α^T R α, where R is the (p+1)×(p+1) Toeplitz reference augmented autocorrelation matrix whose entries are given by (5). For the special prediction coefficient vector α = a, the minimum prediction error achieved is E_p = a^T R a. After linear prediction analysis, the reconstructed PSD [4] is obtained by Eq. (9) for k = 0, 1, ..., K − 1 or, alternatively, by Eq. (10), which uses the autocorrelation function of the linear prediction vector; Eq. (9) is arguably simpler than Eq. (10) for gradient backpropagation. Then the log reconstructed PSD is obtained in dB for k = 0, 1, ..., K − 1, and either the PSD or the log PSD, h_2, may be used for computing the loss function according to the arguments of this function. The model is implemented using the deep learning framework PyTorch. The weights w_0 of S3LM are initialized to all ones, while its biases b_0 are initialized from samples of a zero-mean Gaussian distribution with standard deviation σ = 1 × 10^−4, and they are optimized by a stochastic gradient descent algorithm with learning rate ℓ_r = 1 × 10^−4. Good convergence has been observed after 80 epochs. The model was tested using the TIMIT Acoustic-Phonetic Continuous Speech Corpus [25]. TIMIT has 6300 utterances (10 sentences spoken by each of 630 speakers) stored in 16-bit wav files with a sampling rate of 16 kHz. The speakers are distributed across 8 different dialect regions. The utterances in dialect regions 1 through 4 of the test set of the TIMIT dataset were selected for modeling K-sample one-sided log PSDs by self-supervised methods with K = 1025. We used a workstation computer with 16 GB of RAM, an Intel Xeon E-2146G CPU at 3.50 GHz with 6 cores, and a single NVIDIA GPU card with 4 GB. Training and testing are simultaneous since S3LM is self-supervised.

III. MEASURES FOR SPECTRAL ANALYSIS

Based on square prediction errors, an important measure for comparing autoregressive models is the Itakura divergence (ID). For the reference autoregressive vector a and the estimated vector â, the Itakura divergence [20], [21] is the ratio of the square prediction errors of â and a on the reference autocorrelation. This definition, originally called the "likelihood ratio" by Itakura [21], makes it clear that the minimum possible value of the ID is unity, corresponding to the minimum square prediction error condition and therefore coinciding with the result of open-loop linear prediction analysis. On the other hand, it has no inherent upper bound, even though a practical value of 1.4 has been mentioned as the frontier beyond which synthetic speech quality is too low to be useful [26]. However, in order to be used as a loss function for comparing PSDs, the Itakura-Saito divergence (ISD) is more straightforward than the ID; it is defined [3], [20] as the integral over frequency, up to the Nyquist frequency f_Ny, of P(f)/Q(f) − ln(P(f)/Q(f)) − 1, where P(f) is the reference PSD and Q(f) is the distorted or reconstructed PSD. For sampled PSDs, the ISD is given by the corresponding sum over the K bins, Eq. (15). In Section II, where the proposed S3LM was described, the reference PSD is P_2 and the reconstructed PSD is P̂_2.
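A minimal numerical sketch of the chain just described — modified autocorrelation, prediction analysis, reconstructed PSD, and the sampled ISD between P_2 and P̂_2 — might look as follows; the use of numpy/scipy, the FFT-based handling of the one-sided PSD, and the averaging over bins in the ISD are our assumptions rather than the paper's actual implementation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_model_from_psd(psd_onesided, p=20):
    """Autocorrelation-method LP analysis of a one-sided PSD (K samples, DC to Nyquist).
    Returns the prediction vector a = [1, a1, ..., ap] and the minimum error Ep."""
    r = np.fft.irfft(psd_onesided)                         # Wiener-Khinchin: autocorrelation of the PSD
    a_tail = solve_toeplitz((r[:p], r[:p]), -r[1:p + 1])   # Yule-Walker normal equations
    a = np.concatenate(([1.0], a_tail))
    Ep = float(a @ r[:p + 1])                              # minimum square prediction error
    return a, Ep

def ar_psd(a, Ep, K):
    """Reconstructed autoregressive PSD, Ep / |A(e^{j w_k})|^2, on the same K bins."""
    A = np.fft.rfft(a, n=2 * (K - 1))
    return Ep / np.abs(A) ** 2

def itakura_saito(P, Q):
    """Sampled Itakura-Saito divergence between reference P and reconstructed Q."""
    ratio = P / Q
    return float(np.mean(ratio - np.log(ratio) - 1.0))
```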
A more general spectral distortion measure, not conceived for measuring autoregressive spectral fit in particular, is the log-spectral distortion (LSD), which is expressed in dB (Eq. (16)). Notwithstanding their different constitutions, it is interesting to observe that both the square error and the ISD are instances of Bregman divergences [27], a class which also includes the generalized Kullback-Leibler divergence (GKLD), Eq. (17). In machine learning, it is usual to employ probability density functions (PDFs) or probability mass functions (PMFs). First, we observe that PSDs are nonnegative and, while log PSDs may take on negative values, they may be raised to 0 dB by subtracting the minimum value from the whole log spectrum. Second, if we normalize the log PSDs so that they sum to unity, then the GKLD reduces to the KLD, the Kullback-Leibler divergence. The possibility of processing PSDs just like PDFs for KLD measures and modeling has already been pointed out in [28]. The KLD from PDF q to PDF p is defined by Eq. (18) as long as S_p ⊆ S_q, where S_p and S_q are, respectively, the supports of the D-variate PDFs p and q, avoiding the occurrence of infinities [14] at points where q(x) = 0 and p(x) > 0 in (18). For PMFs, the KLD from q to p is computed by Eq. (19), which represents the direct KLD as long as p is a data PMF and q is a latent variable PMF. By keeping these links while exchanging the positions of p and q, we obtain the reverse KLD (RKLD), Eq. (20).

IV. DIFFERENTIABLE LOSS COMPARISONS

The open-loop analytical (OLA) analysis based on the autocorrelation method is used as a baseline for assessing the refinements brought about by the learning methods. Its objective function is a square prediction error, which is a square distance in a polynomial space provided with a time-varying inner product defined by the short-term autocorrelation function [8]. The differentiable losses that have been applied to the reference and reconstructed power spectral densities (PSDs) are the Itakura-Saito divergence (ISD), the Kullback-Leibler divergence (KLD), the reverse KLD (RKLD) and the log spectral distortion (LSD). Most of these measures are also used for assessing the reconstructed PSDs, including in addition Jeffrey's divergence (JD) [20], which provides a balance between the KLD and the RKLD as a measurement tool. It is interesting to observe that it can be demonstrated that the minimization of the ISD with respect to the prediction coefficients is equivalent to the minimization of the square prediction error in polynomial space [4]. However, it may be argued that the path leading to the minimum may be different in an iterative approach. Differentiable signal processing methods have made it possible to perform the short-time Fourier transform (STFT) with variable hop size windows [29]. This research has led us to discover some interesting spectral fitting differences that depend on the STFT window length. All our PSDs have been obtained using sequences of 50% overlapping sine windows. The performance of the various methods for female speakers and 20 ms long windows is shown in Table 1, where quality improvements (QI) are positive for a result greater than the OLA baseline when the measure is a quality or similarity measure, and are also positive for a result smaller than the OLA baseline when it is a divergence or distortion measure.
More precisely, the quality improvement for measure M is computed from the relative change with respect to the OLA baseline (Eq. (21)), where the plus sign is selected if M is a quality measure while the minus sign is selected if M is a divergence measure, P_M is the PSD of the model obtained with M as the objective function, P_OLA is the PSD of the model obtained by the open-loop analysis, and P_ref is the reference power spectral density. The utterances in dialect regions 1 through 4 of the test set of the TIMIT speech corpus [25] were selected for modeling K-sample one-sided log PSDs by self-supervised methods with K = 1025. Using the self-supervised learning machine, several measures are used alternatively as the objective function, as shown in the leftmost column of Table 1 for female speakers and long windows; the measures appearing as headers of the next columns are alternative measures for the comparisons between the PSDs obtained and the corresponding reference PSDs. The same is shown for male talkers and long windows in Table 2, for female speakers and short windows in Table 3, and for male speakers and short windows in Table 4. However, short windows have been used only in a preliminary way for a couple of utterances. By observing the results in the abovementioned tables, it stands out that the ISD is the only objective function that can make the learning machine improve the quality under the ISD measure, which is arguably the most significant measure for speech PSD envelopes. The ISD objective function also brings about quality improvement that can be seen by the LSD measure. On the other hand, the KLD objective function, which is widely used in machine learning, can consistently improve quality as measured by both the KLD and the JD and, in most cases, its quality improvement is also seen by the LSD, particularly in Tables 1 and 2, but it fails in Table 4. The RKLD objective function may cause quality improvements detected by the KLD, the JD and the LSD measures in Table 1, even exceeding the quality improvement of the KLD as seen by the KLD itself in Tables 1 and 2, but it may also fail to show any quality improvement by any of those three measures, as happens in Table 3. A similar behavior is displayed by the LSD objective function, which is able to cause quality improvements detectable by the KLD, the JD and the LSD measures in Tables 1 and 2, but whose quality improvements fail to be seen by the KLD and the JD in Table 3. A final analysis of these apparent shortcomings of the RKLD should be postponed until a more complete performance assessment is disclosed in Section V. Further, the good performance of the RKLD is only partially offset by its not so good performance as measured by the ISD, even if it is still the best performing loss as seen by the ISD within the set of losses that also includes the KLD and the LSD. Finally, the LSD models behave in a rather contradictory manner, being the worst as measured by the ISD but beating the models obtained with the other losses by the greatest margin in several instances. As a final overall observation, absolute scores are seen to improve for short spectral estimation windows when compared with long windows, particularly when measured by the ISD. The improvement is also significant when measured by the LSD, except when the LSD is also used as the loss function for male speakers; in this case, when the LSD is used as the loss function, the KLD and the JD also fail to notice any improvement.
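For reference, the four losses of Section IV and the sign convention of the quality improvement can be sketched in PyTorch as below; the ε floor, the use of means rather than sums, and the normalization step turning spectra into probability masses are our assumptions (in particular, whether linear PSDs or log PSDs raised to 0 dB are normalized is left to the caller), so this is an illustration rather than the paper's implementation.

```python
import torch

EPS = 1e-10  # numerical floor; the value is an assumption

def isd_loss(P, Q):
    """Itakura-Saito divergence between reference PSD P and reconstructed PSD Q."""
    r = (P + EPS) / (Q + EPS)
    return torch.mean(r - torch.log(r) - 1.0)

def lsd_loss(P, Q):
    """Log-spectral distortion in dB (RMS of the log-PSD difference)."""
    d = 10.0 * torch.log10((P + EPS) / (Q + EPS))
    return torch.sqrt(torch.mean(d ** 2))

def _to_pmf(X):
    """Treat a nonnegative spectrum as a probability mass function."""
    return X / (X.sum() + EPS)

def kld_loss(P, Q):
    """Direct KLD from the model PMF q to the data PMF p."""
    p, q = _to_pmf(P), _to_pmf(Q)
    return torch.sum(p * torch.log((p + EPS) / (q + EPS)))

def rkld_loss(P, Q):
    """Reverse KLD: the weights come from the reconstructed (model) PMF q."""
    p, q = _to_pmf(P), _to_pmf(Q)
    return torch.sum(q * torch.log((q + EPS) / (p + EPS)))

def quality_improvement(m_model, m_ola, is_divergence=True):
    """Relative improvement over the OLA baseline, in percent, with the sign
    convention of Section IV; the exact normalization is an assumption."""
    sign = -1.0 if is_divergence else 1.0
    return 100.0 * sign * (m_model - m_ola) / abs(m_ola)
```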
V. ASSESSMENT RESULTS

In order to assess the fidelity of the spectral envelope models in more neutral conditions, the cross-excitation analysis-synthesis assessment (CEASA) was used, which is depicted in Fig. 2 for the simple case involving two models, where two prediction vectors a_1 and a_2 are input from S3LM or any other modeling system, for that matter. In turn, CEASA uses the prediction vectors to build the analysis filters A_1(z) and A_2(z), FIR filters whose order p is the order of the models, and the synthesis filters are obtained as H_1(z) = 1/A_1(z) and H_2(z) = 1/A_2(z). For a given speech signal s(n) and a spectral model, the speech signal is injected into the corresponding analysis filter, whose output is its residual signal, either e_1(n) or e_2(n), which is then injected into both synthesis filters. As a result, different synthesized signals are obtained, represented by s_11(n), s_12(n), s_21(n), and s_22(n). These synthesized signals, which provide a realization of their corresponding spectral models, are assessed by the PESQ algorithm, which provides a mean opinion score listening quality objective (MOS-LQO) measure [17], [18], and by the Itakura divergence (ID) [20], [21]. Both measures are represented by the block named Meas(·) depicted in Fig. 2. By using each spectral model in turn for the analysis filter, two sets of measures are obtained for each synthesis filter, and the mean value of each set is ascribed to the spectral model of the corresponding synthesis filter. The basis for the operation of the CEASA analysis-synthesis filter cascade is the perfect reconstruction condition, which prevails when both the analysis filter and the synthesis filter in the cascade connection are configured with the same prediction vector, so that the synthesized signal coincides with the input signal up to a time delay in the absence of numerical errors. After investigating the application of different window lengths in spectral modeling, the divergences were found to decrease for shorter windows, as reported in Section IV. So we have decided to test the modeling algorithms with longer 20 ms windows over dialect regions 1 through 4 of the test set of the TIMIT corpus [25] for female and male speakers, while shorter 7.25 ms windows have been tested only for a couple of speakers, because of their CEASA scores, to be reported below. In Table 5, longer windows are used for the spectral modeling of the utterances of female speakers, where the ISD displays a small but consistent performance advantage, which can also be checked for the case of shorter windows on female utterances in Table 8, as well as on male utterances in Tables 6 and 9 for longer and shorter windowing, respectively. In order to check the confidence of the results, we have also assessed them using the ViSQOL measure [19], whose scores and attendant quality improvements of the machine learning methods over OLA are reported in Table 7, which, upon further comparison and ranking of the methods, is consistent with Tables 5 and 6. However, if we keep to longer windows, the best performing loss is the RKLD, whether assessed by PESQ or by ID. This best performance within this set of losses is hinted at by a qualitative analysis of the defining equation of the RKLD (20) in comparison with the defining equation of the KLD (19), discussed below.
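Before returning to that comparison, the cross-excitation cascade itself admits a compact sketch; this is a non-authoritative illustration in which frame-wise processing, coefficient interpolation and overlap-add are omitted, and the function names are ours:

```python
import numpy as np
from scipy.signal import lfilter

def ceasa_synthesize(s, a_list):
    """Cross-excitation analysis-synthesis: each residual e_i = A_i(z) s is fed to
    every synthesis filter H_j(z) = 1/A_j(z), giving synthesized signals s_ij.
    Each a_i is a prediction vector [1, a1, ..., ap]."""
    residuals = [lfilter(a, [1.0], s) for a in a_list]      # analysis filters A_i(z)
    return {(i, j): lfilter([1.0], a_j, e_i)                # synthesis filters 1/A_j(z)
            for i, e_i in enumerate(residuals)
            for j, a_j in enumerate(a_list)}

def ceasa_score(synth, s, measure):
    """Average a quality measure over the analysis filters for each synthesis model j;
    for i == j the cascade operates at perfect reconstruction (up to numerics)."""
    n = max(j for _, j in synth) + 1
    return [np.mean([measure(s, synth[(i, j)]) for i in range(n)]) for j in range(n)]
```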
As the weighting coefficients of the RKLD are the reconstructed masses q(k), when the RKLD is used as the loss, q(k) should converge to small values in the regions where the data masses p(k) are rather small, since such regions would by themselves increase the argument of the log function unless q(k) converges to comparably small values. This leads q(k) to place most of its probability mass near the peaks of p(k), which is good behavior as valued by the ISD measure. An illustrated discussion of this convergence behavior under minimization of the RKLD may be found in [30], where the equations for the divergences are the same as the abovementioned ones but the labels RKLD and KLD are exchanged. Besides, as a matter of fact, shorter windows are found by CEASA to lead to lower performance than longer windows, contrary to what happens for pure spectral analysis in Section IV. This behavior is due to the dynamics of the synthesis filter in the assessment procedure. Further, it seems to indicate that shorter windows may be better for some spectral analysis tasks, but longer windows are recommended for synthesis. As a curious outcome, we may find it surprising that the LSD models, which performed very well for all the measures except the ISD, have ranked last in both the PESQ and the ID scores for long windows. This highlights the fact that spectral envelope models should be better at matching spectral peaks than overall spectral details, and this is captured more clearly by the CEASA assessment than by static spectral measures. It is noticeable, by comparing the scores in Tables 5 and 6, that the spectral envelope models for male speakers fit their references more closely than those for female speakers. While the modeling of spectral envelopes should not be affected by the local harmonic structure of the spectrum, this holds when the density of harmonics is high enough that the spectrum is approximately continuous. The latter is the condition for a lower-pitched speaker, which is usually the case for a male speaker, and this is consistent with the observation. Nonetheless, referring again to the two tables mentioned above, we notice that the performances of the loss functions are ranked in the same order irrespective of whether the speakers are female or male. In short, using the PESQ scores obtained with CEASA, the RKLD loss is found to improve the performance by 1.0%, 4.0% and 19.3% with respect to the open-loop analysis, the KLD and the LSD models, respectively, while the corresponding improvements for the ISD loss are 0.1%, 3.0% and 18.2%, and the RKLD models exceed the ISD models by 1.0% on average.

VI. CONCLUSION

Spectral envelope models for speech signals have relied for quite some time on linear prediction analysis. This work proposes a refinement to open-loop analytical (OLA) models by using machine learning algorithms provided with differentiable losses. Losses that have been proposed previously in autoregressive analysis are investigated for this task, as well as popular divergences used in machine learning. Since the results obtained by spectral measures are not conclusive at first as to the most suitable losses, a quality assessment method is proposed based on the fundamental perfect reconstruction criterion for analysis-synthesis cascaded systems.
Using the original speech signals and the analysis and synthesis filters from the spectral envelope models, all possible analysis-synthesis cascades are mounted in the proposed cross-excitation analysis-synthesis assessment (CEASA) method. For the whole set of signals, the reverse Kullback-Leibler divergence (RKLD) appears to be the loss that performs best according to the PESQ MOS-LQO scores and the Itakura divergence (ID) estimates. The Itakura-Saito divergence (ISD) ranks close behind in the CEASA assessment. As a by-product of these methods, shorter analysis windows have been found to lead to better spectral fitting, even though they are not the best for synthesis, as indicated by the CEASA assessment results. Future research should focus on the conception of loss functions more suitable to the task, such as perceptual losses properly adapted to the structure of the learning machine, as long as they are constrained to be differentiable with respect to the weights. Also, the measures of merit should be suitable for the specific tasks in an evolution of the CEASA assessment tool.
2022-02-11T16:21:12.559Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "605025cdd9899c70f587368299dd6155df960e07", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/9668973/09709279.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "e5dfdb3fd5c24f897a8c1465ebf05067f3ff2c53", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
119269949
pes2o/s2orc
v3-fos-license
Quantum correlations of Two-Qubit XXZ Heisenberg Chain with Dzyaloshinsky-Moriya interaction coupled to bath spin as non-Markovian environment

We consider the dynamics of the quantum correlations (entanglement and quantum discord) of two coupled spin qubits with Dzyaloshinsky-Moriya interaction, influenced by a local external magnetic field along the $z$-direction and coupled to bath spin-1/2 particles as an independent non-Markovian environment. We find that, with increasing $D_{z}$ and decreasing $J_{z}$, the values of entanglement and quantum discord increase for both antiferromagnetic and ferromagnetic materials. Note that this growth is larger for ferromagnetic materials. In addition, we observe that entanglement and quantum discord decrease with increasing temperature and increasing coupling constants between the reduced system and the bath. However, strong quantum correlations within the spins of the bath reduce decoherence effects. We discuss how the type of constituent material of the central spins can speed up quantum information processing and, as a result, we find that one can improve and control quantum information processing with a correct selection of the properties of the reduced system ($J$, $J_{z}$, $D_{z}$).

Introduction

In the actual dynamics of any real open quantum system we are faced with various types of interactions between the system and its surrounding environment. These interactions, which can lead to decoherence, will cause the system to make a transition from pure quantum states to mixed ones and change its quantum properties, especially the quantum correlations. Entanglement and quantum discord, which are two different faces of quantum correlations without a classical counterpart, have attracted much attention in recent years due to their notable roles in developing the idea of quantum computers and other quantum information devices. Since both of them have been exploited to perform quantum information tasks, the investigation of their decoherence dynamics is an important emerging field [1,2,3,4,5,6]. Much attention has now been paid to the quantum correlations in spin systems, such as the Ising model [7] and all kinds of Heisenberg XX, XXZ, and XYZ models [8,9,10,11]. Both the Ising and XXZ models can be supplemented with a magnetic term, the so-called Dzyaloshinskii-Moriya (DM) interaction. Such an antisymmetric exchange interaction arises from the spin-orbit coupling and has many important consequences. Recently, CHEN Yi-Xin and YIN Zhi [?] have performed an interesting investigation of thermal quantum correlations in an anisotropic XXZ model with DM interaction and have shown that quantum discord is more robust than entanglement concurrence against temperature T. They find that the characteristics of quantum discord are unusual in this system, which possibly offers a potential solution to enhance the entanglement of a system. Just like other quantum systems, spin systems are inevitably influenced by their environment, especially a spin environment. The coupling of spin systems with a spin bath often leads to strong non-Markovian behavior [], which is of great physical importance in solid-state quantum information processors, such as systems based on the nuclear spin of donors in semiconductors [12,13] or on the electron spin in quantum dots [14]. However, the master equations describing the non-Markovian dynamics are rarely exactly solvable, but recently in Ref.
[6] the authors present an exact calculation to obtain the reduced density matrix of two coupled spins in a quantum Heisenberg XY spin environment in the thermodynamic limit at finite temperature. For spin systems, much attention has been devoted to the quantum correlations arising in spin chains at thermal equilibrium with their environment. In this sense, we consider the thermal quantum discord and entanglement within a two-spin-qubit anisotropic XXZ Heisenberg model with antisymmetric DM interaction in the presence of bath spin-1/2 particles as the environment. Since the dynamics of the system qubits in the model we study is highly non-Markovian, and hence the traditional Markovian master equations commonly used are not expected to apply, we follow the novel technique of Ref. [6]. In section 2 we introduce the model Hamiltonian describing the two-qubit system with DM interaction coupled to an XY spin chain and derive an analytic formula for the exact solution of the non-Markovian dynamics. In section 3 we review the concepts of entanglement and quantum discord, and in section 4 we discuss the effects of D_z (the z-component of the DM interaction), J_z (the degree of anisotropy), J (J > 0 corresponds to the antiferromagnetic case and J < 0 to the ferromagnetic case), T (the temperature), g (the bath-system coupling constant) and g_0 (the inner-bath-spin coupling constant) on entanglement and quantum discord, separately. Our results show that a stronger DM interaction and a weak degree of anisotropy can decrease the disruptive effects of the environment. In addition, increasing both the temperature and the bath-system coupling constant will reduce the amount of quantum correlation, whereas growth of the inner-bath-spin coupling constant, i.e. strong quantum correlations within the spins of the bath, reduces decoherence and improves the quantum correlations. By comparing the quantum discord with the entanglement between the two qubits in the model, we have seen that quantum discord is more robust than entanglement against increasing temperature T and decreasing D_z. Conclusions are then presented in Section 5.

Model and solution

The quantum system we consider consists of two anisotropic spin-1/2 particles with DM interaction, influenced by a local external magnetic field along the z-direction and coupled to bath spin-1/2 particles as the environment. Here the DM interaction is a supplementary magnetic term arising from the interaction of a particle's spin with its motion, which can be represented in the form Σ_ij D_ij · (S_i × S_j), where D is the DM coupling vector and the sum runs over pairs of spins. To see the effect of the DM interaction, we choose the z-component of the anisotropic parameter D. The corresponding Hamiltonian of the total system, H_tot, can be written as the sum of the Hamiltonians of the system itself, H_s, of the bath spins, H_b, and of the coupling between them, H_sb. Considering the interaction between the two anisotropic system-spin particles as a Heisenberg XXZ model with DM interaction parameter D_z, the Hamiltonian of the system H_s follows, and we restrict the interactions among the N components of the bath spins and between the system and the bath to be such that they can be described by a Heisenberg XY model. For definiteness, the subscript s refers to the system and b to the bath spins; the parameter ε characterizes the intensity of the magnetic field applied along the z-axis and D_z is the z-component of the DM interaction.
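The displayed Hamiltonians did not survive the text extraction, so the following numpy fragment is only a sketch of a standard two-qubit XXZ + z-component DM Hamiltonian consistent with the parameters named in the text; overall prefactors (factors of 1/2, Pauli matrices versus spin-1/2 operators) are assumptions of ours.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def H_system(J, Jz, Dz, eps):
    """Two-qubit XXZ Hamiltonian with a z-component DM term and a local field
    eps along z; only the structure follows the text, prefactors are assumed."""
    Hxy = J * (np.kron(sx, sx) + np.kron(sy, sy))        # XX + YY exchange
    Hzz = Jz * np.kron(sz, sz)                           # anisotropy along z
    Hdm = Dz * (np.kron(sx, sy) - np.kron(sy, sx))       # z-component DM interaction
    Hfield = eps * (np.kron(sz, I2) + np.kron(I2, sz))   # local magnetic field
    return Hxy + Hzz + Hdm + Hfield
```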
The coefficients J z , J, g 0 and g correspond to the real coupling constants with J z , J > 0 for the antiferromagnetic case and J z , J < 0 for the ferromagnetic case. By using the collective angular momentum operators J ± = N j=1 S ± jb and the Holstein-Primakoff transformation as and J − = ( √ N − a † a) a with [a, a † ] = 1, the Hamiltonians of the bath spin H b and interaction H sb can be rewritten as By considering the thermodynamic limit (i.e., N → ∞) at a finite temperature these above equations are reduced to Observe that, the bath spin is reduced into a single-mode bosonic thermal field with non-Markovian effect on the dynamics of the our system. Since this thermal field will not remain in a thermal equilibrium state as usually assumed for an environment with very large degrees of freedom, therefor, the master equation approach can not be useful. Following the special operator technique introduced in [6], by tracing over the degrees of freedom of the environment from density matrix we can catch the exact non-Markovian dynamics of reduced density matrix for the system at arbitrarily finite temperatures. We assume that the initial density matrix for the total system can be described by a pure and separable state as ρ tot (0) = ρ s (0) ⊗ ρ b . Where ρ b refers to the initial density operator of the single-mode bosonic thermal field which at thermal equilibrium is represented by the Boltzmann distribution as where k B is Boltzmanns constant which we henceforth set equal to one. When two spin particles of the system is initially prepared in a normalized state with maximally quantum correlation as it is easy to check that, the reduced density matrix for the system in the standard basis |00 , |01 , |10 , |11 has the form as Calculation of the above equation could be more difficult due to mutual and internal coupled of the system and environment. One way of circumventing this problem introduced in [6] by converting the time evolution equation of the system under the action of the total Hamiltonian into a set of coupled noncommuting operator variable equations. Then by turning the coupled noncommuting operator variable equations into commuting ones via introducing a new set of transformations on the operator variables, the trace over the environmental degrees of freedom can be performed and the exact reduced dynamics of the system can be obtained. For the case studied here, we can see that Note that, the coefficients A(t), B(t), C(t) and D(t) are functions of operators a and a † and do not commute with each other. The Schrödinger equation which by replacing the Eq. (8), transform to 4 coupled first-order differential equations of noncommuting operator variables as d dt From Eq. (8), the initial conditions are given by Since, operator variables which appear in above coupled differential equations do not commute with each other, we cannot use the conventional methods of solving coupled differential equations. Fortunately, H tot is of an effective Jaynes-Cumming type and it can be block-diagonalized. Therefor, by finding the proper transformations of noncommuting operator variables, we can rewrite the Eq. (10) as the coupled differential equations of complex-number variables. In this model, we can reach our goal by using transformations as Observe that, operator variables A 1 (t), B 1 (t), C 1 (t) and D 1 (t) are functions ofn = a † a and commute with each other. 
Under these transformations the coupled differential equations of noncommuting operator variables change to the coupled differential equations of complex-number and commuting operator variables as d dt Notice that, if the total Hamiltonian cannot be block-diagonalized, for example, for the spin anisotropic particles with x-or y-component DM interaction, the operator method used here will then not apply to solve the problem exactly. By solving Eq. (13) in the usual way with initial condition A 1 (0) = 1 and B 1 (0) = C 1 (0) = D 1 (0) = 0, we can evaluate the time evolution for the initial two spins state of |00 . A similar analysis as above can be made if the two spins is initially prepared in |11 state. Let For this case, the proper transformations of noncommuting operator variables have the form asà Insertion of these transformations in to Eq. (10) yield d dtà By solving these coupled differential equations of complex-number and commuting operator variables in the usual way with initial conditionà 1 (0) =B 1 (0) =C 1 (0) = 0 andD 1 (0) = 1, we can evaluate the time evolution for |11 . From the results of Eqs. (13) and (16), the reduced density matrix Eq. (7) in the representation spanned by the two-qubit product states |00 = |0 1 ⊗ |0 2 , |01 = |0 1 ⊗ |1 2 , |10 = |1 1 ⊗ |0 2 and |11 = |1 1 ⊗ |1 2 can be written as with Unfortunately, achieved analytic expression of solutions of Eqs. (13) and (16) has no compact form and, thus, we do not show it here explicitly. Quantum correlations The role of entanglement in developing the idea of quantum computers [15] and sending information in novel ways, such as quantum teleportation or quantum cryptography, has turned this intrinsic principle of quantum mechanics into one of the most prolific topics. To investigate the entanglement dynamics of the our bipartite system, we apply Wootters concurrence [16]. The concurrence can be calculated explicitly from the time dependent density matrix ρ s (t) of the two spins where the quantities λ i are the square roots of the eigenvalues of the matrix ϑ = ρ s (t)(σ y ⊗σ y )ρ * s (t)(σ y ⊗σ y ), arranged in decreasing order. Here ρ * s (t) means the complex conjugation of ρ s (t), and σ y is the Pauli matrix. The concurrence varies from zero for a separable state to one for a maximally entangled state. The quantum state Eq. (17) are entangled if and only if either ρ 22 ρ 33 < |ρ 14 | 2 or ρ 11 ρ 44 < |ρ 23 | 2 . Both conditions cannot hold simultaneously [17]. The entanglement of this state is obtained as However, concurrence is not the only type of quantum correlations. The different kind of quantum correlations than entanglement, so called quantum discord, has interesting and significant applications in quantum information processing. Quantum discord is not always larger than entanglement [18,19]. This indicates that discord is not simply a sum of entanglement and some other nonclassical correlation. The relation between quantum discord, entanglement, and classical correlation even for the simplest case of two entangled qubits is not yet clear. In a bipartite quantum state with density matrix operator ρ s (t), that is inclusive of two part and has the form as two-qubit X states [20], quantum discord is introduced as the difference between the total correlation and the classical correlation with the following expression D(ρ s (t)) = I(ρ s (t)) − C(ρ s (t)). There is the quantum mutual information. 
In this notation γ j are the eigenvalues of the density matrix ρ s (t) and are the von Neumann entropy of ρ 1s and ρ 2s , the marginal states of ρ s (t). In order to computing the classical correlation C(ρ s (t)), we must perform a suitable projection measure on subsystem 2. After this, the state ρ s (t) will change to the ensemble {ρ i ; p i } (i = 0, 1) with Where V ∈ SU(2) is a unitary operator with unit determinant. The ensemble {ρ i ; p i } can be characterized by their eigenvalues as with Θ = 4kl(|ρ 14 | 2 + |ρ 23 | 2 + 2Re(ρ 14 ρ 23 )) − 16mRe(ρ 14 ρ 23 )) + 16nIm(ρ 14 ρ 23 )), (27) and their corresponding probabilities as The parameters m, n, k and l are dependent to the projection of the von Neumann measurement V on the Bloch sphere. By rewriting this operator as V = tI + i Y . σ with t, y 1 , y 2 , y 3 ∈ R and t 2 + y 2 1 + y 2 2 + y 2 3 = 1 we have m = (ty 1 + y 2 y 3 ) 2 , n = (ty 2 − y 1 y 3 )(ty 1 + y 2 y 3 ), k = t 2 + y 2 3 , l = 1 − k. Quantum correlations of two coupled spins in spin bath In this section, we will discuss the time variations of the quantum correlations for antiferromagnetic (J > 0) and ferromagnetic (J < 0) in our model. Since the explicit expressions of entanglement and the quantum discord are very complicated, here we skip the details and give our results in terms of figures. In order to demonstrate the properties of different DM coupling parameter D z on quantum correlations in antiferromagnetic case, in Fig. 1(a) the thermal entanglement and in Fig. 1(b) the thermal quantum discord versus time are plotted for different values of D z . For both figures the coupling constants are J = 2.0 and J z = 1.0 and other parameters are ε = 0.5, g 0 = g = 1.0 and T = 2.0. From these figures, its obvious that the vanishing rate of the entanglement and the quantum discord decreases as the DM interaction parameter D z increases. This phenomenon indicates that, the decays of the quantum correlations due to interaction with spin bath can be compensated by tuning the DM interaction. In addition, it is easy to see that both entanglement and the quantum discord are recovered in the long time as we expected from non-Markovian dynamics. The comparison between these two figures shows that the quantum discord is more robust than entanglement under destroyed conditions. Similarly, Fig. 2 shows the characteristics of different D z on quantum correlations in ferromagnetic case (J = −2.0). Fig. 2(a) illustrates the thermal entanglement and Fig. 2(b) illustrates the thermal quantum discord versus time with J z = 1.0, ε = 0.5, g 0 = g = 1.0 and T = 2.0. In this condition, the quantum correlations decrease with gentle slope than antiferromagnetic case. As is indicated in Figs. 2(a) and 2(b) quantum correlations in ferromagnetic case is more robust than antiferromagnetic case versus increasing the values of the DM interaction. Fig. 3 the coupling constant J z has a lesser value than the same in Figs. 1 and 4. The comparison between these three figures shows that increasing the coupling parameter J z can enhance the fluctuations of the quantum correlations. It seems that increasing the J z will decrease the effects of the DM interaction. In one side, the curves of the higher value of J z are closer than of the lower ones. On the other side, the diminution of the both entanglement and the quantum discord in the long time scale for the higher values of D z is more in system with strong coupling. 
So, as a result, the diminution of the quantum correlations can be compensated both by increasing D z and by decreasing J z of the reduced system. The effects of temperature T on the quantum correlations are illustrated in Fig. 5, where the entanglement and the quantum discord versus time and temperature T with J = 2.0, J z = 1.0, ε = 0.5, g 0 = g = 1.0 and D z = 2.0 are depicted in Figs. 5(a) and 5(b), respectively. We find that at very small temperature both the entanglement and the quantum discord take their maximum values, but increasing the temperature decreases them. This decrease of the quantum correlations can be explained as follows. At very small temperature no excitation of the bath spins exists and the bath is in a thoroughly polarized state with all spins down. In other words, the decoherence effects induced by the bath are not strong enough to weaken the quantum correlations, but at high temperature the spin orientation of the bath becomes very disordered, so the thermal fluctuations of the spin bath overcome the quantum fluctuations (due to the D z interaction). When the coupling between the bath and the reduced system weakens, the mediating role of the spin bath weakens as well, so the two qubits of the reduced system cannot exchange information with each other and the value of the quantum correlations is reduced [21]. Note that the quantum discord, unlike the entanglement, never becomes zero. As a result, in open quantum systems in which the inner-bath-spin coupling constant is strong, one can improve both the entanglement and the quantum discord by properly tuning J z and D z . Conclusions In summary, we have investigated the quantum discord and entanglement within a two-qubit anisotropic XXZ Heisenberg model with an antisymmetric DM interaction tunable parameter, such as D z , influenced by a local external magnetic field along the z-direction in the presence of a spin bath. We find that with decreasing J z and increasing D z the values of the entanglement and the quantum discord increase for the antiferromagnetic case. Under the same conditions the quantum correlations of the ferromagnetic case decrease with a gentler slope than in the antiferromagnetic case. In addition, we find that increasing the temperature T reduces both the entanglement and the quantum discord. Finally, we propose that in open quantum systems in which the devastating effects of T and g cannot be reduced, one can improve both the entanglement and the quantum discord by properly tuning J z and D z or by changing the constituent material of the central spins.
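As a numerical illustration of the concurrence used in Section 3, the short Python sketch below evaluates Wootters' formula for a two-qubit X-state density matrix; the matrix entries are illustrative placeholders rather than values computed from the model above.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence: C = max(0, l1 - l2 - l3 - l4), where the l_i are the
    square roots of the eigenvalues of rho.(sy x sy).rho*.(sy x sy), sorted in
    decreasing order."""
    sy = np.array([[0.0, -1j], [1j, 0.0]])
    flip = np.kron(sy, sy)
    R = rho @ flip @ rho.conj() @ flip
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Illustrative X-state density matrix in the basis |00>, |01>, |10>, |11>
rho = np.array([[0.40, 0.00, 0.00, 0.20],
                [0.00, 0.10, 0.05, 0.00],
                [0.00, 0.05, 0.10, 0.00],
                [0.20, 0.00, 0.00, 0.40]], dtype=complex)

# Entanglement criterion quoted in Section 3: rho22*rho33 < |rho14|^2 or rho11*rho44 < |rho23|^2
print(abs(rho[0, 3]) ** 2 > (rho[1, 1] * rho[2, 2]).real)  # True -> entangled
print(round(concurrence(rho), 3))                           # 0.2 for this example
```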
2012-05-01T10:34:14.000Z
2012-04-27T00:00:00.000
{ "year": 2012, "sha1": "0584520a59398e12ad4c022bb19b223f739ccc76", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "17526ca62192d9e91633dd04cfb517024635e478", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
204849186
pes2o/s2orc
v3-fos-license
Association of Flavored Tobacco Use With Tobacco Initiation and Subsequent Use Among US Youth and Adults, 2013-2015 Key Points Question What is the association between first flavored use of a given tobacco product and subsequent tobacco use, including progression of tobacco use, among US youth (aged 12-17 years), young adults (aged 18-24 years), and adults (aged ≥25 years)? Findings In this cohort study of 11 996 youth and 26 447 adults who participated in waves 1 and 2 of the Population Assessment of Tobacco and Health Study, most youth and young adult new tobacco users first tried a flavored product. First use of flavored tobacco products was positively associated with subsequent product use compared with first use of a nonflavored product. Meaning First use of flavored tobacco products may place youth and adults at risk of subsequent tobacco use. Introduction Children prefer sweet flavors more than adults do, 1 and tobacco industry documents [2][3][4][5] confirm that flavors in tobacco products can increase their appeal to young and inexperienced tobacco users. Consistent with studies [6][7][8] on menthol cigarettes and flavored cigars, data from the first wave of the Population Assessment of Tobacco and Health (PATH) Study [9][10][11] revealed a strong inverse age gradient in the prevalence of flavored tobacco product use, with the highest use among youth aged 12 to 17 years, followed by young adults aged 18 to 24 years, and the lowest use among adults aged 25 years and older. These data 9,10 also show a strong association between first use of a flavored tobacco product and current tobacco use among youth and adults. Few longitudinal studies to date have examined the association between flavored tobacco product use and initiation or continuation of tobacco use, and these studies [12][13][14] have largely been limited to menthol cigarettes. These studies highlight that menthol brand recognition is associated with smoking experimentation among youth, 12 that adolescents who initiate smoking with menthol cigarettes are more likely to progress to established smoking by the end of 3 years than those who initiated with nonmenthol cigarettes, 13 and that prior initiation with a menthol cigarette compared with a nonmenthol cigarette is associated with current cigarette smoking at follow-up among young adults. 14 Five other cross-sectional studies 10,[15][16][17][18] support these findings. The current study extends prior research by leveraging longitudinal data from waves 1 and 2 of the PATH Study to assess whether there is a prospective association between first flavored use of a given tobacco product and subsequent use of that specific product (eg, e-cigarettes). In addition, this study examines whether first use of a flavored tobacco product at wave 1 is associated with progression to greater frequency of tobacco use at wave 2. The primary aims of this study are to report the proportions of new tobacco users at wave 2 whose first use of a given tobacco product was flavored (ie, first flavored use) and to assess the association between first flavored use of a given tobacco product at wave 1 and subsequent tobacco use, including frequency of tobacco use, at wave 2 for youth (aged 12-17 years), young adults (aged 18-24 years), and adults (aged ≥25 years). Methods The National Institutes of Health, through the National Institute on Drug Abuse, is partnering with the US Food and Drug Administration's Center for Tobacco Products to conduct the PATH Study under a contract with Westat. The PATH Study is an ongoing, nationally representative, longitudinal cohort study of adults and youth in the United States. The PATH Study uses audio computer-assisted self-interviews available in English and Spanish to collect self-reported information on tobacco-use patterns and associated health behaviors. Wave 1 data collection was conducted from September 12, 2013, to December 14, 2014; wave 2 data were collected from October 23, 2014, to October 30, 2015. The PATH Study recruitment used a stratified, address-based, area-probability sampling design at wave 1 that oversampled adult tobacco users, young adults (aged 18-24 years), and African American adults. An in-person screener was used at wave 1 to select youth and adults from households for participation.
The PATH study was conducted by Westat and approved by the Westat institutional review board.All participants aged 18 years and older provided written informed consent, with youth participants aged 12 to 17 years providing assent while their parent or legal guardian provided written informed consent.This study follows the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for observational studies. 19pulation and replicate weights using the balanced repeated replication method with the Fay adjustment (ρ = 0.3) were created that adjusted for the complex study design characteristics (eg, oversampling at wave 1) and nonresponse at waves 1 and 2. Combined with the use of a probability sample, the weights allow analyses of the PATH Study data to compute estimates that are robust and representative of the noninstitutionalized, civilian US population aged 12 years and older.The longitudinal sampling weights provided for wave 2 are adjusted for wave 2 nonresponse to ensure that the wave 1 sample is representative of the population in the longitudinal estimates.Further details regarding the PATH Study design and methods have been published elsewhere. 20Details on survey interview procedures, questionnaires, sampling, weighting, and information on accessing the data are available online. 21 wave 1, the weighted response rate for the household screener was 54.0%.Among households that were screened, the overall weighted response rate at wave 1 was 74.0% for the adult interview and 78.4% for the youth interview.At wave 2, the overall weighted response rate was 83.2% for the adult interview and 87.3% for the youth interview. At wave 1, interviews were completed with 32 320 adults (aged Ն18 years) and 13 651 youth (aged 12-17 years).At wave 2, interviews were completed with 28 362 adults and 12 172 youth.The differences in number of completed interviews between wave 1 and wave 2 reflect attrition due to nonresponse, mortality, and other factors.The sample at wave 2 also includes 1915 youth aged 17 years at wave 1 who responded to the youth questionnaire in wave 1 and then turned 18 and responded to the adult questionnaire in wave 2. 21 Between waves 1 and 2, retention rates were 88.4% for continuing youth, 83.1% for continuing adults, and 85.7% for aged-up adults ( Measures Tobacco Product Use Ever and current tobacco use was assessed at waves 1 and 2 among youth, young adults, and adults for cigarettes, e-cigarettes, traditional cigars, cigarillos, filtered cigars, hookah tobacco, pipe tobacco, smokeless tobacco (eg, moist snuff or chew), snus pouches, and dissolvable tobacco.Any cigar use was defined as using traditional cigars, cigarillos, or filtered cigars.Any smokeless tobacco use was defined as using smokeless tobacco or snus pouches.Youth, young adults, and adults who tried a tobacco product for the first time between waves 1 and 2 were defined as new users, with age at tobacco trial defined as their age at wave 1.Current use was defined in multiple ways as outlined in previous analyses 22 and in the eTable in the Supplement.Participants missing data on moderate, frequent, or daily use of a product because of an instrument skip pattern were coded as not having the outcome and were included in the denominator. 
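The weighting scheme described above relies on Fay-adjusted balanced repeated replication (ρ = 0.3). The sketch below is only a schematic of the variance formula usually paired with that scheme; the replicate point estimates and the PATH replicate weights themselves are not reproduced here, and the numbers are synthetic.

```python
import numpy as np

def fay_brr_variance(theta_full, theta_reps, fay_rho=0.3):
    """Fay-adjusted BRR variance: sum over replicates of (theta_r - theta)^2,
    divided by R * (1 - rho)^2."""
    theta_reps = np.asarray(theta_reps, dtype=float)
    n_reps = theta_reps.size
    return np.sum((theta_reps - theta_full) ** 2) / (n_reps * (1.0 - fay_rho) ** 2)

# Synthetic example: a weighted prevalence of 0.18 with 100 replicate estimates.
rng = np.random.default_rng(0)
theta_full = 0.18
theta_reps = theta_full + rng.normal(scale=0.004, size=100)

se = np.sqrt(fay_brr_variance(theta_full, theta_reps))
print(f"Estimate = {theta_full:.3f}, BRR SE = {se:.4f}")
```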
Wave 1 First Flavored Tobacco Product Use At wave 1, ever cigarette users were asked whether, when they first used a cigarette (youth) or when they first started smoking cigarettes (adults), it was "flavored to taste like menthol or mint."Ever cigarette users who replied "no" to the menthol question were then asked whether their first users of other tobacco products were queried about whether, when they first used the product (youth) or when they first started using the product (adults), it was "flavored to taste like menthol, mint, clove, spice, candy, fruit, chocolate, alcohol (such as wine or cognac), or other sweets" (eTable in the Supplement). Wave 1 Covariates All covariates were assessed at wave 1 and were selected on the basis of previous work 10 Statistical Analysis Data analysis was conducted from July 2016 to June 2019.Analyses were conducted using svy procedures in Stata/SE statistical software version 14.2 (StataCorp) to account for the complex study design.Analysis of new users focused on the prevalence of using a flavored product at first tobacco use at wave 2 (Figure).For all other analyses, the main outcome was current product-specific use at wave 2 as defined in the eTable in the Supplement.The prevalence of each outcome was estimated for youth, young adults, and adults aged 25 years and older according to age at wave 1. Estimates with denominators less than 50 or relative SE greater than 30% were suppressed. 24Missing data on age, sex, race or Hispanic ethnicity, and adult education were imputed at wave 1 as described elsewhere. 21Participants missing any response to a composite variable (eg, any past 30-day tobacco use) were treated as missing; missing data were handled with listwise deletion.Multivariable models were built separately for youth, young adults, and adults aged 25 years or older; all models included sex, race/ethnicity, education, past 30-day alcohol, marijuana, or other drug use, and the 3 Global Appraisal of Individual Needs-Short Screener subscales as covariates.Adult models also included age and income.Modified Poisson regression models 25 estimated the association between first flavored tobacco use among ever users at wave 1 and current tobacco use at wave 2, as well as moderate, frequent, and daily use at wave 2. For the 3 products for which there were sufficient sample sizes in each of the age groups (flavored cigarettes, menthol cigarettes, and flavored e-cigarettes), we conducted multivariable multinomial logistic regression models of increasing frequency of tobacco use compared with no use from the mutually exclusive categories of tobacco use frequency.For all analyses, α was set at P < .05using 2-sided tests.Stata's svy commands used a logit transformation to produce confidence intervals with limits between 0 and 1. 26 Results The mean (SE) age of the participants at wave 2 was 14.5 (0.0) years for youth, 21.1 (0.0) years for young adults, and 50.Percentages are weighted to represent the US population, and 95% CIs (whiskers) are estimated using the balanced repeated replication method.New use is ascribed to the participants' age at wave 1. Respondents were categorized into age groups (youth aged 12-17 years, young adults aged 18-24 years, and adults aged Ն25 years) according to their ages at wave 1.New use of a tobacco product is defined as starting to use a product between waves 1 and 2. This can include never users at wave 1 who start tobacco use at wave 2 and ever users at wave 1 who report use of a new product or products at wave 2. 
Individuals who reported "don't know" or refused to answer any part of the definition of ever use or first flavored use were excluded from the denominator.Unweighted numbers and unweighted percentages are presented for each age group: Among 11 996 youth, 2136 (17.8%) reported new use of a tobacco product, 9622 (80.2%) reported no new initiation, and 238 (2.0%) did not provide information on initiation between wave 1 and wave 2. Among 7325 young adults, 2058 (24.9%) reported new use of a tobacco product, 5232 (74.7%) reported no new initiation, and 35 (0.4%) did not provide information on initiation between wave 1 and wave 2. Among 19 116 adults aged 25 years and older, 2580 (8.1%) reported new use of a tobacco product, 16 407 (91.4%) reported no new initiation, and 129 (0.5%) did not provide information on initiation between wave 1 and wave 2. First flavored use is defined as reporting that the first product used was "flavored to taste like menthol, mint, clove, spice, candy, fruit, chocolate, alcohol (such as wine or cognac), or other sweets."Individuals who did not report "yes," "no," or "I don't know" or refused to answer whether their first product was flavored were excluded from the denominator.Flavored pipe tobacco and dissolvable tobacco use was not assessed among youth.Unweighted numbers and unweighted percentages are presented for each age group.For 2136 youth new tobacco users, 95 (4.5%) did not report whether they had used any flavored product between wave 1 and wave 2. For 2058 young adult new tobacco users, 58 (2.8%) did not report whether they had used any flavored product between wave 1 and wave 2. For 2580 adult (aged Ն25 years) new tobacco users, 58 (2.3%) did not report whether they had used any flavored product between wave 1 and wave 2. Any tobacco product included cigarettes, e-cigarettes, traditional cigars, cigarillos, filtered cigars, hookah, pipe (for adults only), smokeless tobacco, and snus or dissolvable tobacco (for adults only); any cigar use reflects use of a traditional cigar, cigarillo, or filtered cigar.Data are from the Population Assessment of Tobacco and Health (PATH) Study, [9][10][11] Abbreviations: aPR, adjusted prevalence ratio; NA, not applicable. c Moderate use was defined as having smoked or used the product on at least 6 of the past 30 days.Frequent product use was defined as having smoked or used the product on at least 20 of the past 30 days.Daily use among youth was defined as having smoked or used the product on 30 of the past 30 days.Individuals who responded "don't know" or refused to answer were excluded from the denominator.Unweighted numbers (percentages) of respondents excluded from the denominator for moderate, frequent, and daily use were as follows: cigarettes moderate or frequent use, 25 Abbreviations: aPR, adjusted prevalence ratio; NA, not applicable. c Moderate use was defined as having smoked or used the product on at least 6 of the past 30 days.Frequent product use defined as having smoked or used the product on at least 20 of the past 30 days.Individuals who responded "don't know" or refused to answer were excluded from the denominator.Unweighted numbers (percentages) of respondents excluded from the denominator for moderate and frequent use were as follows: Abbreviations: aPR, adjusted prevalence ratio; NA, not applicable. 
c Moderate use was defined as having smoked or used the product on at least 6 of the past 30 days. Frequent product use was defined as having smoked or used the product on at least 20 of the past 30 days. Individuals who responded "don't know" or refused to answer were excluded from the denominator.
Discussion The current study found that (1) youth and young adults who were new users of a tobacco product at wave 2 (over the 10- to 13-month follow-up period) were more likely to try flavored tobacco products than adults; (2) first use of a flavored cigarette documented at wave 1 was positively associated with past 12-month and past 30-day cigarette use among youth, young adults, and adults aged 25 years and older at wave 2; (3) first use of a menthol or mint flavored cigarette documented at wave 1 was positively associated with past 12-month and past 30-day cigarette use at wave 2 in all age groups; (4) first use of flavored e-cigarettes, cigars, hookah, and smokeless tobacco was associated with subsequent use of those products at wave 2 among young adults and adults aged 25 years and older; (5) first flavored use of a cigarette, e-cigarette, any cigar, cigarillo, filtered cigar, hookah, and any
Table 1. Association Between First Tobacco Product Flavored Among Youth Ever Tobacco Users at Wave 1 and Product-Specific Tobacco Use at Wave 2 of the Population Assessment of Tobacco and Health Study (continued)
Multivariable modified Poisson regression models among youth were adjusted for sex, race/ethnicity, education, past 30-day alcohol, marijuana, or other drug use, and 3 Global Appraisal of Individual Needs-Short Screener subscales (internalizing problems, externalizing problems, and substance use problems). Individuals who reported "don't know" or refused to answer any of these items were treated as missing. Among youth ever tobacco users, data were missing for sex (0 participants [0.0%]), race/ethnicity (0 participants [0.0%]), education (128 participants [5.2%]), past 30-day alcohol, marijuana, or other drug use (64 participants [2.6%]), internalizing problems subscale (52 participants [2.1%]), externalizing problems subscale (93 participants [3.8%]), and substance use problems subscale (91 participants [3.7%]). Respondents who reported never alcohol, marijuana, or other drug use were categorized as non-past 30-day users. A separate comparison was made between wave 1 cigarette smokers who first smoked a menthol or mint flavored cigarette vs smokers who first smoked a nonflavored cigarette; individuals who reported first other flavored cigarette use were excluded from the denominator. Estimates were suppressed when they had low statistical precision (based on a sample size of less than 50, or a coefficient of variation larger than 30%) or when there were insufficient observations to compute balanced repeated replication SEs. Any smokeless tobacco use was defined as smokeless and/or snus use.
Table 2. Association Between First Tobacco Product Flavored Among Young Adult Ever Tobacco Users at Wave 1 and Product-Specific Tobacco Use at Wave 2 of the Population Assessment of Tobacco and Health Study (continued)
Multivariable modified Poisson regression models among young adults were adjusted for age, sex, race/ethnicity, education, income, past 30-day alcohol, marijuana, or other drug use, and 3 Global Appraisal of Individual Needs-Short Screener subscales (internalizing problems, externalizing problems, and substance use problems). Individuals who reported "don't know" or refused to answer any of these items were treated as missing. A separate comparison was made between wave 1 cigarette smokers who first smoked a menthol or mint flavored cigarette vs smokers who first smoked a nonflavored cigarette; individuals who reported first other flavored cigarette use were excluded from the denominator. Estimates were suppressed when they had low statistical precision (sample size less than 50 or coefficient of variation larger than 30%).
d Daily use among adults was defined as now smokes or uses the product every day. Current regular use was defined for cigarettes as having smoked at least 100 cigarettes in one's lifetime and now smoking every day or some days; for all other products, regular use was defined as having ever used a product "fairly regularly" and now using it every day or some days. Individuals who responded "don't know" or refused to answer were excluded from the denominator. There were insufficient observations to compute balanced repeated replication SEs for some estimates. Any smokeless tobacco use was defined as smokeless and/or snus use.
Table 3. Association Between First Tobacco Product Flavored Among Adults Aged 25 Years and Older Ever Tobacco Users at Wave 1 and Product-Specific Tobacco Use at Wave 2 of the Population Assessment of Tobacco and Health Study (continued)
Table 4. Multivariable Multinomial Logistic Regression Models of Frequency of Use at Wave 2 Among Ever Users of Specified Product at Wave 1 of the Population Assessment of Tobacco and Health Study, by Age Group
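The tables above report adjusted prevalence ratios from modified Poisson regression fitted with Stata's svy commands. The sketch below shows the general shape of that approach (a Poisson working model with a robust sandwich variance) on synthetic data; it does not use the PATH survey weights or replicate weights, so it is illustrative only, and all variable names are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic toy data: exposure = first product flavored at wave 1 (0/1),
# outcome = past 30-day use of the product at wave 2 (0/1), one covariate.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "flavored_first": rng.integers(0, 2, n),
    "age": rng.uniform(12, 18, n),
})
df["use_wave2"] = rng.binomial(1, 0.15 + 0.10 * df["flavored_first"])

X = sm.add_constant(df[["flavored_first", "age"]])
# Poisson working model with a robust (HC1) covariance; exponentiated
# coefficients are interpreted as adjusted prevalence ratios.
res = sm.GLM(df["use_wave2"], X, family=sm.families.Poisson()).fit(cov_type="HC1")
pr = np.exp(res.params["flavored_first"])
print(f"Prevalence ratio for first flavored use (toy data): {pr:.2f}")
```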
2019-10-24T09:17:56.850Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "32aecd29a6e9662192031f5c16597527b8f923ba", "oa_license": "CCBY", "oa_url": "https://jamanetwork.com/journals/jamanetworkopen/articlepdf/2753396/villanti_2019_oi_190526.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5a585dbf836098925123420036ab6523c75c9faa", "s2fieldsofstudy": [ "Medicine", "Agricultural And Food Sciences", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
221182836
pes2o/s2orc
v3-fos-license
Unsymmetrical polysulfidation via designed bilateral disulfurating reagents Sulfur-sulfur motifs widely occur in vital function and drug design, which yearns for polysulfide construction in an efficient manner. However, it is a great challenge to install desired functional groups on both sides of sulfur-sulfur bonds at liberty. Herein, we designed a mesocyclic bilateral disulfurating reagent for sequential assembly and modular installation of polysulfides. Based on S-O bond dissociation energy imparity (mesocyclic compared to linear imparity is at least 5.34 kcal mol−1 higher), diverse types of functional molecules can be bridged via sulfur-sulfur bonds distinctly. With these stable reagents, excellent reactivities with nucleophiles including C, N and S are comprehensively demonstrated, sequentially installing on both sides of sulfur-sulfur motif with various substituents to afford six species of unsymmetrical polysulfides including di-, tri- and even tetra-sulfides. Life-related molecules, natural products and pharmaceuticals can be successively cross-linked with sulfur-sulfur bond. Remarkably, the cyclization of tri- and tetra-peptides affords 15- and 18-membered cyclic disulfide peptides with this reagent, respectively. S ulfur-sulfur bond has unique and significant roles in biological, pharmaceutical, and material fields. In organism, tertiary structures of proteins are fixed and stabilized via the linkage of sulfur-sulfur bridge among secondary structures, contributing to the versatility of proteins with complex threedimensional structure (Fig. 1a) 1,2 . Polysulfides such as trisulfides and tetrasulfides are primary H 2 S donors 3 , signaling of which endogenous gasotransmitter occurs via persulfidation of cysteine residues (RSH) to persulfides (RSSH) in proteins 4 with the reduction of glutathione 5 (Fig. 1b). As a powerful linker, sulfur-sulfur bridges cyclized peptide drugs with higher stability, activity, and potency compared with corresponding linear ones (Fig. 1c) 6 . Given the excellent metabolism of sulfur-sulfur bond in organism, cutting-edge drug design strategies of antibody-drug conjugates (ADCs) [7][8][9] and small molecule-drug conjugates (SMDCs) [10][11][12][13] involved disulfur extensively, such as Mylotarg 14 and Vintafolide 15 , in which sulfur-sulfur bond serves as a reversible cross-linker. Cytotoxic drug molecule can be programmatically released relying on the reduction of glutathione when delivered to the target cells (Fig. 1d) 16 . Furthermore, polysulfides also possess high-capacity potential in cathode materials for rechargeable lithium battery, among which outstanding capacity of tetrasulfides are higher than that of trisulfides ( Fig. 1e) [17][18][19] . Despite of the great significance of disulfur, the construction of disulfide is not yet unhindered since of high reactivity from sulfur-sulfur bond [20][21][22][23][24] . Though both nucleophilic 25 and electrophilic 26 disulfurating reagents have been developed, free and flexible installation on both sides of sulfur-sulfur motif is still an insurmountable challenge. Based on our concept of mask effect 27 , we envision that disulfurating reagents with bilateral masks will cross-link two designated functional molecules with sulfur-sulfur bond sequentially and modularly (Fig. 2a). Dialkoxydisulfide 28,29 and diaminodisulfide [30][31][32] have been investigated as sulfur transfer reagents since 1970s. 
However, unsymmetrical polysulfidation with disulfanyl motif has never been achieved owing to the sharp contradiction raised by sequential and selective cleavage of dual S-O(N) bonds. Based on preliminary calculation of an assumed disulfurating process with molecular mechanics method (MM2, Fig. 2b), energy released from the first S-O cleavage is higher than the second due to the ring tension energy (>5.34 kcal mol −1 ) when application of mesocyclic bilateral disulfide, enabling to differentiate S-O bond cleavage (for details, see the Supplementary Figs. 7-9). Herein, we show a mesocyclic bilateral disulfurating reagent for sequential assembly and modular installation of unsymmetric polysulfides. Results Syntheses of diaza-disulfides 3 and aza-trisulfides 4. With this concept, a series of bilateral disulfurating reagents were synthesized (Fig. 3a), whose structures were further confirmed through X-ray analysis of 1f. In order to demonstrate the ladder-type reactivities of reagents, aniline was used as a nucleophile under the assistance of B(C 6 F 5 ) 3 as catalyst (Fig. 3b). As expected, linear disulfurating reagents 1a and 1b resulted in poor selectivities between two S-O/N bonds, bringing mixture when coupling with aniline. Cyclic diaminodisulfide 1c refused to transfer disulfur owing to week reactivity. Cyclic disulfane 1d and 1e failed to generate mono-coupling product 2 owing to the decomposition of starting material. Mono-aza-disulfide 2f was quantitatively obtained when 10-membered disulfane 1f was employed as a disulfurating reagent (for details, see the Supplementary Table 1). Since the first S-O cleavage was controllably realized, another nucleophile was subsequently subjected to aza-disulfide 2f under the assistance of weak base lithium carbonate, affording unsymmetrical diaza-disulfides 3 in Fig. 4. Diverse anilines bearing electron-withdrawing and electron-donating functional groups could be cross-linked with benzyl amines, straight-chain alkylamines, diallylamine, pyridyl methylamine, tryptamine, and even amino-acid esters at liberty (3a-3j). The unsymmetrical diaza-disulfide structure of 3a was further confirmed through Xray analysis. Among them, compound 3d was afforded with a yield of 55% with occurrence of the polymerization of vinyl group accelerated by B(C 6 F 5 ) 3 33 . The relative low efficiency of 3l, 3n, and 3o with tryptophan motif resulted from a cyclic disulfide intermediate generated from nucleophilic cyclization of 2position of indole (for details, see the Supplementary Fig. 2). Amines with sensitive enamine structure, amino-acid esters and antibiotic sulfamethazine underwent the connection smoothly (3k-3m). Moreover, two different amino-acid esters could be cross-linked through sulfur-sulfur bond straightforward (3n and 3o). Notably, sulfamethazine and sulfamethoxazole could be successfully linked with different peptides in good yields (3p and 3q), which displayed a great potential for the synthesis of SMDCs drugs. Besides, the antibacterial sulfamethazine and cinacalcet, a kind of calcimimetics, could be connected efficiently in good yield (3r). With this strategy, amines could be cross-linked with mercaptans via disulfur motif, affording aza-trisulfides 4 smoothly. Both electron donor and acceptor substituted anilines were applied in the connection compatibly (4a-4g). 
Weak nucleophilic thiophenol, straight-chain dodecamercaptan, electron-rich furfuryl mercaptan, electron-deficient 2-mercaptopyrimidine, and even cysteine could be successfully introduced in this connection to afford azatrisulfides (4h-4m). Tryptamine, peptide, amines with sensitive enamine structure, even sulfonamides like sulfamethazine, sulfacetamide and sulfamethoxazole were cross-linked with thiols by disulfur perfectly (4n-4s), which supplies an efficient protocol for drug-linkage. Interestingly, we successfully synthesized an aza-trisulfide with a long chain of thirty-four-atoms via this method (4t). Tripeptides like H-Ala-Phe-Lys-OMe could be cyclized to form 15-membered cyclic peptides 5a and tetrapeptides like H-Ala-Phe-Trp-Lys-OMe could be cyclized to form 18-membered cyclic peptides 5b under the standard conditions (Fig. 5). Syntheses of aza-disulfides 6, trisulfides 7 and disulfides 8. Furthermore, we established the cross-linkage between carbon and nucleophiles with phenyl boric acid and bilateral disulfurating reagent 1f as coupling partner first (Fig. 6). With the optimized conditions, mono-coupling was obtained in 84% yield (for details see the Supplementary Table 2). Investigating on nucleophiles, diverse aromatic rings were cross-linked with amines, mercaptans, and electron-rich aromatics modularly, affording aza-disulfides, trisulfides, and diaryl disulfides, respectively. The arylboronic acids substituted with electronwithdrawing and -donating functional groups afforded the corresponding aza-disulfides readily (6a-6f, 7b-7g, and 8h). Arylboronic acids derived from L-tyrosine and estrone were compatible in the cross-linkage, affording a pathway to late-stage modification of natural products (6g and 7h) insufficient efficiency in the first step. The scope of amine is quite broad when it is served as a nucleophile. Anilines (6a and 6b), aliphatic amines (6c-6e), amino-acid esters (6f and 6g), and antibiotic sulfamethazine (6h) were all efficiently transformed to the corresponding aza-disulfides in moderate yields. Trisulfides could be easily obtained when mercaptans were applied in the cross-coupling. Arylthiophenol like 2-mercaptopyrimidine provided diaryl trisulfide (7a). Other thiols even containing hydroxyl (7b) and triethoxylsilyl ether (7h) could afforded trisulfides in moderate yields. Sterically bulky aliphatic thiols, such as tertbutylthiol and 1-adamantanethiol, showed great reactivity in this reaction (7e and 7f). Furthermore, cysteine derivatives were successfully converted to trisulfide derivatives (7d). Diaryl disulfides were generated when electron-rich aromatics were accommodated under the standard conditions. (+)-δ-Tocopherol, a kind of vitamin E, could be disulfurated directly despite of the presence of free hydroxyl group (8b) under nitrogen atmosphere. Indole derivatives were excellent reactants even there is a free amino group (8c). Heterocycles like thiophene could be connected in the reaction as well (8g). The 2-position of Nmethyl pyrrole possesses sufficient reactivity in the reaction (8h). The structure of 8f was further confirmed through X-ray analysis. Amine-mercaptan cross-linkage Synthesis of tetrasulfides 9. 
Unsymmetrical tetrasulfides have been a challenging target in organic synthesis, but the connection of two different mercaptans with 1d as a disulfurating reagent afforded the desired tetrasulfides highly efficiently, owing to the large difference between the two S-O bonds of the eight-membered 1d (9.53 kcal mol −1 ) (for details see the Supplementary Table 3). The unsymmetrical tetrasulfide linkage was comprehensively investigated in Fig. 7. Pyrimidine and pyrazine can be easily accommodated under the standard conditions (9a-9c). The structure of 9a was further confirmed through X-ray analysis as a linear tetrasulfide. Penicillamine and cysteine, two different amino acids, were cross-linked with a tetrasulfur fragment via this method (9d). Cysteine (9e), a tripeptide (9f), and even a glucosinolate (9g) could be cross-linked with 1-adamantanethiol, forming unsymmetrical tetrasulfides. Sensitive thiols, even those containing hydroxyl and triethoxysilyl ether groups, afforded tetrasulfides (9h). Remarkably, a volatile and low-polarity allicin analog was obtained in a modular fashion when propanethiol and allyl mercaptan were used as nucleophiles (9i). Bilateral reagents 1d and 1f are odorless, stable solids when stored at −10 °C, regardless of air and water. No decomposition was observed even after 5 months, whereas they deteriorate at room temperature after 24 h. With these designed bilateral reagents, we have established six different kinds of polysulfides, most of which are quite stable at room temperature except the aza-trisulfides, which need to be stored at −10 °C for long-term preservation. Diaza-disulfides, aza-trisulfides, aza-disulfides, and tetrasulfides are sensitive to acidic conditions. S 2 Cl 2 , a common disulfur building block, can hardly achieve connections across multiple kinds of heteroatom nucleophiles on account of its unruly reactivity and strong acidity. Taking the synthesis of 9a as an example, the selectivity of 9a over 9a-S 5 is 2.5:1 when S 2 Cl 2 is used, much lower than the 15:1 afforded by our reagent 1d. Besides, there is a huge gap between the efficiencies afforded by S 2 Cl 2 and 1d (8% vs 70%) (Fig. 8a). The di(1-phthalimidyl)disulfane (1b) reagent developed by Harpp 30 , which avoids the acidity problem of S 2 Cl 2 , still remains less selective and efficient owing to its nondistinctive S-N bonds. For instance, Harpp's reagent gave the mono-coupling product in only 30% yield together with 60% of the bis-coupling byproduct in the first-step coupling with aniline. In contrast, a quantitative yield of 2f was obtained with our reagent 1f (Fig. 8b). Data availability The X-ray crystallographic coordinates for the structures reported in this study have been deposited at the Cambridge Crystallographic Data Centre (CCDC), under deposition numbers CCDC-1941481 (1f), 1941479 (3a), 1941480 (4d), 1941478 (8f), and 1941482 (9a). These data can be obtained free of charge from the Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif. Source data are provided with this paper.
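To put the quoted bond-energy differences in perspective, the short sketch below converts them into room-temperature Boltzmann factors. This is only a back-of-the-envelope illustration; it assumes, which the paper does not claim, that the full energy difference is expressed as a difference in free energies between the two competing S-O cleavages.

```python
import math

R_KCAL = 1.987e-3   # gas constant in kcal mol^-1 K^-1
T = 298.15          # room temperature in K

# Energy differences quoted in the text (>5.34 kcal/mol for the mesocyclic
# reagents in general, 9.53 kcal/mol for the two S-O bonds of 1d).
for delta in (5.34, 9.53):
    factor = math.exp(delta / (R_KCAL * T))
    print(f"delta = {delta:>5.2f} kcal/mol  ->  exp(delta/RT) ~ {factor:.1e}")
```

Even a modest fraction of these differences appearing in the transition states would be enough to discriminate the first and second S-O cleavages, consistent with the clean mono-coupling observed with 1f and the 15:1 selectivity observed with 1d.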
2020-08-20T14:17:23.729Z
2020-08-20T00:00:00.000
{ "year": 2020, "sha1": "f2b68b1d75845b213a9ded52a6e99808886905b7", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-020-18029-z.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f2b68b1d75845b213a9ded52a6e99808886905b7", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
237782434
pes2o/s2orc
v3-fos-license
Student Creative Thinking Analysis in Ethnomathematics Based Inquiry Learning on Transformation Materials Creative thinking is a very important aspect of learning, with creative thinking students can develop their abilities in every material taught. This study aims to describe students' creative thinking skills in learning using an ethnomathematicsbased inquiry model. This research is descriptive qualitative research. The subject of this study was a grade XI student MIA MA Al Mukhtar Adipala Cilacap who numbered 10 people. The results of this study showed that at the first meeting for the average creative thinking level of students in all three groups was 85% (high),at the second and third meeting the average level of creative thinking of students was 83%(high). At the fourth meeting, the average level of creative thinking of students reached 85% (high). INTRODUCTION The development flow of science and technology runs so fast that it has a big influence on the quality of human resources (Purwasih &Sariningsih, 2017). In the world of Education, one of the consequences of students are required to continue to develop the skills and potential of various learning activities they get in the classroom (Mawaddah &Maryanti, 2016). The progress of these developments is closely related to the way and ability to think. So it takes a skills and obtain, choose and process information. These abilities require critical, systematic, logical and creative thinking (S. K. Putri et al., 2019). The current reform trend is to address the needs of students through accommodation of pedagogical approaches that improve math learning. Today's challenges make mathematics relevant to today's students because students love direct, visual, and contextual learning rather than abstract-theoretical learning (Grace-Bridges, 2019). Teachers should be aware of these preferences and use techniques that stimulate learning. One of the challenges of teaching teachers is how to expose students to the ever-present relationship between real-world practice and mathematical ideas, between visual-intuitive and rational. Efforts to make mathematics more collaborative, practical, and connected to the real world align with the mathematical education reforms that began at least three decades ago as NCTM standards (Verner et al., 2019). 21st century proficiency is achieved when students have good communication, co-ordination, creativity and critical thinking skills. Kemendikbud formulates about 21st century learning focusing on the ability of learners in finding out from various sources, formulating problems, critical thinking and collaborating and collaborating in solving problems (Salim Nahdi, 2019). The process to develop creativity includes cognitive abilities, features and products produced (HAYLOCK, 1987). An approach based on daily life and a culture close to students becomes a solution to achieve on the goal of learning mathematics. In the last decade, literature has developed related to culture and mathematics as well as describing mathematical examples in a cultural context otherwise known as ethnomathematics (Barton, 1996). D'Ambrosio (2001) defines ethnomathematics as mathematical activities practiced by identifiable cultural groups, meaning they relate to mathematical conceptions and techniques developed in different cultures to solve real-life problems. 
Mathematical concepts obtained from cultural environments and embedded through generations become one of the first steps in learning mathematics so that mathematics can be learned more easily by society. Ethnomathematics is formed from ways or habits that integrated with the traditions of the local community. Habits or ways that are done hereditary and have benefits for people's lives so that it is still maintained to this day (L. I. Putri, 2017). Mathematics has a special purpose as a subject that is favored by many students. So it takes the role of teachers with new breakthroughs in transferring knowledge. One of the breakthroughs is with creativity in the learning process that teachers pour in the form of Group Worksheets (LKK) Ethnomathematics-based Unquenching. Group Worksheet (LKK) Ethnomathematics-based Inquiry is a worksheet of research innovation to help the learning process of students in learning the transformation of geometry. The steps of inquiry learning with ethnomathematics approach are packaged in a Group Worksheet (LKK) that aims to help students think creatively in understanding the concept of geometry transformation. Creative thinking is an ability resulting from cognitive activities to get an idea in solving problems by collaborating concepts that have been authorized by smoothness, flexibility, novelty and elaboration (Jagom, 2015). Inquiry-based learning is a learning process that helps students to find a hypothesis or temporary conjecture about a new knowledge. The knowledge is then used by students to conduct an experiment until they find conclusions about the learning materials(Ministry of Learning, 2004). The enculture learning model allows for the cultivation of students' scientific prowess and creative thinking (Panjaitan et al., 2015). The process of discovery becomes very important in the process of inquiry learning. In line with Bruner's opinion, Bruner said that a learning process will go well and foster a creative attitude if the teacher gives students the opportunity to explore and discover concepts, theories, rules, or understanding concepts through the examples faced (Hasibuan &Amry, 2017). Rizky (2018) in his research said the development of ethnomathematics-based Mathematics E-Module began from the stage of definition where data was found that most of the teaching materials circulating in the community use content that is foreign to students, so that more effort is needed so that students are able to customize with the scheme of knowledge that has been owned so that it is compiled draft 1 ethnomathematics-based teaching materials that use cultural content around learners (Utami et al., 2018). Noor Aishikin Adam (2010) a Malaysian researcher in his research showed that the interaction between weavers and mathematicians succeeded in uncovering some perspectives that concern both sides. This is evidenced by innovative ideas developed by one of the weavers since participating in dialogues with mathematicians, as shown from his work double peak serving hood and triple peak (Adam, 2010). Diyarko (2016) in his research entitled Analysis of Literacy Skills that Mathematics Reviewed from Metacognition in Learning Inquisition Assisted Independent Worksheet Mailing Merge stated able to facilitate students to solve math problems and impact on complete mathematical literacy skills (Diyarko,2016). This suggests that ethnomathematics is a relatively new field of study and is supported by many researchers in the field of mathematics education (Cimen, 2014). 
Indonesia is one of the countries in Asia with a multi-cultural population (Haryanto et al., 2017). Culture is a typical way for humans to adjust to their environment, whereas mathematics manifests itself through human activity. This corresponds to Freudenthal's phrase, "mathematics as human activity" (Supiyati et al., 2019). Thus, ethnomathematics is a bridge that integrates mathematics with the ideas and cultural practices of a community (Barton, 1996). Cilacap is a district in Central Java that borders directly on the province of West Java, so it has a culture and characteristics that differ from other regions. One example is the patterns and artistry of the batik cloth produced there, including Cilacap batik motifs. UNESCO designated batik as a Masterpiece of the Oral and Intangible Heritage of Humanity on October 2, 2009, and Indonesia established October 2nd as National Batik Day as an expression of national pride in batik, which has gained world recognition as a heritage that should be developed. The motifs in Cilacap batik display a variety of mathematical concepts, especially the concept of transformation. The Wijaya Kusuma flower is one of the icons of Cilacap, and the Wijaya Kusuma batik motif is accordingly the most in demand. The Srandil motif is based on buildings that are part of the cultural heritage of Adipala sub-district. The Ngasem motif is taken from a kersen tree. These three icons were chosen by the researchers as learning media in this study. In addition, because Cilacap is located on the south coast, many motifs feature various marine animals and other marine biota. The batik is made using hand-drawn and stamped techniques. This indicates that mathematical concepts, especially transformation concepts, have indirectly taken root in society. METHODS The method used in this study is descriptive qualitative. The subjects of this research were 10 students of grade XI MIA at MA Al Mukhtar, with the study complying with the Covid-19 health protocol. The instruments or data collection tools in this study were (1) the Group Worksheet (Lembar Kerja Kelompok, LKK) and (2) questionnaires on students' responses to learning using the LKK with the ethnomathematics-based inquiry model. Both instruments were validated by one mathematics lecturer and two mathematics teachers. A well-directed LKK can support students' creative thinking, especially on geometry transformation material. The preparation of the LKK follows the steps of ethnomathematics-based inquiry, namely: (1) observation of problems, (2) formulating problems, (3) making conjectures, (4) designing experiments, (5) trials, and (6) drawing conclusions. The percentage of students' creative thinking at each meeting is categorized as in Table 1 (Arifani et al., 2015). The inquiry stages, the creativity indicators, and the corresponding student activities are as follows:
1. Presenting information - Fluency - Understand the teacher's explanation of the problems in the LKK
2. Orientation to the problem - Observing the problems/questions presented in the LKK
3. Formulating problems - Students discuss with their group
4. Collecting data - Flexibility - Finding solutions and ideas for a given problem
5. Testing hypotheses - Novelty - Planning a new strategy that students do not usually use to answer problems
6. Drawing conclusions - Presenting ideas, solutions, and problem solving
The data obtained in this study are the answers of each heterogeneously formed group at each step of ethnomathematics-based inquiry learning. The data are then analyzed based on the predetermined creativity indicators and converted into a percentage value.
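To make the conversion from rubric scores to the percentages reported in the next section concrete, a small sketch follows. The per-indicator maximum of 2 and the example score vector are inferred from the group tables reported below, so treat them as assumptions rather than data taken directly from the instrument.

```python
def creativity_percentage(scores, max_per_indicator=2):
    """Convert per-indicator group scores into a creative-thinking percentage."""
    return 100.0 * sum(scores) / (max_per_indicator * len(scores))

# Example: a plausible score vector for group 1 at meeting 1 (six inquiry steps,
# each scored 0-2), filled in to be consistent with the 91% reported in the text.
group1_meeting1 = [2, 2, 2, 2, 1, 2]
print(f"{creativity_percentage(group1_meeting1):.1f}%")  # 91.7%, reported as 91% (high)
```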
RESULTS AND DISCUSSION
In this research, the teaching and learning activities using the ethnomathematics-based Inquiry learning model were carried out in 3 groups. Each group worked on four Group Worksheets (LKK) covering four sub-topics: translation at meeting 1, reflection at meeting 2, rotation at meeting 3, and dilation at meeting 4. A well-directed LKK can support students' creative thinking skills, especially in the material on geometric transformation. The preparation of the LKK follows the steps of ethnomathematics-based inquiry, namely: (1) observing the problem, (2) formulating the problem, (3) making conjectures, (4) designing an experiment, (5) carrying out the trial, and (6) drawing conclusions. The LKK developed here is organised as follows. First, the opening page (cover) carries the LKK title and a column for the names of the group members, to be filled in by the group concerned. Second, the second page contains the identity of the LKK (title, education level, subject, class/semester, topic, sub-topic and time allocation), core competencies, basic competencies, indicators and learning objectives. Third, the third page gives the material title and instructions for using the LKK, including a "get to know" menu that provides background on ethnomathematics and leads into the ethnomathematics-based inquiry activities. Fourth, in the "problem observation" section students are asked to observe the problem in the LKK in groups; this is the first step of ethnomathematics-based inquiry learning. Fifth, in "formulating the problem" students are asked to write down what they obtained from the previous activity and to pose questions about the material being studied. Sixth, in the "making conjectures" section students are asked to hypothesise about the problem. Seventh, in the "designing the experiment" section students are asked to draw up the design of the experiment to be carried out. Eighth, in the "let's try it" section students apply what they have designed, testing it in the fields provided. Ninth, in the final step, "drawing conclusions", students are asked to state conclusions about the material studied. Tenth, the last page of the LKK contains exercises that must be solved by the group; these exercises aim to gauge how far students understand the concept studied. Each LKK is designed for one sub-topic per meeting, so this study produced four ethnomathematics-based inquiry LKK for four meetings. This process involves students in learning, formulating questions and investigating widely, and then builds students' creativity on the culture of their surroundings, so that they can express the relationship between mathematics and the culture around them. The culture referred to in this study is Cilacap batik. The researchers used three batik motifs at each meeting, distributed among the heterogeneously formed groups: the Wijayakusuma motif, the Ngasem motif and the Srandil motif, all three of which are characteristic designs of Cilacap district (figure: Wijayakusuma, Ngasem and Srandil motifs). Siswono (2008) stated that there are several categories of creative thinking ability. First, very creative: students are able to show fluency, flexibility and novelty in solving problems. Second, creative: students can show fluency and novelty, or fluency and flexibility, in solving problems.
Third, quite creative: students are able to show novelty or flexibility in solving problems. Fourth, less creative: students are able to show fluency in solving problems. Fifth, not creative: students are unable to demonstrate any of the three aspects of the creative thinking indicators (Lisliana et al., 2016). The success of students' creativity is judged from the answers of the heterogeneously formed groups in solving the problems given in the ethnomathematics-based Inquiry LKK on Cilacap batik. The performance indicator in this study was that at least 70% of students reached the creativity indicator for transformation, taking the level of creativity of the students to be equal to the level of creativity of their group. For the LKK results at the first meeting, the students' level of creativity is seen in the groups' answers while completing the steps of the ethnomathematics-based Inquiry LKK on Cilacap batik for the translation material, with the recorded descriptor scores (Group 1 / Group 2 / Group 3):
1. Understanding the teacher's explanation of the problems in the LKK: 2 / 2 / 2
4. Finding solutions and ideas for the given problem: 2 / 1 / 1
5. Solving the problem by designing a new strategy that students do not usually use: 1 / 1 / 1
6. Presenting the resulting ideas, solutions and problem solving: 2 / 2 / 2
From these scores it can be concluded that the percentage of creativity of the students in groups one and three reached 91%, and in group two 75%. Students and their groups were very enthusiastic about learning when given a real problem, which gave them a positive attitude towards completing the task. The answer of one of the groups at meeting 1, on the translation sub-topic, is shown in Figure 2 (Figure 2. Group 1 answers). From this figure it can be seen that the students first saw the concept of translation in the Wijaya Kusuma batik in a real setting and could then express the material they had received clearly. For the LKK results at the second meeting, the students' level of creativity is seen in the groups' answers while completing the steps of the ethnomathematics-based Inquiry LKK on Cilacap batik for the reflection material (Group 1 / Group 2 / Group 3):
1. Understanding the teacher's explanation of the problems in the LKK: 2 / 2 / 2
2. Observing the problems and questions presented in the LKK: 2 / 2 / 2
4. Finding solutions and ideas for the given problem: 2 / 1 / 2
5. Solving the problem by designing a new strategy that students do not usually use: 1 / 1 / 1
6. Presenting the resulting ideas, solutions and problem solving: 2 / 1 / –
At this second meeting the percentage for group one reached 91%, group two 75% and group three 83%. This indicates that groups one, two and three meet the prerequisite for stating that students' creativity can be seen through ethnomathematics-based inquiry learning. For the LKK results at the third meeting, the students' level of creativity is seen in the groups' answers while completing the steps of the ethnomathematics-based Inquiry LKK on Cilacap batik for the rotation material (Group 1 / Group 2 / Group 3):
1. Understanding the teacher's explanation of the problems in the LKK: 2 / 2 / 2
2. Observing the problems and questions presented in the LKK: 1 / 2 / 2
4. Finding solutions and ideas for the given problem: 2 / 1 / 2
5. Solving the problem by designing a new strategy that students do not usually use: 2 / 1 / 1
6. Presenting ideas, solutions and problem solving: 2 / 2 / 2
At this third meeting the percentage for group one reached 83%, group two 75% and group three 91%.
At this meeting students could answer and apply the concept of rotation to Cilacap batik. For the LKK results at the fourth meeting, the students' level of creativity is seen in the groups' answers while completing the steps of the ethnomathematics-based Inquiry LKK on Cilacap batik for the dilation material, with the descriptor scores (Group 1 / Group 2 / Group 3):
1. Receiving the teacher's explanation of the problems in the LKK: 2 / 2 / 2
4. Finding solutions and ideas for the given problem: 2 / 2 / 2
5. Solving the problem by designing a new strategy that students do not usually use: 2 / 1 / 1
6. Presenting the resulting ideas, solutions and problem solving: 1 / 1 / 2
At the fourth meeting the percentage for groups one and two reached 83%, while group three reached 91%. In the learning process each group was able to find the concept of dilation in Cilacap batik. This serves as an introduction for students to learning the concept of dilation itself, so that they understand dilation in a plane. In this study the researchers were able to show that students' creativity in understanding, answering and applying the concept of geometric transformation is high when the learning process uses the ethnomathematics-based Inquiry LKK. Creative thinking is rapidly becoming a common goal around the world, and fostering better creative thinking skills in students has become an important trend in educational reform (Sharma, 2014). Mathematics should therefore be studied systematically and regularly, presented in a clear order, and adapted to students' intellectual development and the prerequisite abilities they already possess. Ethnomathematics-based inquiry learning becomes an alternative for addressing the lack of creativity in learning a given piece of material. Similarly, the research by Diyarko and Budi (2016) showed that independent worksheets (Lembar Kerja Mandiri) with mail merge that pose problems can improve students' literacy skills. Inquiry learning is learning designed around teaching methods in which problems are identified, appropriate ways of reaching results are applied, and the extent to which students have achieved them is assessed (Novak & Krajcik, 2007). In line with this, the research of Agus Mardani and I Putu Artayasa (2020) found that students taught with the inquiry model have higher creative thinking skills than students taught with conventional learning (Artayasa, 2020). An ethnomathematics-based inquiry Group Worksheet (LKK) that poses geometric transformation problems not only lets students apply the concept of geometric transformation in calculations with formulas, but also gives problems that ask them to interpret parts of geometric transformation with real objects imbued with local wisdom. An important change in mathematics instruction needs to be made to accommodate the ongoing changes in student demographics in Indonesia (Rosa & Orey, 2011). The creative thinking that emerges can help students reduce the difficulties they face.
CONCLUSIONS AND SUGGESTIONS
From the results described above it can be concluded that at the first meeting the level of creative thinking of the students in the three groups was 85%, in the High category; at the second and third meetings the average level of creative thinking was 83%, in the High category; and at the fourth meeting the average level of creative thinking was again 85% (High).
2021-09-01T15:38:18.181Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "366962546576034a850ed21f8f047ca723d12736", "oa_license": "CCBYSA", "oa_url": "http://ejournal.ijshs.org/index.php/edu/article/download/218/175", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e2bf28fac050d64ce00764bcc4e39e1017eb1092", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Sociology" ] }
218658022
pes2o/s2orc
v3-fos-license
Increased Demodex Density in Patients Hospitalized for Worsening Heart Failure Infection is an important factor leading to the exacerbation of heart failure (HF), resulting in hospitalization. Demodex species are obligatory parasites in human skin, and increased density was reported in immunocompromised patients. In this study, we aimed to investigate the Demodex density in hospitalized HF patients compared to that of healthy controls. Methods: This study included 36 HF patients and 36 age and sex-matched healthy controls. Five standardized biopsies were taken from the face of participants and assessed for Demodex by a light microscope. Results: At least one Demodex mite was detected in 20 HF patients and nine of the control group. The number of Demodex mites was significantly higher in the HF group (median 1; min. 0 and max. 10) compared to the control group (median 0; minimum. 0 and maximum. 3). Demodicidosis was positive in 14 of the HF patients. Demodicidosis was not detected in the control group. Conclusions: This study showed that Demodex positivity is more common in HF patients hospitalized for HF exacerbation. Demodicidosis should be considered in hospitalized HF patients. Introduction Heart failure (HF) is defined by the presence of typical symptoms and signs caused by structural and functional cardiac abnormality, resulting in reduced cardiac output and/or elevated intracardiac pressures [1]. The prevalence of HF in the adult population is approximately 1-2% [2][3][4][5]. In the elderly population, heart failure is one of the most common causes of hospital admissions [6,7]. Infections are one of the leading precipitating factors that are responsible for exacerbation in HF patients. They have been shown to be associated with increased short-term mortality in HF patients compared to other precipitating factors, such as atrial fibrillation and hypertension [8][9][10]. Demodex folliculorum and Demodex brevis are obligatory parasites and usually found in the hair follicles and pilosebaceous glands of human skin [11,12]. They were commonly detected on the facial skin especially, forehead, cheeks, nose, and nasolabial fold [12,13]. Their incidence was reported to be as high as 93% [11]. Demodex mites could be detected in healthy individuals, but the density is low. Demodex mites are accepted as pathogenic when their number exceeds the five mites/cm 2 of skin [14,15]. Their growth might be facilitated by some local or systemic factors [16,17]. In the literature, increased Demodex densities were reported in immunocompromised patients with leukemia and acquired immune deficiency syndrome and also those under immunosuppressive treatments [17][18][19]. Besides the immunocompromised patients, increased Demodex density was also reported in patients with end-stage renal failure [20,21]. In this study, we aimed to investigate the Demodex density in hospitalized patients with HF exacerbation and compared those with age and sex-matched healthy controls. Study Population This study was conducted between January and March 2020. Among seventy-seven HF patients who were admitted to emergency and outpatient clinics for worsening heart failure, forty-one patients with malignancies, end-stage renal failure, facial erythematous lesions, whose ages were older than 75 years because of the high rate of reported Demodex positivity [22], and who were unable to consent or unwilling to participate were excluded from the study. 
The final study population included 36 HF patients (25 male, 11 female; mean age 67 ± 7 years) hospitalized for the exacerbation of HF and the control group consisted of age and sex-matched 36 healthy individuals (22 male, 14 female; mean age 64 ± 6 years). The institutional ethics committee approved the study protocol with code number OMU-KAEK-2020-32 on 16.01.2020. All participants provided written informed consent. This study was conducted in accordance with Declaration of Helsinki. Demodex Investigation Standardized skin surface biopsies (SSSBs) were taken from the forehead, nose, chin and, cheeks of all participants. For biopsy procedure, a drop of cyanoacrylate glue was put on a 1-cm 2 marked area of a slide. The glue-bearing side of the slide is applied over the skin for 30 s. Then, the slide was gently removed from the skin, 2-3 drops of immersion oil were applied. The slides were investigated for parasites under a light microscope at ×10 and ×40 magnifications. When at least one Demodex mite was detected, the test was accepted as positive. Demodicidosis was considered when five or more parasites in a 1-cm 2 area were detected [14]. Statistical Analysis The continuous variables with normal distribution were presented as mean ± standard deviation values, those without normal distribution as median (minimum and maximum). The categorical variables were presented as percentages. For the analysis of the normal distribution of the variables, the Kolmogorov-Smirnov and Shapiro-Wilk tests were used. The comparison of continuous variables with normal distribution, the student t-test was used. The Mann-Whitney U test was used to compare those without normal distribution. The Chi-square test and Fisher's exact test were performed for comparison of categorical data. All statistical analyses were performed using the SPSS version 20 (SPSS Inc, Chicago, IL, USA). All statistical tests were two-sided, and a p-value < 0.05 was accepted as statistically significant. Results A total of 36 patients with HF and age, sex-matched 36 healthy controls were included in this study. The etiology of HF was ischemic heart disease in 17 (47%) patients. The mean left ventricular ejection fraction (EF) of HF patients was 40% ± 14%. The types of HF were HFREF (Heart Failure with Reduced Ejection Fraction) in 19 (53%) and HFPEF (Heart Failure with Preserved Ejection Fraction) in 17 (47%) patients. The reasons for HF exacerbation were fluid retention due to noncompliance with medications and diet in 19 (53%) patients, infections most commonly pneumonia in seven (19%) and arrhythmias in six (17%) patients. The clinical and laboratory parameters of HF patients were presented in Table 1. Table 1. Baseline clinical and laboratory parameters of heart failure patients. The Demodex test was performed in both heart failure and age and sex-matched control groups. At least one Demodex mite was detected in 20 (56%) out of 36 HF patients and nine (25%) of control group (p = 0.008). (Figures 1 and 2) The number of Demodex mites detected was significantly higher in the HF group (median 1; min. 0 and max. 10) compared to the control group (median 0; min. 0 and max. 3) (p < 0.001). Demodicidosis, which is defined as the determination of five or more Demodex mites in a 1-cm 2 area, was positive in 14 (40%) HF patients. In none of the controls was demodicidosis detected (p < 0.001) ( Table 2). 
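For readers who want to reproduce the categorical comparisons above outside SPSS, the following sketch redoes them with SciPy using only the counts reported in the text (20/36 vs 9/36 for Demodex positivity; 14/36 vs 0/36 for demodicidosis). The per-subject mite counts are not published, so the Mann-Whitney U comparison of densities is not reproduced, and the choice of an uncorrected chi-square is an assumption made so that the result matches the reported p = 0.008.

```python
# Hedged re-analysis sketch of the 2x2 comparisons reported above,
# using SciPy rather than SPSS 20.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: HF patients, controls; columns: positive, negative.
positivity = [[20, 16], [9, 27]]        # at least one Demodex mite detected
demodicidosis = [[14, 22], [0, 36]]     # >= 5 mites per square centimetre

# correction=False (no Yates correction) is assumed here; with the
# correction the p-value is larger than the 0.008 quoted in the paper.
chi2, p_pos, dof, expected = chi2_contingency(positivity, correction=False)
print(f"Demodex positivity: chi2 = {chi2:.2f}, p = {p_pos:.3f}")

# With an empty cell, Fisher's exact test is the safer choice,
# consistent with the reported p < 0.001.
odds_ratio, p_dem = fisher_exact(demodicidosis)
print(f"Demodicidosis: Fisher exact p = {p_dem:.2e}")
```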
Discussion In this study, increased Demodex density was observed in patients who were hospitalized for HF exacerbation compared to a healthy control group. Additionally, demodicidosis was significantly more common in HF patients. Demodex positivity was reported in asymptomatic healthy individuals but in low densities. Mother-to-infant transmission occurs soon after birth and their number increases by puberty. With aging, the percentage of individuals who are infected increases, reaching a peak in the fifth and sixth decades. The prevalence of Demodex infestation is nearly 95% in individuals higher than 71 years. Demodicidosis is considered when they multiply and reach to the ≥5 mites/cm 2 of skin [14,15,22]. Although some local or systemic factors were suggested, the exact cause of this increase in density has not been clarified. Heart failure is usually seen in the elderly population. Although Demodex positivity is expected to be high in this age group, we found a significant increase in Demodex positivity and demodicidosis in HF patients compared to age and sex-matched healthy controls. Demodicidosis is common in immunocompromised patients. In acute lymphoblastic and myelocytic leukemia patients, demodicidosis was described and those taking cytosine arabinoside, daunorubicin, hydroxyurea and mitoxantrone treatments had the highest densities. [15,17,18,23,24]. Acquired immunodeficiency syndrome patients were also reported as having increased rates of demodicidosis. Therefore, the conditions and medications affecting humoral and cellular immunity might cause the proliferation of Demodex mites [17,25]. The activation or dysregulation of the immune system plays a major role in the development and progression of heart failure [26]. Therefore, the presence of demodicidosis in our heart failure patients could be related with this situation. Patients with systemic diseases were also evaluated for Demodex positivity. End-stage-renal failure patients were found to have increased Demodex mites reaching the mean number of 6/cm 2 [20,21]. In our study, the median number of Demodex mites was found to be 1; however, demodicidosis was detected in 39% of HF patients. Even though the median number is less compared to end-stage renal disease patients, the demodicidosis ratio is similar. These rates of Demodex positivity might support the potential shared mechanisms of immune system dysfunction in these patient groups. Uremia in chronic renal failure patients was proposed to cause changes in immune response, such as impaired neutrophil and lymphocyte functions [27]. Thus, immune dysfunction allows for the growth of obligatory mites like Demodex species. Immune system activation and inflammation are considered to play a significant role in the progression of HF [26]. In particular, discrepancies in lymphocyte, monocyte, eosinophil and mast cells have been recognized in high-risk HF patients. Decreased lymphocyte count was found to be a poor prognostic factor in hospitalized chronic HF patients [28]. Okamato et al. demonstrated that circulating T regulatory cells were decreased in decompensated HF patients. This situation was associated with inflammation and left ventricular dysfunction. Therefore, T regulatory cells might play a critical role in controlling inflammation via the suppression of cellular immune responses. They also found that these T cells were an independent predictor of recurrent hospitalization in HF patients [29]. 
Although the infections were the cause of exacerbation in 19% of heart failure patients in our study, Demodex positivity and demodicidosis were detected in 56% and 39% of patients, respectively. A possible dysfunction in the immune system of HF patients might cause an increase in local infections of obligatory microorganisms like Demodex mites. The small number of patients and being a cross sectional study without long-term follow-up are limitations of this study. Prospective studies with larger patient populations would give more powerful data regarding the importance of Demodex positivity in HF patients. Additionally, studies with long-term follow-up enable us to get more data regarding the Demodex positivity in HF patients. In conclusion, this study shows that Demodex positivity and demodicidosis were common in hospitalized HF patients compared to healthy controls. Demodicidosis should be considered in hospitalized HF patients. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
2020-05-17T13:03:39.204Z
2020-05-13T00:00:00.000
{ "year": 2020, "sha1": "9600e8ec27104232f3cf5b094355a709d044f1b7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4426/10/2/39/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b53aeec4525e6022b24f7aa98853b2f8687f77c0", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
118330275
pes2o/s2orc
v3-fos-license
Three Years of Mira Variable CCD Photometry: What Has Been Learned? The subject of micro-variability among Mira stars has received increased attention since DeLaverny et al. (1998) reported short-term brightness variations in 15 percent of the 250 Mira or Long Period Variable stars surveyed using the broadband 340 to 890 nm Hp filter on the HIPPARCOS satellite. The abrupt variations reported ranged 0.2 to 1.1 magnitudes, on time-scales between 2 to 100 hours, with a preponderance found nearer Mira minimum light phases. However, the HIPPARCOS sampling frequency was extremely sparse and required confirmation because of potentially important atmospheric dynamics and dust-formation physics that could be revealed. We report on Mira light curve sub-structure based on new CCD V and R band data, augmenting the known light curves of Hipparcos-selected long period variables [LPVs], and interpret same in terms of [1] interior structure, [2] atmospheric structure change, and/or [3] formation of circumstellar [CS] structure. We propose that the alleged micro-variability among Miras is largely undersampled, transient overtone pulsation structure in the light curves. Introduction From European Space Agency's High Precision Parallax Collecting Satellite, HIPPARCOS (ESA, 1997) mission data, deLaverny et al. (1998) discovered a subset of variables (15 percent of the 250 Mira-type variables surveyed) that have exhibited abrupt short-term photometric fluctuations, within their long period cycle. All observations were made in a broadband mode, 340 to 890 nm, their so-called Hp magnitude. They reported variation in magnitude of 0.23 to 1.11 with durations of 2 hours up to almost 6 days, preferentially around minimum light phases. Instrumental causes could not be identified to produce this behavior. Most of these variations are below the level of precision possible with purely visual estimates of the sort collected by AAVSO, but may contribute to some of the scatter in visual light curves. 51 events in 39 M-type Miras were detected with HIPPARCOS, with no similar variations found for S and C-type Miras. These short-term variations were mostly detected when the star was fainter than Hp = 10 magnitude including one star at Hp = 13 magnitude and one at Hp = 8.3. For 27 of the original 39 observations, the star underwent a sudden increase in brightness. From their study, deLaverny et al. found that 85% of these short-term variations were occurring around the minimum of brightness and during the rise to the maximum, at phases ranging from 0.4 to 0.9. No correlation was found between these phases and the period of the Miras, but that brightness variations do occur preferentially at spectral types later than M6 and almost never for spectral types earlier than M4. Similar results were reported by Maffei & Tosti (1995) in a photographic study of long period variables in M16 and M17, where 28 variations of 0.5 mag or more on timescales of days were found among spectral types later than M6. Schaeffer (1991) collected reports on fourteen cases of flares on Mira type stars, with an amplitude of over half a magnitude, a rise time of minutes, and a duration of tens of minutes. In analogy to the R CrB phenomenon, brightness variation could also be consequence of dust formation (fading) and dissipation (brightening) in front of a star's visible hemisphere. Future narrow band infrared interferometric observations will help resolve this. 
Recently, Wozniak, McGowen and Vestrand (2004) reported analysis of 105,425 I-band measurements of 485 Mira-type galactic bulge variables sampled every other day, on average, over nearly 3 years as a subset of the OGLE project. They failed to find any significant evidence for micro-variability, to a limit of 0.038 I-band events per star per year. They conclude that either Hipparcos data are instrumentally challenged, or that discovery is subject to metallicity or wavelength factors that minimize detection in Iband among galactic bulge objects. In contrast, Mighell and Roederer [2004] report flickering among red giant stars in the Ursa Minor dwarf spheroidal galaxy, including detection of lowamplitude variability in faint RGB stars on 10 minute timescales! However, Melikian [1999] provides a careful analysis of the light curves for 223 Miras based on Hipparcos data, finding that 82 stars [37%] show a post-minimum hump-shaped increase in brightness on the ascending branch of the light curve. Melikian advocates that differing physical processes and perhaps stellar propertiese.g. later spectral types, longer periods and higher luminosity -differentiate behaviors among these stars. The purpose of this report is to provide V and R-band photometry of objects related to the deLaverny et al. results, with dense temporal sampling. We find a similar lack of microvariability as noted by Wozniak et al., but do confirm facets of the Melikian report. This suggests that these phenomenon can be placed in a larger context of pulsational variations and episodic dust formation, with implications for ongoing spectroscopic and interferometric observations of mid-infrared studies of LPV stars. Observations and Data Reduction Our target list primarily was drawn from the objects listed by deLaverny et al. (1998), although limited to the northern sky. Of the 39 M type Mira's described therein, 20 are relatively bright and visible from the northern hemisphere. Because of the efficiency of automated sampling, we augmented this list with additional M type Miras and the brightest C, CS and S stars where one can obtain good signal to noise with low to moderate resolution spectroscopy on a small telescope. These stars and associated characteristics are detailed in Table 1. These along with a variety of brighter S and C type stars were also chosen. Brighter stars were chosen since they represented stars with magnitudes such that moderate resolution spectroscopy could be performed as part of the monitoring process. To accomplish this in a semi-automated manner, the telescope, camera and filter wheel are controlled by a single computer using Orchestrate software (www.bisque.com). Once the images are reduced, a script written by one of the authors (David Richards) examines the images performing an image link with TheSky software (www.bisque.com). The images obtained in this manner are stamped both with the name of the variable star, since this was how Orchestrate was instructed to find the object, and the position of the image in the sky. This allows TheSky to quickly perform the links with its USNO database. Once the astrometric solution is accomplished, the program reads through a reference file with the pertinent data such as reference star name and magnitudes along with variable star of interest. The input file is highly flexible, stars and filter magnitudes of reference stars can be added freely as image data require. 
This file only needs to be created once, especially convenient for a set of program stars, which will have continuous coverage over time. There is no need for entering magnitude information of reference stars in a repeated manner. The results file is readily imported to spreadsheet software, where the various stars and their magnitudes can be plotted, almost in real time. This is an important aspect of this project, the ability to see changes (flare-ups) quickly and as a result respond to these changes with spectroscopic observations. Photometry was conducted with an Astrophysics 5.1-inch f/6 refractor located in rural San Diego county, California, using an ST-10XME camera and 2x2 binned pixels and the Johnson V and R filters. Images were obtained in duplicate for each band and two reference stars used per variable star for analysis. Image reduction was carried out with CCDSOFT (www.bisque.com) and Source Extractor (Bertin and Arnout, 1996) image reduction groups and specially written scripts for magnitude determinations, which allowed for rapid, nearly real time magnitudes to be found (see below). The project has been underway since 2003 and involves a total of 96 stars, 20 M type Miras, 19 S types and the remainder C types. While there are certainly many more of these type stars, only those that had a significant part of their light curve brighter than visual magnitude 8 were considered, due to magnitude limitations in the spectroscopy part of the project. Fortunately, these stars are much brighter in the R and I bands, often by 2-4 magnitudes when compared to their V magnitudes, and many of the interesting molecular features are found in this region of the spectrum. The photometric analysis involves using 2 different reference stars. Their constant nature is readily discerned over the time period by the horizontal slope of their light curves, both in the V and R bands. After considerable effort, magnitudes are now determined at the 0.02 magnitude level. Thus any flare-ups in the range of 0.1 magnitude and brighter should be readily discerned. Early on it was felt that semi-automating the process was the best way to proceed. The use of a precision, computer-controlled mount (Paramount, www.bisque.com) along with the suite of software by Software Bisque got the project rapidly underway. TheSky in conjunction with CCDSOFT lends itself to scripting, and a script was put together that automated the magnitude determinations. To give an example of how this has streamlined the effort, on a typical night, initially using Orchestrate and later using a script developed by coauthor David Richards to control the telescope, camera and filter wheel, 40 stars, visible at the time, are imaged in duplicate in each of the V and R bands. This takes about 1 hour. Reduction of the images using image reduction groups in CCDSOFT takes another 5 minutes. The script that determines the magnitudes takes about 10 minutes to churn its way through all the images. Within another 20 minutes, the data, via spreadsheet, has been added to each variable stars growing light curve. Thus in less than 2 hours all of the program stars have been observed and their results tallied. Until more of program stars rotate into view, one is free to pursue spectroscopic examination of the program stars, establishing baseline observations. Another portion of this effort included standardizing the reference stars in each of the fields using the Landolt standards (Landolt, 1983). 
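The differential-photometry step described in this section can be summarised in a few lines; the sketch below is only illustrative (the project's actual pipeline is built from TheSky/CCDSOFT scripts and Source Extractor output), and the function names, counts and catalogue magnitudes are invented for the example. The idea is simply that the two constant comparison stars define a zero point that places the variable on the reference scale.

```python
# Illustrative differential photometry with two reference stars per field.
import math

def instrumental_mag(counts, exptime):
    """Instrumental magnitude from background-subtracted counts."""
    return -2.5 * math.log10(counts / exptime)

def differential_mag(var_counts, ref_counts, ref_catalog_mags, exptime):
    """Tie the variable star to the mean zero point defined by the
    reference stars (two are used per field in this project)."""
    zero_points = [cat - instrumental_mag(c, exptime)
                   for c, cat in zip(ref_counts, ref_catalog_mags)]
    zero_point = sum(zero_points) / len(zero_points)
    return instrumental_mag(var_counts, exptime) + zero_point

# Example with made-up counts for one V-band frame:
v_mag = differential_mag(
    var_counts=52_000.0,
    ref_counts=[180_000.0, 95_000.0],
    ref_catalog_mags=[9.12, 9.85],
    exptime=30.0,
)
print(f"V = {v_mag:.3f}")
```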
Once the reference stars have been standardized in this way, all previous and subsequent observations of the variable stars have their magnitudes expressed on the absolute Johnson-Cousins scale. Results & Analysis 3a. Evidence for flares? Three years of monitoring of 96 Mira-like variables, including all the northern objects from the deLaverny et al. (1998) sample, yielded no convincing flare detections; the statistics are summarized in the Conclusions. 3b. Light curve "bump" phenomena In contrast to high-frequency events like flares [hours or days], examination of the light curves for the stars observed between 2003 and 2006 revealed persistent low-frequency changes on timescales of weeks. Following the discussion of these by Melikian [1999], we label these "bumps" in the Mira light curves. Good examples are seen in the light curves of RT Boo, R CMi, X CrB, U Cyg, XZ Her [Fig. 2a], S Ser [Fig. 2b], RU Her, U Cyg and R Lyn. Some of these appear in visual light curves compiled by the AAVSO and AFOEV, but others occur at levels below the ~0.1 mag precision typical of visual observations; this is one of the important benefits of high-precision photometry. Most light curve bumps are nonrecurring and seem to appear after especially deep minimum light. The correlation of bumps with Mira properties deserves further attention. A few extremes in our sample are noted: double maxima in T Cam, S Cas, RR Her, S Cep, RS Cyg and Y Per, and two stars that show the bump feature post-maximum light, V CrB and T Dra. 3c. Period determination Period finding was performed with the PerAnSo software suite by Tonny Vanmunster [http://users.skynet.be/fa079980/peranso/index.htm], which reports best fits using the ANOVA method. Initial application shows agreement with literature periods for most variables, within a few percent and with errors in fit of 1 to 20 days, depending on the data interval and phase coverage obtained thus far. More work is in progress. Conclusions Among our conclusions based on V and R band measurements, with ~10 millimag precision, of nearly one hundred brighter Mira-type stars are: [1] flare events are rare, statistically similar to the OGLE result for I-band monitoring of 0.038 events per star per year, with some evidence that "flares" are bluer in color; [2] we are confirming indications of a correlation between the depth of minima and the occurrence of a "bump" or change of slope on the ascending branch of some light curves [cf. Melikian 1999]; [3] our coverage of approximately 3 cycles is sufficient to confirm the majority of previously published periods; [4] we hypothesize that bump phasing and contrast vary with internal structure and opacity, in analogy with similar phenomena among the "bump Cepheids", and deserve further study. Acknowledgements The authors wish to acknowledge the assistance of Thomas Bisque of Software Bisque for many useful discussions and help with the scripting, of the AAVSO for its compilation of variable star visual light curves referenced in this paper, and of the estate of William Herschel Womble for partial support of University of Denver astronomers participating in this effort.
2019-04-12T15:59:59.770Z
2007-04-20T00:00:00.000
{ "year": 2007, "sha1": "cc4fb7e6b67cf1279346d5cda4ad0e99635273fe", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "cc4fb7e6b67cf1279346d5cda4ad0e99635273fe", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
25555489
pes2o/s2orc
v3-fos-license
Cyclometalated Ir(III) complexes of deprotonated N-methylbipyridinium ligands: effects of quaternised N centre position on luminescence† A series of complexes of the type [Ir(C^N)2(N^N)]3+ (C^N = cyclometalating ligand; N^N = α-diimine) have been isolated and characterised as their PF6− and Cl− salts. Four of the PF6− salts have been studied by X-ray crystallography, and structures have been obtained also for two complex salts containing MeCN and Cl or two Cl ligands instead of N^N. The influence of the position of the quaternised N atom in C^N and of the substituents on N^N on the electronic/optical properties is compared with that in the analogous complexes where C^N derives from 1-methyl-3-(2′-pyridyl)pyridinium (B. J. Coe, et al., Dalton Trans., 2015, 44, 15420). Voltammetric studies reveal one irreversible oxidation and multiple reduction processes which are mostly reversible. The new complexes show intramolecular charge-transfer absorptions between 350 and 450 nm, and exhibit bright green luminescence, with λmax values in the range 508-530 nm in both aqueous and acetonitrile solutions. In order to gain insight into the factors that govern the emission properties, density functional theory (DFT) and time-dependent DFT calculations have been carried out. The results confirm that the emission arises largely from triplet excited states of the C^N ligand (LC), with some triplet metal-to-ligand charge-transfer (MLCT) contributions. In the context of luminescence, Ir(III) complexes with C^N ligands derived from pyridinium species were apparently unknown until we reported those of deprotonated 1-methyl-3-(2′-pyridyl)pyridinium (3,2′-C^N).45 The only previous account of structurally related species concerns catalytic studies with hydride complexes that are unsuitable for luminescence.46 A number of reports of cyclometalated complexes of N-methylbipyridinium species with Pd/Pt47 or Ru48 have appeared. The bright blue or blue-green emission and aqueous solubility of our Ir(III) 3,2′-C^N species45 suggest potential uses in highly efficient OLEDs and/or bio-sensing/imaging. Since the use of Ir(III) complexes in the latter context16-20,49 is often restricted by poor water solubility,19 increasing their charge from the usual +1 to +3 is beneficial. This structural novelty opens doors for designing further complexes of C^N ligands based on quaternised bipyridinium units with attractive electronic and optical properties. In the previously published complexes [Ir(C^N)2(N^N)]3+ (N^N = α-diimine), both the nature and the energy of the emission are strongly influenced by substituents on the ancillary N^N ligand.45 Density functional theory (DFT) calculations show that the blue emission observed when N^N = 2,2′-bipyridyl (bpy) or 4,4′-(tBu)2bpy is mainly triplet ligand-centred (3LC) with some triplet metal-to-ligand charge-transfer (3MLCT) character from C^N. On the other hand, the blue-green emission observed when N^N = 4,4′-(CF3)2bpy has 3L′C with some 3ML′CT nature, due to efficient inter-ligand energy transfer to N^N (L′).
In addition to changing the substituents on N^N, it is of interest to study the effects on the emission properties of varying the location of the quaternised N centre in the C^N ligand. Here, we present a series of new Ir(III) complexes related to those described recently, but with the C^N ligands derived instead from 1-methyl-2-(2′-pyridyl)pyridinium or 1-methyl-4-(2′-pyridyl)pyridinium. Using bpy, 4,4′-(tBu)2bpy or 4,4′-(CF3)2bpy as the ancillary ligand allows detailed comparisons and reveals the importance of the position of the quaternised centre in achieving blue emission. Materials and procedures The compound 1-methyl-3-(2′-pyridyl)pyridinium hexafluorophosphate and the complex salts 4P-6P and 4Cl-6Cl were synthesised as described previously.45 All other reagents were obtained commercially and used as supplied. Products were dried overnight in a vacuum desiccator (silica gel) prior to characterisation. In each case, the bold number refers to the complex cation, while the counter-anions are denoted by P for PF6− or C for Cl−. Steady-state emission and excitation spectra were recorded on an Edinburgh Instruments FP920 Phosphorescence Lifetime Spectrometer equipped with a 5 W microsecond pulsed xenon flashlamp. Lifetime data were recorded following excitation with an EPL 375 picosecond pulsed diode laser (Edinburgh Instruments), using time-correlated single photon counting (PCS900 plug-in PC card for fast photon counting). Lifetimes were obtained by tail fitting of the data, or by a reconvolution fit using a solution of Ludox® as the scatterer, and the quality of fit was judged by minimisation of the reduced chi-squared and the residuals squared. Quantum yields were measured upon excitation at 420 nm by using an SM4 Integrating Sphere mounted on an Edinburgh Instruments FP920 Phosphorescence Lifetime Spectrometer. Theoretical studies DFT and time-dependent DFT (TD-DFT) calculations were undertaken on the complex cations 1-3 and 7 by using Gaussian 09.53 Geometry optimisations of the singlet ground (S0) and first triplet excited (T1) states and subsequent TD-DFT calculations were carried out by using the M06 functional54 with the Def2-QZVP55,56 basis set and pseudopotential on Ir and Def2-SVP57 on all other atoms. MeCN was used as the CPCM solvent model.58,59 Using these parameters, the first 100 excited singlet states were calculated, and simulated UV-vis absorption spectra were convoluted with Gaussian curves of fwhm 3000 cm−1 by using GaussSum.60 Synthesis and characterisation The new complexes 1-3 and 7-9 (Fig. 1) were synthesised by following the procedure used previously for 4-6.45 The cyclometalated chloride-bridged dimeric intermediates were isolated in crude form only, but were identified by 1H NMR spectroscopy. Reacting these dimers with the appropriate N^N ligand in the presence of AgPF6 affords the hexafluorophosphate salts 1P-3P and 7P-9P, which were isolated after purification by column chromatography on Sephadex SP C-25. Yields are in the range ca. 30-60%. The chloride salts 1C-3C and 7C-9C were prepared in near-quantitative yields by anion metathesis with [NnBu4]Cl in acetone. The identities and purities of the new complex salts are confirmed by 1H NMR spectroscopy, elemental analyses and +electrospray mass spectrometry, and in several cases also by X-ray crystallography (see below). Portions of representative 1H NMR spectra for the bpy-containing complex salts 1P, 4P and 7P are depicted in Fig. 2, showing large changes as the position of the quaternised N atom varies.
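To make the spectrum-simulation step from the Theoretical studies subsection concrete, the sketch below shows how discrete TD-DFT transitions can be broadened into a UV-vis envelope with Gaussians of fwhm 3000 cm−1, as is done here with GaussSum. The two example transitions are loosely based on the calculated low-energy band of complex 1 near 421 nm; the oscillator strengths and the second band position are invented for illustration, and the absolute ε scaling is omitted, as in the paper's scaled simulated spectra.

```python
# Broadening of calculated transitions into a simulated UV-vis spectrum.
import numpy as np

FWHM_CM = 3000.0                                   # fwhm used with GaussSum
SIGMA_CM = FWHM_CM / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def simulated_spectrum(energies_cm, osc_strengths, grid_cm):
    """Sum one Gaussian per transition on a wavenumber grid; each band's
    height is taken proportional to its oscillator strength."""
    spectrum = np.zeros_like(grid_cm)
    for e0, f in zip(energies_cm, osc_strengths):
        spectrum += f * np.exp(-0.5 * ((grid_cm - e0) / SIGMA_CM) ** 2)
    return spectrum

# Illustrative transitions: ~421 nm (23750 cm-1) and ~318 nm (31450 cm-1).
energies = np.array([23750.0, 31450.0])
strengths = np.array([0.05, 0.12])                 # invented values
grid = np.linspace(15000.0, 40000.0, 2000)         # wavenumbers, cm-1
spec = simulated_spectrum(energies, strengths, grid)
wavelength_nm = 1.0e7 / grid                       # convert for plotting
print(f"strongest simulated band near {wavelength_nm[spec.argmax()]:.0f} nm")
```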
In order to further characterise the system, 13C NMR experiments were carried out for selected compounds 1P, 4P and 7P in CD3CN (see the ESI, Table S1†). The signals were assigned via HMQC and HMBC spectroscopy. As expected, the 13C NMR signals arising from the cyclometalating ring show the most variability as the position of the quaternised N atom changes. Notably, the cyclometalated carbon atom in 4P (175.08 ppm) is deshielded when compared with 1P (157.05 ppm) and 7P (161.66 ppm). This difference is attributed to the electron-withdrawing effect of the quaternised N atom located in the para-position in 4P. The observed low-field chemical shift is similar to those found for the carbenic carbon atom in related Ir(III) complexes of N-heterocyclic carbene ligands,49b,61-64 and gives an indication of the carbene-like character of the cyclometalated C atom in complexes 4-6. Structural studies Single crystals were obtained for solvated forms of 1P-3P and 7P, and also unexpectedly for the chloride complexes in 10P and 11P. Crystallographic data and refinement details are summarised in Table 1. Representations of the molecular structures of the complex cations are shown in Fig. 3, and selected distances and angles are presented in Table 2. The structures of solvates of 4P and 6P have been reported previously,45 and data are included here for comparison purposes. All of the tris-chelate complexes in 1P-4P, 6P and 7P exhibit a pseudo-octahedral geometry around the Ir centre with two cyclometalating bipyridinium ligands and one bidentate bpy, 4,4′-(tBu)2bpy or 4,4′-(CF3)2bpy ligand. The strong trans effects of the C-donor fragments affect the structures in two important ways. First, these units adopt a cis geometry, so the pyridyl rings of the C^N ligands are trans disposed. Second, the Ir-N distances to the N^N ligand are extended by ca. 0.07-0.09 Å when compared with those to the C^N ligand in all cases, except for one of the independent complexes in 4P. The Ir-N distances to the N^N ligand are not affected significantly by varying the R substituents. The bite angle of N^N is essentially constant at ca. 77°, while that of the C^N ligand ranges over ca. 82-89°. Inspection of the bond distances within the cyclometalating rings does not reveal any clear evidence for the quinoidal character in 4P or 6P that might be expected from the carbene-like nature indicated by other physical measurements. The structures broadly resemble those reported for related monocationic Ir(III) complexes, although the C^N bite angles are a little larger than those observed typically (ca. 80-81°).34,36,39,65 The fortuitously obtained structure of 11P is relatively unusual. Various chloride-bridged dimeric Ir(III) complexes with C^N derived from 2-phenylpyridine (ppy) or its derivatives have been characterised crystallographically.66 However, apparently the only reported structure of a related monometallic dichloride complex is that of cyclometalated 2-(2,4-difluorophenyl)pyridine, crystallised as a monoanion with the cation [Ir(C^N)2(bpym)]+ (bpym = 2,2′-bipyrimidine) containing the same ligand.67 The average Ir-Cl distance in this known compound, 2.493(4) Å, is slightly longer than the corresponding distance in 11P (2.464(1) Å, Table 2). Several neutral complexes Ir(C^N)2Cl(MeCN) have been reported previously.66a,68
Fig. 1 Structures of the studied tris-chelate complex salts and the bis-chelates characterised by X-ray crystallography only.
Electrochemistry The results of cyclic and differential pulse voltammetric measurements on the PF 6 − salts 1P-9P recorded in acetonitrile are shown in Table 3. Cyclic voltammograms of 1P-3P are shown in Fig. 4. All potentials are quoted in V with respect to the Ag-AgCl reference electrode. Each compound shows an irreversible oxidation process in the region ca.2.2-2.5 V, which might be formally assigned to an Ir IV/III couple.The relatively high potentials when compared with related complexes of ppy 6,[11][12][13][33][34][35][36][37][38][39]42 are attributable to the presence of the electron-deficient pyridinium units. DFT calcuations (see below) show that the C^N ligands contribute to the HOMO significantly.With a given bipyridinium isomer, the E pa value increases in the order N^N = 4,4′-( t Bu) 2 bpy < bpy < 4,4′-(CF 3 ) 2 bpy, showing a modest influence (130-190 mV) of the R substituents.As N^N is kept constant, the E pa value increases in the order C^N = 2,2′-≈ 4,2′-< 3,2′-, revealing that the complexes in which the quaternised N atom is located para to the Ir centre are the most difficult to oxidise (by 180-240 mV). The reductive regions include multiple processes (Fig. 4).For the 2,2′-C^N complexes in 1P-3P, four reversible waves are observed, corresponding with sequential one-electron reductions to give a monoanionic final product.The E 1/2 values are similar for the complexes of bpy (1) and 4,4′-( t Bu) 2 bpy (3), while those for the complex of 4,4′-(CF 3 ) 2 bpy (2) are lower by 60-420 mV, because the electron-withdrawing influence of the -CF 3 groups facilitates reduction.Similar behaviour is shown by the 4,2′-C^N complexes in 7P-9P, except that 7 and 8 display only three waves instead of four.For both 2,2′-and 4,2′-C^N series, the 1+/0 E 1/2 value shows the largest dependence on R, suggesting that this third reduction is localised on the N^N ligand.Therefore, the first and second reductions may be assigned to the C^N ligands.In contrast, the reductive behaviour of the 3,2′-C^N complexes in 4P-6P is much less well-defined, with most of the processes being irreversible.Also, sharp anodic return peaks are observed for 1 and 3, indicative of adsorption onto the electrode surface.The nature of these data preclude the discernment of any further trends. Electronic absorption spectroscopy The absorption spectral data for the PF 6 − salts of the new complexes 1-3 and 7-9 recorded in acetonitrile are shown in Table 4, together with those for 4-6. 45Corresponding data for the Cl − salts in water are collected in Table S2, † and spectra of 1P-4P and 7P are shown in Fig. 5.All of complexes 1-9 show intense bands below 320 nm which are assigned to π → π* and high energy MLCT transitions involving both the C^N and N^N ligands.Weaker bands are observed also, with tails extending up to ca. 480 nm in some cases.The lowest energy (LE) band is clearly blueshifted in the 3,2′-C^N complexes 4-6 (λ max for shoulders ≈ 350-360 nm) when compared with the new complexes 1-3 and 7-9 (λ max for shoulders ≈ 425-445 nm) (Fig. 5a).DFT shows that this band in 4-6 has 1 MLCT character with some 1 ML′CT and also 1 LL′CT contributions for 4 and 5 (L = C^N; L′ = N^N). 45The slight shifts (ca.0.1 eV) when R changes (Fig. 
5b) are consistent with the largely 1 MLCT assignment.The observed blue-shifts when varying the structure of the C^N ligands are attributable to destabilisation of their π* orbitals because the para-position of the quaternised N increases its neutral carbene character.Therefore, the energy gap of the MLCT transition increases in 4-6 with respect to their isomeric counterparts 1-3 and 7-9.The absorption spectra are almost unaffected by changing the counter-anions and solvent (Tables 4 and S2 †). Luminescence properties The emission spectral data for the PF 6 − and Cl − salts of the new complexes 1-3 and 7-9 recorded in deoxygenated and oxygenated acetonitrile or water are shown in Table 5, together with those for 4-6. 45Spectra of 1P, 4P and 7P are shown in Fig. 6, while those of the other compounds are in the ESI (Fig. S1 and S2 †).Changing the counter-anion and solvent has only slight effects on the excitation profiles that remain constant in all cases while monitoring at all the emission maxima.All of the spectra are structured, especially for complexes 4-6, 45 indicating significant ligand contributions to the luminescence.The profiles and emission energies of 1-3 and 7-9 are similar and red-shifted when compared to 4-6 (Fig. 6, S1 and S2 †).The importance of the position of the quaternised N is evident; when it is located meta to the cyclometalating C atom, green emission is observed (λ max = 508-530 nm), but when it is positioned para to the C, blue or blue-green emission arises (λ max = 468-494 nm).This difference is attributable to the almost carbene-like character and consequent relative orbital energies in 4-6.Various other types of Ir complexes emit in the green region, e.g.[Ir III (ppy-PBu 3 ) 3 ][PF 6 ] 2 [ppy-PBu 3 = cyclometalated 2-(5-tri-n-butylphosphoniumphenyl)pyridine] 69 and monocationic complexes of 4,4′-( t Bu) 2 bpy with -SF 5 substituents on the C^N ligands. 70he influence of the R substituents in the 2,2′-C^N (1-3) and 4,2′-C^N (7-9) complexes is small.The emission energies follow the expected trend R = t Bu < H < CF 3 , but with a difference of only ca.0.05 eV between the extremes (Table 5).This effect can be attributed to the stabilisation of the metal orbitals caused by placing electron-withdrawing groups on the ancillary ligand.These results suggest that in all of these new complexes the emission has a mainly 3 LC character involving the C^N ligand with some 3 MLCT contribution.By contrast, for the 3,2′-C^N complexes, such a situation pertains for 4 and 6, but 5 behaves differently and gives N^N-based emission. 45herefore, in that series the emission energy is decreased (and the bands red-shifted) for the -CF 3 derivative with respect to the other complexes. When keeping N^N constant, the quantum yields (Φ) of 1-3 and 7-9 are generally similar to, or a little larger than those of 4-6 (Table 5), showing that moving the quaternised N to a meta-position with respect to the cyclometalating C does not facilitate non-radiative decay.In contrast to the almost invariant emission energies, τ always increases and Φ increases in most instances (sometimes markedly) on moving from a PF 6 − salt in acetonitrile to its Cl − analogue in water.In all cases, both τ and Φ increase substantially on deoxygenation, consistent with emission originating from triplet excited states. 
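Although radiative and non-radiative rate constants are not tabulated here, they follow directly from the measured Φ and τ values for a phosphorescent emitter via k_r = Φ/τ and k_nr = (1 − Φ)/τ. The sketch below illustrates that arithmetic; the quantum yield and lifetime used are placeholders, not entries from Table 5.

```python
# Radiative and non-radiative rate constants from quantum yield and lifetime.
def rate_constants(phi, tau_s):
    """Return (k_r, k_nr) in s^-1, assuming emission from a single state."""
    k_r = phi / tau_s
    k_nr = (1.0 - phi) / tau_s
    return k_r, k_nr

phi, tau_us = 0.25, 1.8          # placeholder quantum yield and lifetime (us)
k_r, k_nr = rate_constants(phi, tau_us * 1e-6)
print(f"k_r = {k_r:.2e} s^-1, k_nr = {k_nr:.2e} s^-1")
```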
Theoretical calculations DFT and TD-DFT calculations have been carried out on the complexes 1-3 and 7 by using the M06 functional with the Def2-QZVP basis set on Ir and Def2-SVP on all other atoms, as used for 4-6 previously. 45he optimised structures in the ground state agree well with the data obtained from X-ray crystallography (see the ESI, Table S3 †).Selected MOs are depicted in Fig. S3-S6, † and the orbital compositions are shown in Table S4.† The frontier orbitals HOMO−2 to LUMO+2 are essentially invariant within the cations 1-3 and 7.The HOMO−1 and HOMO−2 are mainly centered on the Ir atom (67-72%) with some contribution from the C^N ligand (16-22%), while the HOMO has almost the same contribution from both fragments (Ir 47-55%; C^N 43-51%).The LUMO and LUMO+1 are based on C^N almost completely (96-98%), while the LUMO+2 is located on N^N (95-97%). The S 0 → S 1 transition energies calculated in acetonitrile and the corresponding major orbital contributions are presented in Table S5 (ESI †).The simulated absorption spectra for complexes 1-3 agree well with those measured (Fig. 7).The calculated wavelengths of the LE transitions 421 (1), 412 (2), 424 (3) and 416 nm ( 7) are slightly blue shifted with respect to the experimental λ max values (437 (1), 426 (2), 445 (3) and 435 nm (7); Table 4).In all complexes, this transition has almost pure HOMO → LUMO character (92-95%).Therefore, the LE bands can be assigned to a mixture of 1 LC and 1 MLCT (L = C^N).The observed modest dependence of the LE transition on the R substituents is reproduced by the calculations on 1-3, i.e.ΔE increases in the order R = t Bu < H < CF 3 .Comparisons with the data obtained for 4-6 45 show that the level of theory applied also predicts the red-shifts of the LE band observed on moving from the 3,2′-C^N complexes to their 2,2′-C^N counterparts 1-3 (and for 4 → 7); the calculated λ values are 347 (4), 342 (5) and 348 nm (6). For 1-3 and 7, the first computed transition involving mainly the N^N ligand (i.e. a dominant component to LUMO+2; in all cases HOMO−2 → LUMO+2 1 MLCT) lies to markedly higher energy when compared with the LE transition.The electronic influence of the R substituents is manifested in the predicted energies.The transition is at 3.90 eV (318 nm) for 1, 3.99 eV (311 nm) for 3 and 3.87 eV (320 nm) for 7, but 3.69 eV (336 nm) for 2 due to the stabilisation of the LUMO+2 by the -CF 3 groups. The nature of the luminescence was addressed by optimising the first triplet excited states T 1 of 1-3 and 7.The computed geometric parameters of these states are similar to those found for the corresponding ground states S 0 (Table S3, ESI †).The lowest energy emissions calculated in acetonitrile as ΔE(T 1 − S 0 ) at 531 (1), 527 (2), 538 (3) and 521 nm (7) are close to the experimental values (526 (1), 519 (2), 530 (3) and 520 nm (7); Table 5).Also, the calculations reproduce the experimental trend in the emission wavelengths (λ em 2 < 1 < 3).The spin densities for the T 1 state for all complexes (Fig. 
8) are located mainly on one C^N ligand, together with the Ir atom to a lesser extent.Therefore, the emissions can be assigned as 3 LC involving the cyclometalating ligand with some 3 MLCT contribution.These results are similar to those obtained for complexes 4 and 6, 45 showing that the only complex of the nine studied with a different nature of the emission is 5.In that case, the -CF 3 substituents stabilise the 3 L′C state, causing efficient inter-ligand energy transfer from the C^N to the emitting N^N fragment.In complex 2, the stabilisation due to the -CF 3 groups is insufficient to bring the energy of the 3 L′C below the 3 LC state. Conclusions We have synthesised and characterised a series of new Ir III complexes by using three different isomers of 1-methyl-(2′-pyridyl)pyridinium to generate cyclometalating ligands C^N. Because the latter are charge-neutral, adding an α-diimine ancillary ligand N^N affords species with a 3+ charge and unusually high solubility in water when isolated as Cl − salts.Such enhanced aqueous solubility increases the prospects for applications in bioimaging.The complexes are characterised fully as both their PF 6 − and Cl − salts, and in four cases, their structures are confirmed by single-crystal X-ray crystallography.Electrochemistry reveals for each complex an irreversible oxidation of the {Ir III (C^N) 2 } 3+ unit and multiple ligand-based one-electron reductions.The reductive behaviour is much better defined for the new 2,2′-C^N and 4,2′-C^N complexes when compared with their 3,2′-C^N counterparts, with up to four reversible waves being observed.Within each series, the redox potentials are affected significantly by varying the R substituents on the N^N ligand.All of the complex salts appear yellow coloured and their UV-vis absorption spectra show low energy bands in the region ca.350-450 nm.DFT and TD-DFT calculations using the M06 functional confirm that these are attributable to 1 MLCT transitions with some 1 LC, 1 ML′CT and also 1 analogues are reproduced theoretically.In contrast to the 3,2′-C^N complexes that show intense blue or blue-green emission, the new isomeric species are all green emitters.The emission is shown by DFT to originate from 3 LC excited states involving the cyclometalating ligand with some 3 MLCT contribution.Therefore, changing the N^N ligand has only a minor influence on the luminescence.Neither the absorption nor emission spectra show more than slight solvent dependence.In terms of future prospects, there is great scope for modifying the optical properties of these highly charged complexes, for example by changing the substituent on the quaternised N atom and/or attaching other groups to either the C^N or N^N ligands. 
1H NMR spectra were recorded on a Bruker UltraShield AV-400 spectrometer, with all shifts referenced to residual solvent signals and quoted with respect to TMS. Elemental analyses were performed by the Microanalytical Laboratory, University of Manchester, and UV-vis spectra were obtained by using a Shimadzu UV-2401 PC spectrophotometer. Mass spectra were recorded by using +electrospray on a Micromass Platform II spectrometer. Cyclic voltammetric measurements were performed by using an Ivium CompactStat. A single-compartment cell was used with a silver/silver chloride reference electrode (3 M NaCl, saturated AgCl) separated by a salt bridge from a 2 mm disc platinum working electrode and platinum wire auxiliary electrode. Acetonitrile was used as supplied from Sigma-Aldrich (HPLC grade), and [NnBu4]PF6 (Fluka, electrochemical grade) was used as the supporting electrolyte. Solutions containing ca. 1.5 × 10−4 M analyte (0.1 M [NnBu4]PF6) were deaerated by purging with N2. All E1/2 values were calculated from (Epa + Epc)/2 at a scan rate of 100 mV s−1. Steady-state emission and excitation spectra were recorded on an …

… and hydrogen atoms were included in idealised positions by using the riding model, with thermal parameters 1.2 times those of aromatic parent carbon atoms, and 1.5 times those of methyl parent carbons. The asymmetric unit of 1P•1.5Me2CO•0.25Et2O contains the complex cation, three disordered PF6− anions, an ordered acetone molecule, a disordered acetone at 0.5 occupancy and a disordered diethyl ether molecule at 0.25 occupancy. All non-H atoms were refined anisotropically, except for the disordered diethyl ether; some restraints were applied for the disordered atoms. H atoms were included in calculated positions, except those of the disordered diethyl ether, which were omitted. The asymmetric unit of 2P•2C4H8O2 contains the complex cation, three PF6− anions and two 1,4-dioxane molecules. The C-O distances in one 1,4-dioxane molecule had to be restrained to 1.4 Å and the displacement parameters for the six ring atoms were refined by using the RIGU and DELU commands. The asymmetric unit of 3P•4MeCN contains the complex cation, three PF6− anions, one of which is disordered, and four acetonitrile molecules, three of which are disordered. Restraints were applied to the F atoms of the disordered PF6−. Crystallographic data and refinement …

Fig. 2 Aromatic regions of the 1H NMR spectra of the PF6− salts of complexes 1 (blue), 4 (red), and 7 (green) recorded at 400 MHz in CD3CN. The asterisks denote the signals attributed via COSY studies to the protons of the N^N ligand.

Table footnotes: (a) Solutions ca. 1 × 10−5-2 × 10−4 M; ε values are the averages from measurements made at three or more different concentrations (with ε showing no significant variation). (b) Denotes the position with respect to the quaternised N atom of the N-coordinated 2′-pyridyl ring. (c) Data taken from ref. 45.

Fig. 7 M06/Def2-QZVP/Def2-SVP-calculated (blue) UV-vis spectra of (a) 1, (b) 2 and (c) 3, and the corresponding experimental data (green) for the PF6− salts in acetonitrile. The ε-axes refer to the experimental data only and the vertical axes of the calculated data are scaled to match the main experimental absorptions. The oscillator strength axes refer to the individual calculated transitions (red).
Fig. 8 M06/Def2-QZVP/Def2-SVP-calculated spin density plots for the T1 state of the complexes 1, 2, 3 and 7. In each case, the N^N ligand is pointing upwards.
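The calculated curves in Fig. 7 are produced by broadening the discrete TD-DFT transitions and scaling them onto the experimental absorbance axis, as described in the caption. Below is a minimal sketch of that kind of post-processing; the Gaussian width, the energy grid and the two example transitions (taken loosely from the wavelengths and energies quoted above for complex 1) are illustrative assumptions, not the parameters actually used by the authors.

```python
import numpy as np

EV_NM = 1239.84193  # hc in eV*nm, used to convert between energy and wavelength


def simulated_spectrum(transitions, sigma_ev=0.20, e_grid=None):
    """Broaden (energy_eV, oscillator_strength) pairs with Gaussians.

    Returns (wavelength_nm, intensity) arrays; intensities are in arbitrary
    units and would be rescaled onto the experimental epsilon axis, as in
    the Fig. 7 caption.
    """
    if e_grid is None:
        e_grid = np.linspace(2.0, 5.0, 1500)  # roughly a 250-620 nm window
    intensity = np.zeros_like(e_grid)
    for e0, f in transitions:
        intensity += f * np.exp(-0.5 * ((e_grid - e0) / sigma_ev) ** 2)
    return EV_NM / e_grid, intensity


# Illustrative input only: the LE (S0 -> S1) line of complex 1 near 421 nm
# plus the higher-energy HOMO-2 -> LUMO+2 transition quoted at 3.90 eV;
# the oscillator strengths here are invented for the example.
example = [(EV_NM / 421.0, 0.10), (3.90, 0.05)]
wavelength, intensity = simulated_spectrum(example)
print(f"simulated maximum near {wavelength[np.argmax(intensity)]:.0f} nm")
```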
2017-06-21T00:17:07.937Z
2015-11-24T00:00:00.000
{ "year": 2015, "sha1": "1281fd152aa318d811447bc3551d98966867b162", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2015/dt/c5dt03753k", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "4b4749be5f902f7e0efed504263061b702475d29", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
255344275
pes2o/s2orc
v3-fos-license
Targeting programmed cell death in metabolic dysfunction-associated fatty liver disease (MAFLD): a promising new therapy Most currently recommended therapies for metabolic dysfunction-associated fatty liver disease (MAFLD) involve diet control and exercise therapy. We searched PubMed and compiled the most recent research into possible forms of programmed cell death in MAFLD, including apoptosis, necroptosis, autophagy, pyroptosis and ferroptosis. Here, we summarize the state of knowledge on the signaling mechanisms for each type and, based on their characteristics, discuss how they might be relevant in MAFLD-related pathological mechanisms. Although significant challenges exist in the translation of fundamental science into clinical therapy, this review should provide a theoretical basis for innovative MAFLD clinical treatment plans that target programmed cell death. human MAFLD and liver fibrosis [17]. However, when CK-18 levels, histology and liver fat contents were studied for a large multi-ethnic group of people, it was found that the degree of MAFLD could not be determined based on CK-18 [18]. Despite this, it remains useful when combined with other noninvasive methods, particularly because its levels are not affected by confounding factors, including age and obesity. Thus, it has the potential for clinical applications: further research is needed on population factors and the severity of fibrosis, among other things, to establish parameters for its use as a biomarker of MAFLD [18]. The accumulation of excess saturated fatty acids leads to apoptosis through oxidative stress and endoplasmic reticulum stress [19]. Hypertriglyceridemia, which results in lipoapoptosis, is found in non-obese MAFLD patients [20]. In fact, lipoapoptosis induced by free fatty acids is the primary form of apoptosis in MAFLD. Lipoapoptosis enhances oxidative stress, which in turn increases the apoptosis, inflammation and fibrosis, creating a vicious cycle [21]. Egnatchik et al. found that primary hepatocytes and H4IIEC3 cells with palmitate and calcium chelators induced mitochondrial dysfunction by altering calcium homeostasis in the endoplasmic reticulum, which enhanced reactive oxygen species (ROS) production and apoptosis [22]. Lipoapoptosis and ERS are related, with the death receptor DR5 activated by ERSrelated protein expression (CHOP) protein or by further binding c-Jun N-terminal kinase (JNK) to activate Bim (also known as BCL2L11), PUMA and other proteins to further activate caspase 3 [23]. Wang et al. experimentally demonstrated that Asiatic acid (AAPC) inhibits the expression of CHOP, caspase 12, JNK and other proteins, thereby attenuating ERS, apoptosis and lipid metabolism disorders [24]. In a recent clinical trial aramchol, a stearoyl-CoA desaturase 1 (SCD1) modulator, was used to treat MAFLD subjects for 3 months. Patient liver fat content was reduced and the treatment was well tolerated. Aramchol may regulate SCD1 fatty acid enzymes, ultimately affecting ERS and apoptosis [25,26]. Recent studies showed that in a MAFLD mouse model, caspase 2 deletion reduced apoptosis and inhibited the profibrotic pathway, in turn inhibiting MAFLD progression [27]. Ferreira et al. evaluated subjects using the Kleiner-Brunt scale and found that the activation of caspases 3 and 2 and the DNA fragmentation in the liver of patients with severe MAFLD were significantly higher than in steatosis patients [28]. 
Intestinal barrier dysfunction in MAFLD leads to leakage of lipopolysaccharide (LPS) and disturbance of the intestinal flora, which may lead to excessive fat accumulation and lipoapoptosis [29]. Li et al. used LPS and free fatty acids (FFAs) to induce MAFLD in vitro and found that the treatment upregulated Bax and cleaved caspase 3 and 8, and downregulated Bcl-2, leading to apoptosis, which was TNF-α- and caspase-dependent [30] (see Fig. 2 for details).

MAFLD-related external death receptor pathway

Increased Fas expression is observed in patients with MAFLD [31]. Fas and FasL expressions and the rate of apoptosis are also higher in obese children with obstructive sleep apnea (OSA) than in obese children without OSA [32]. A study of MAFLD patients diagnosed by liver biopsy found that their CK-18 and TNF-α levels were higher than those of the control group [33]. The expressions of external death receptor pathway-related proteins are higher in MAFLD patients, and the use of relevant drugs improves disease-related indicators by inhibiting these proteins. Studies have found that transaminase levels, insulin resistance, adiponectin levels, liver fibrosis and steatosis improve in MAFLD patients when TNF-α levels are attenuated [34]. Furthermore, inhibition of Fas reduces hepatocyte apoptosis and reduces associated liver damage [35]. In addition, Kroy et al. found that deletion of c-Met in the liver cells of mice fed a methionine- and choline-deficient (MCD) diet led to upregulation of fatty acid metabolism genes. The increase in TUNEL-positive cells and superoxide anions aggravates the progression of MAFLD, while knockout of caspase 8 inhibits progression [36]. Interestingly, Physi et al. found that gastric bypass surgery (Roux-en-Y gastric bypass, RYGB) yields similar improvements in disease-related indicators as can be achieved with apoptosis-inhibiting drug therapy, and the levels of glucose-regulated protein-78 (Grp78), X-box binding protein-1 (XBP-1), spliced XBP-1, fibroblast growth factor 21 (FGF21), other ERS-related proteins and excessive apoptosis were all reduced compared to the control group [37]. Aerobic exercise improves MAFLD by inhibiting TNF-α and reducing ROS and cytochrome c levels [38].

MAFLD-related internal mitochondrial apoptosis

The internal mitochondrial apoptosis pathway is also activated by the abovementioned factors. Resistance to lipoapoptosis in MAFLD is partly due to the existence of a hedgehog autocrine survival signaling pathway [39]. Geng et al. used Smad4 knockout mice to demonstrate the protective effect of Smad4 deletion on MAFLD: it blocks the mitochondrial apoptotic pathway by inhibiting expression of the pro-apoptotic genes Bax and caspase 3 [40]. Disorders in glucose metabolism in MAFLD can easily lead to hyperinsulinemia and hyperglycemia. Hyperglycemia can cause increased oxidative stress and trigger mitochondrial dysfunction, including mitochondrial depolarization, cytochrome c release, and changes in Bax and Bcl-2 expression. Recent studies have shown that casein kinase 2-interacting protein-1 (CKIP-1) may regulate insulin signaling by inhibiting the phosphorylation of JNK1. The CKIP-1-deficient MAFLD model exhibits more severe fatty liver through an increase in phosphorylated JNK1, which further inhibits insulin receptor substrate-1 (IRS-1) serine phosphorylation and IRS-1 tyrosine phosphorylation, ultimately aggravating insulin resistance, hyperglycemia, apoptosis and MAFLD [41].
Cyanidin-3-O-β-glucoside (C3G) inhibits the caspase 3, caspase 9, Bax and JNK pathways by reducing hyperglycemia-induced oxidative stress and related mitochondrial disorders to improve MAFLD [42]. Other, similar studies indicate that intermittent hyperglycemia (IHG) in the setting of lipotoxicity may lead to oxidative stress and hepatocyte apoptosis by increasing mitochondrial permeability transition (MPT) and mitochondrial dysfunction, which promotes MAFLD [43]. MAFLD-related miRNAs regulate apoptosis Analysis of various MAFLDs have enabled researchers to determine multiple miRNAs with expression changes related to the condition. It is believed that there may be therapeutic potential in regulating apoptosis through relevant miRNAs [44,45]. Castro et al. identified a relationship between the miR-34a-SIRT1-p53 pro-apoptotic pathway and hepatocyte apoptosis: miR-34a, apoptosis and acetylated p53 levels increased in MAFLD livers, while the SIRT1 level decreased. This pathway is specifically regulated by ursodeoxycholic acid (UCDA) [46]. One study found that after treatment with creatine, the miR-34a-SIRT1-p66shc anti-apoptotic pathway may reduce high-fat diet or palmitic acid-induced increases in cleaved caspase 3, caspase 3 and caspase 9, thereby improving MAFLD [47]. MiR-296-5p and miR-615-3p regulate lipid-associated apoptosis levels in MAFLD by negatively regulating PUMA, a pro-apoptotic protein, and CHOP [48,49]. Notably, miR-21 may play a distinct role in MAFLD. Rodrigues et al. found that combined ablation of miR-21 with obeticholic acid improves MAFLD-related pathology, including steatosis, inflammation and lipoapoptosis [50]. Necroptosis Necroptosis was first observed in experiments by Ray et al. in 1996 [51]. It is characterized by morphological changes similar to those observed in necrosis, but is distinct in that it is a controllable form of death. Its morphological changes include organelle swelling, damage to the plasma membrane and release of cellular contents, which may lead to the occurrence of secondary inflammation [12]. Key molecules for necroptosis include mixed lineage kinase domain-like (MLKL), and RIP protein kinase family members RIPK1 and RIPK3. TNF is a key cytokine in inflammation and other aspects of biology. TNFR1 has been widely studied in regulating cell survival, apoptosis and necroptosis [52]. TNFR1-mediated signal transduction is an example of the molecular mechanism of necroptosis and the conversion between apoptosis and necroptosis (see Fig. 3 for details). MAFLD and necroptosis The destruction of membrane integrity during necroptosis releases various proinflammatory mediators and promotes the progression of MAFLD [53]. Ding et al. found that AS1842856 improves MAFLD by inhibiting forkhead box protein O1 (FOXO1), ERS and necroptosis [54]. TNF-α-mediated necroptosis is widely recognized as the most classical pathway. Both human MAFLD samples and experimental animal MAFLD livers can show elevated TNF-α [55] and TNFR1 [56] levels. In addition to TNF-α, other pattern recognition receptors, such as toll-like receptor 4 (TLR4), are also involved in the activation of necroptosis. In MAFLD, hepatocytes are exposed to a large number of TLR4 ligands, such as gut-derived LPS [57], which subsequently activate the TLR4 receptor. It activates RIPK3 and MLKL, which trigger necroptosis [58]. RIPK3 in MAFLD RIPK3 kinase is an indispensable component of necroptosis. 
Studies have shown that the expression levels of RIPK3 correlate with the sensitivity of cells to necroptosis and are very low in the liver under normal physiological conditions. This indicates that under normal circumstances, necroptosis does not frequently occur in the liver and may be a back-up programmed death regulatory mechanism when apoptosis fails [59][60][61]. Human MAFLD liver samples show strong upregulation of RIPK3 [62]. Gautheron et al. found that RIPK3 mediates MAFLD development through a positive feedback loop, with a shift in the form of programmed cell death to apoptosis exacerbating damage [64]. Saeed et al. also used a high-fat diet to induce a model of RIPK3 knockout, which aggravated liver steatosis but partially inhibited inflammation [65]. Due to the complexity of RIPK3, it seems that direct targeted inhibition of RIPK3 kinase is not an ideal strategy for MAFLD therapy. Further research is needed to clarify the role of RIPK3, and other key molecules should be examined as well, including RIPK1 and MLKL.

RIPK1 and MLKL in MAFLD

Studies have shown that RIPK3 self-oligomerization is sufficient to induce necroptosis. RIPK1 kinase acts as a positive regulator of RIPK3 by forming amyloid-like oligomers with RIPK3. It can also act as a negative regulator of RIPK3 in cells to promote cell survival [66]. Studies have shown that using RIPA-56, a potent inhibitor of RIPK1, improves the histological characteristics of MAFLD in HFD mice through MLKL, reducing liver inflammation, fibrosis and liver lipid accumulation. Its mechanism may involve ameliorating mitochondrial dysfunction and promoting β oxidation [67]. In addition, MLKL deficiency may be regulated by phosphatidylinositol (3,4,5)-trisphosphate (PIP3) in liver cells, which reduces insulin resistance and glucose intolerance. Paradoxically, it was found that neither MLKL nor RIPK1 inhibition reduced inflammation [68]. Saeed et al. found that patients with MAFLD exhibited increased MLKL levels compared to the non-MAFLD group and that MLKL −/− mice induced with a high-fat diet showed decreased transferase levels, triglycerides, MAFLD activity scores, steatosis score, inflammation, balloon degeneration, and expression of de novo lipogenesis (DNL) genes in the liver [69]. However, Suda found that RIPK1 antisense knockdown caused α-galactosylceramide-treated C57BL/6 mice to undergo large-scale apoptosis-type immune liver injury and lethality, which was not associated with the inhibition of NF-κB or necroptosis [70].

Fig. 4 The molecular mechanism of autophagy and its relationship with MAFLD. (1) Under normal conditions, mTORC1 phosphorylates and combines with ULK1/2, mAtg13 protein and FIP200 protein to form a functional silencing complex. When ER stress, oxidative stress, etc. activate autophagy, the functional silencing complex dissociates, and autophagosomes begin to recruit Atg protein, ULK1/2, mAtg13 and FIP200, to form a functional complex. (2) The autophagy-inducing complexes PtdIns3K class III, hVps34, Beclin1, and P150 are transported to the ER through microtubules to begin autophagic nucleation to induce the formation of restricted membranes. (3) Two ubiquitin-like conjugation systems, Atg12-Atg5-Atg16L and PE/LC3, form a complex and participate in membrane elongation and isolate membrane-formation events to finally close into a double-layer membrane vesicle structure. (4) The fusion of autophagosomes and lysosomes leads to cargo degradation. The degraded products (amino acids, etc.)
are then released into the cytosol through the lysosomal membrane permease, which produces a subsequent response that may be related to MAFLD-induced lipid accumulation, fibrosis and inflammation.

Most current research involves knockout or inhibition of key molecules of systemic necroptosis in disease models. Therefore, further studies on pure liver-specific knockout and immune system-related connections are needed.

Autophagy

The research history of autophagy spans several decades. It was first officially named autophagy by de Duve, who was building on earlier discoveries [71][72][73]. The name derives from the Greek roots phagía = eat or consume and auto = self [74]. Cell autophagy involves the formation of autophagosomes (bilayer membrane vesicles that originate as omegasomes on the endoplasmic reticulum) and autophagolysosomes. Their function is to recover and reuse unwanted cellular proteins [75,76], carbohydrates [77] and lipids [78]. Autophagy is an important form of death that controls the degradation of intracellular waste and efficiently recycles substances.

Molecular mechanisms of autophagy

Selective autophagy can be targeted to degrade functionally damaged or excessive organelles, microorganisms, lipids, etc. [79]. There are two key steps in the metabolic autophagy pathway in the liver: formation and degradation (autophagic flux). Three major forms of autophagy coexist: macroautophagy, chaperone-mediated autophagy and microautophagy. For more detailed molecular mechanisms, see the extensive reviews on autophagy [80,81]. Herein, we primarily focus on macroautophagy (hereinafter referred to as autophagy) and its selective forms (see Fig. 4 for details).

MAFLD and autophagy

Under physiological conditions, autophagy is primarily induced by starvation. It maintains the body's metabolic homeostasis by regulating biochemical reactions, such as gluconeogenesis and fatty acid oxidation [82]. The liver is one of the principal vital organs of metabolism. Studies show that starvation-stimulated autophagy in the liver peaks in just a few hours [83]. Autophagy is indispensable for liver metabolism and has been considered to have a protective effect on the liver. When mice lack key Atg proteins of autophagy, liver cells are more susceptible to damage [84]. MAFLD patients often have a high-fat diet. Short-term high-fat intake will activate autophagy to prevent the occurrence of lipotoxicity. However, long-term chronic high-fat intake restricts autophagosome and lysosome fusion, increasing the risk for MAFLD development [85].

Autophagy and lipid metabolism in MAFLD

The selective form of autophagy for fat is referred to as lipophagy [86]. Under physiological conditions, it primarily works through the synergistic effect of cytosolic and lysosomal lipases and the lipid droplet transport of RAB7 to engulf lipids and increase free fatty acid content. In pathological conditions, impaired autophagy can lead to fat accumulation. Ubiete-Franco et al. found that the levels of glycine N-methyltransferase (GNMT) in patients with MAFLD were reduced and that GNMT knockout mice exhibited increased levels of methionine and its metabolite S-adenosylmethionine (SAMe) along with inhibited autophagic flux through methylated PP2A, which may be one of the mechanisms leading to increased liver fat production [87]. Byun et al.
found that phosphorylation of Jumonji-D3 (JMJD3) at Thr1044 by FGF21 signal-activated PKA increases its nuclear localization and interaction with the nuclear receptor PPARα to transcriptionally activate autophagy genes, such as Tfeb, Atg7 and Atgl, causing lipolysis. It also reduces the liver expression of JMJD3, Atg7, LC3 and ULK1 in MAFLD [88]. Wang et al. found that formononetin causes lipophagy to reduce fat accumulation by activating AMPK and promoting nuclear translocation of TFEB in HFD mice [89]. Tang et al. found that osteopontin (OPN) was elevated in an MAFLD mouse model and that this, combined with integral αVβ3 and αVβ5, can reduce FFA-induced autophagy in HepG2 cells, leading to lipid accumulation, which is reversed by inhibiting OPN [90]. Some traditional Chinese herbal medicine seems to have advantages in treating MAFLD with autophagy as the target [91,92]. Ren et al. used catalpol, an iridoid glucoside derived from the rehmannia root, to improve hepatic steatosis in ob/ob fatty liver mouse models induced by a high-fat diet, and the authors speculated that catalpol's anti-fat denaturation may enhance nuclear translocation of TFEB through phosphorylation activation of AMPK [93]. However, there are also studies showing that impaired autophagy reduces fat production. For example, when autophagy in the mouse liver is impaired, triglyceride levels decrease and ketone body production is impaired [94]. Autophagy, metabolic stress and insulin resistance in MAFLD Obesity models often exhibit decreased expression of the ATG7 protein. Impaired autophagy is usually accompanied by ER stress and insulin resistance. The latter leads to elevated insulin, which will in turn aggravate autophagy dysfunction [97,98]. This may be controversial because Yan et al. found that chlorogenic acid inhibits autophagy through the JNK pathway to reduce insulin resistance and improve MAFLD [99]. Zhang et al. found that increased levels of P62 and LC3-II in experimental MAFLD mice indicate the inhibition of autophagy and cause increases in GRP78, PDI, p-PERK, p-eIF2a and eIF2a, indicating the emergence of ER stress [100]. Similarly, Lee et al. found that Eucommia ulmoides leaf extract improves both autophagic flux and HFD-induced steatosis in mice by inhibiting mTOR and the ER stress-related proteins PERK, p-eIF2a, GRP78 and CHOP [101]. Unconventional activation of sterol regulatory element-binding protein 2 (SREBP-2) leads to MAFLD cholesterol accumulation. Deng et al. found that inhibition of SREBP-2 reduces ERS by enhancing the LC3-II-to-LC3-I ratio, autophagic flux and lipolysis, and inhibiting PERK-P-EIF2α signaling [102]. However, Kim et al. reported that administration of lovastatin and ezetimibe increased SREBP-2 in HFD mice and resulted in the interaction of patatin-like phospholipase domain-containing enzyme 8 (PNPLA8) with LC3 to promote autophagy and reduce hepatic steatosis [103]. Additionally, Jiang Zhi granules (JZGs) inhibit palmitate-induced autophagosome flux damage and promote the fusion of autophagosomes and lysosomes to restore autophagy, protecting liver cells from oxidative stress damage and mitochondrial disorders [104]. Irbesartan inhibits PKC and activates AMPK and ULK1, increasing the number of autolysosomes and autophagosomes. Upregulation of the autophagy proteins Atg5 and LC3BII/I reduces lipid deposition, improving mitochondrial function and reducing ROS levels [105]. 
Autophagy and MAFLD-related liver fibrosis

Lipid bilayer stress can induce ER stress and control autophagy to mediate the steady state of the unfolded protein response (UPR) via the IRE-1-XBP-1 axis in MAFLD [106]. XBP1-mediated UPR activates HSCs to secrete collagen 1-α in a TGF-β-independent manner through autophagy. This effect is disrupted by inhibiting autophagy [107]. Transforming growth factor-β activated kinase-1 (TAK1) can be triggered in response to different cytokines, including IL-1, TNF-α and TGF-β. When TAK1 is activated, it enhances autophagic activity by inhibiting mTORC1 activity and via the AMPK pathway. The authors found that mTORC1 activity was enhanced, while AMPK and autophagy were inhibited, in TAK1-deficient mice, leading to spontaneous liver fibrosis and even cirrhosis. The inhibition of mTORC1 reversed the abovementioned phenotype [108]. It is useful to note that the role of autophagy in MAFLD-related fibrosis may have two sides. Autophagy can engulf lipid droplets in HSCs through lipid phagocytosis, providing energy substrates such as ATP for HSC transdifferentiation, which ultimately ameliorates liver fibrosis. Furthermore, autophagy-deficient cells exhibit lower HSC activation rates [109,110].

Others

In addition to the three PCDs mentioned above, there may be other PCDs in MAFLD. Because the overall experimental evidence on their potential functions in MAFLD seems not so advanced, we recommend that researchers continue to conduct in-depth investigations of these pathways.

Pyroptosis and MAFLD

Pyroptosis is a newly discovered form of programmed death that is characterized by cellular content release mediated by caspases. It was officially named pyroptosis from the Greek roots pyro and ptosis to reflect its proinflammatory activity [111]. The morphological characteristics of pyroptosis are distinct from other forms of programmed death. It involves rapid rupture of the plasma membrane and the release of proinflammatory intracellular contents. Nuclear DNA lysis also occurs in pyroptosis, similarly to apoptosis [112]. Pyroptosis involves caspases 1, 4, 5 and 11 sensing different pathogen-associated molecular pattern (PAMP) or damage-associated molecular pattern (DAMP) activation and forming a pore-like structure on the cell membrane. Leakage of inflammatory mediators, such as IL-18/1β, leads to cell lysis and death [113]. Pyroptosis may occur through the classic caspase 1-dependent pathway [114] and the nonclassical caspase 4/5 (mouse caspase-11) pathway [115]. Its key molecules include NOD-LRR and pyrin domain-containing 3 (NLRP3); gasdermin D (GSDMD); and caspase 1, 4, 5 or 11 (see Fig. 5 for details). Recent research suggests that pyroptosis represents a key link to MAFLD [116]. Zhong et al. found significant pyroptosis in a mouse model of MAFLD induced by HFD and a MAFLD model in liver cells induced by FFA. Genipin (GNP) reduced the expression of pyroptosis-related genes and the release of lactate dehydrogenase by inhibiting uncoupling protein-2 (UCP2). Overexpression of UCP2 upregulated the degree of pyroptosis. Finally, they proved that GNP, a natural water-soluble cross-linking agent, alleviates MAFLD by inhibiting UCP2-mediated pyroptosis [117]. Inhibiting activation of the NLR family to mediate pyroptosis may be a potential therapy for MAFLD. A large induction of NLRP3 activation promotes MAFLD [116]. Studies have shown that berberine has a therapeutic effect on MAFLD through inhibition of the ROS-TXNIP axis, NLRP3, caspase 1 activity, and GSDMD-N expression [118].
In addition to NLRP3, the NLR family member NOD-LRR and pyrin domain-containing 4 (NLRP4) is also related to MAFLD. Recently, Chen et al. constructed a MAFLD cell model in vitro and found that NLRP4 is regulated by TNF-α levels and can be ectopically transferred to the mitochondria after activation by free fatty acids, with the rest of the process being similar to NLRP3. Caspase 1 activation and cleavage induce the expression of IL-18 and IL-1β, eventually leading to pyroptosis and the release of proinflammatory cytokines [119]. It is worth noting that the IL-18 and IL-1β released during pyroptosis in MAFLD seem to be a double-edged sword [113]. Studies by Yamanishi et al. suggest that IL-18 is essential for normal lipid metabolism in the liver and that its deficiency causes fat accumulation [120]. However, IL-1β is considered a pro-inflammatory cytokine that promotes the development of MAFLD. Studies have shown that GSDMD and GSDMD-N levels are higher in human MAFLD liver samples and are related to the MAFLD activity score (NAS) and fibrosis, while GSDMD knockout in the MCD diet-induced MAFLD model relieves MAFLD and fibrosis by reducing NF-κB activation and proinflammatory cytokines, such as TNF-α, IL-1 and MCP-1 [116]. Ezquerro et al. found that inhibition of TNF-α-induced apoptosis, autophagy and pyroptosis may exert protective effects against MAFLD [121]. Wang et al. discovered that GSDME converts caspase 3-mediated apoptosis into pyroptosis [122].

Ferroptosis and MAFLD

Ferroptosis was first observed in experiments by Dolma et al. in 2003 [123]. It was not until 2012, in experiments by Dixon et al., that the oncogenic RAS-selective lethal small molecule erastin was shown to induce this form of death, which is distinct from apoptosis [124]. Morphologically, ferroptosis is characterized by cytological changes that include reduced cell volume and increased mitochondrial membrane density [125]. It is characterized by iron dependence and lipid peroxidation. It involves the pharmacological cross-regulation of l-glutathione (GSH) and glutathione peroxidase 4 (GPX4). The classical pathway primarily includes enzymatic reactions (the lipoxygenase pathway) and nonenzymatic reactions (the Fenton reaction) [126]. System xc− regulates GSH production through glutamate/cystine transport into and out of the cell. GPX4 is one of the GPXs with antioxidant activity. It can remove lipid peroxides formed by polyunsaturated fatty acid phospholipids with the help of system xc−-regulated glutathione content, thereby reducing lipid peroxidative damage in the cell membrane [127,128] (see Fig. 6 for details). Therefore, classic ferroptosis inducers (FINs) are divided into two categories. Class I inducers are primarily system xc− inhibitors, which are characterized by reduced synthesis of GSH and exhaustion of GSH content, leading to damage to the antioxidant system, peroxidative damage and ferroptosis [129]. Class I includes erastin and SAS. Class II inducers, such as RSL3 and ML210, directly inhibit GPx4 activity to trigger ferroptosis [130]. The two primary regulatory pathways of ferroptosis are the mevalonate pathway (which primarily regulates GPX4 through isopentenyl pyrophosphate (IPP), which stabilizes the selenocysteine-specific tRNA) [131] and the sulfur-transfer pathway (which regulates the body's methionine and sulfur-containing amino acid levels to ensure conversion to cysteine to synthesize GSH to help GPX4 regulate ferroptosis) [132], although there are others [133].
More detailed molecular mechanisms can be found in reviews on these topics [133]. Whether ferroptosis occurs depends on the balance between the accumulation of iron and ROS and the body's antioxidant system. When hydrogen atoms are extracted from unsaturated fatty acids, lipid peroxidation reactions begin, including a destructive chain reaction that produces heterogeneous groups of lipid peroxides [134], eventually resulting in cell dysfunction and the production of malondialdehyde (MDA) and 4-hydroxy-2-nonenal (4-HNE). 4-HNE was noted in the cytoplasm to disrupt liver cells in MAFLD patients through the Fenton reaction [135]. MDA and 4-HNE are elevated in 90% of MAFLD patients [136]. Imai et al. suggested that GPX4 antioxidant properties may play a key role in MAFLD [137]. Carlson et al. found that mice with specific deletion of the GPX4 gene in hepatocytes died during the embryonic stage and had extensive hepatocyte degeneration [138]. Combined with the results of studies by Kim et al., who found high regulation of Gpx4 in the liver [139], these findings suggest a correlation between ferroptosis and MAFLD. In addition, the use of RSL-3, a GPX4 inhibitor, may affect lipid peroxidation in the liver through a 12/15-Lox-AIF-related pathway. 12/15-LOX activation aggravates endoplasmic reticulum stress, inflammation, liver steatosis, and liver damage, and MAFLD is improved by the use of iron chelators [140]. Studies have found that liver iron deposition in MAFLD patients positively correlates with histological severity and can lead to the development of MAFLD [141]. Both lipid peroxidation and excess iron deposition may exacerbate MAFLD through ferroptosis. Fe 3+ found in people's daily diets is reduced to Fe 2+ and then absorbed by the divalent metal transporter 1 (DMT-1) protein in the small intestine and stored in the intestinal cells or exported at the basolateral side of the cell. It is subsequently transported out through ferroportin and further reoxidized to Fe 3+ by hephaestin [142]. In this process, hepatocytes control the production of hepcidin by sensing the iron content in the body. Hepcidin can reduce the expression of DMT-1 and thus reduce the intestinal absorption of iron [143]. In patients with MAFLD, altered hepcidin levels in the serum and in white fat were accompanied by upregulation of DMT-1 [144]. Upregulation of transferrin receptor 1 (TfR1) was also detected in a fatty liver mouse model [145]. TfR1 is a receptor that binds transferrin and is expressed on activated hepatic stellate cells (HSCs). It aids ferroptosis by increasing iron intake and reducing iron output through transferrin [146]. Excessive iron deposition promotes liver fibrosis through lipid peroxidation. Ramm et al. identified a relationship between iron load and the activation of HSCs. Iron loading increased the numbers of activated HSCs and collagen deposition [147].
Table 1 The relationship between cell death and MAFLD
- Apoptosis: Intestinal barrier dysfunction, oxidative stress, and ER stress lead to lipoapoptosis and activate the external death receptor pathway and the internal mitochondrial pathway; glucose metabolism disorder activates the internal mitochondrial pathway and is regulated by several miRNAs.
- Necroptosis: Oxidative stress and intestinal barrier dysfunction trigger TNF-α-mediated necroptosis; the key molecule RIPK1 is involved in the regulation of RIPK3 function and in the mutual transformation with apoptosis; the interaction between RIPK3 and JNK is involved in disease progression, although the specific role is not clear; inhibition of MLKL improves insulin resistance, regulates fat metabolism, etc.
- Pyroptosis: Intestinal barrier dysfunction, ER stress, and oxidative stress all activate the assembly of NLRP3, and the secretion of the inflammatory factors IL-1β and IL-18 leads to pyroptosis.
- Ferroptosis: Imbalance in the intracellular antioxidant system caused by excessive iron deposition and oxidative stress leads to disorders of the ferroptosis regulatory system and further affects lipid accumulation, inflammation, liver fibrosis, etc.
- Autophagy: Autophagy affects insulin resistance, fat metabolism, inflammation, liver fibrosis, etc. by regulating ER stress, oxidative stress, etc.

Crosstalk between different PCDs in MAFLD

As mentioned earlier, apoptosis and necroptosis can transform into each other. Wang et al. discovered that GSDME converts TNF-α-induced apoptosis into pyroptosis [122]. Bcl2 anti-apoptotic proteins can inhibit autophagy, whereas other pro-apoptotic proteins can promote autophagy [148]. Mitophagy is a form of autophagy that selectively clears damaged mitochondria. When mitophagy is damaged or inhibited, the failure to clear damaged mitochondria results in mitochondrial dysfunction, which produces excessive ROS and causes NLRP3-dependent activation of pyroptosis. This promotes the production of pro-inflammatory factors, creating a pro-inflammatory environment [149]. The activation of autophagy can be considered anti-inflammatory, possibly through inhibition of the activation of NLRP3 [150] and control of mitochondrial homeostasis [151]. Qiu et al. found that arsenic trioxide (AsO) induces MAFLD in mice, accompanied by NLRP3 activation, autophagy and increased lipid accumulation. However, supplementation with taurine (Tau) reduced MAFLD levels, possibly by reducing CTSB-dependent NLRP3 activation and pyroptosis [152]. In addition, lipid peroxidation acts as a bridge between autophagy and ferroptosis [153]. Park et al.
found that autophagy induces ferroptosis by degrading ferritin and inducing TfR1 expression; that the ferroptosis inducers erastin and RSL3 promote the assembly of autophagosomes and autophagy; and that the inhibition of autophagy can induce intracellular iron consumption, reducing the occurrence of lipid peroxidation and ferroptosis [154]. However, Takamura et al. used autophagy-deficient mice to demonstrate that direct inhibition of autophagy leads to tumorigenesis [155]. Therefore, it may not be a useful therapeutic approach to directly inhibit autophagy. It may be better to indirectly regulate the related signaling cascade of upstream pathways, such as the AMPK/mTOR-related signaling pathway.

Future directions: drugs targeting PCD to treat MAFLD

The pathogenesis of MAFLD is very complex, and thus far, there are no FDA-approved drugs on the market to treat this condition. Some fundamental molecular experiments have shown different degrees of therapeutic effects in MAFLD by targeting PCD. Further conversion into clinical applications is urgently needed. The most widely used apoptosis-related pharmacological inhibitors, such as IDN-6556 and GS-9450, are being used in current clinical trials, and exciting results have been obtained (see Table 1 for details). PCD-related biomarkers, targets and candidate agents that have been explored include:
- Cytokeratin-18: alone it is not enough as a valuable biomarker, but it may have clinical significance when combined with other non-invasive methods, such as serum adiponectin, serum resistin and uric acid (NCT01068444) [18,164,165].
- Fas/TNF-α: inhibitors of death receptor-associated proteins, including pentoxifylline and YLGA (Tyr-Leu-Gly-Ala) peptides [35,166].
- Caspase enzymes: inhibitors of apoptosis-related caspase enzymes, including VX-166, GS-9450 (NCT00740610), PF-03491390 (NCT02077374), and emricasan (NCT02686762, NCT02077374) [34,156,157,167].
- R-3032: a CTSB inhibitor [168].
- Aramchol: inhibits the liver enzyme stearoyl coenzyme A desaturase (SCD) [25].
- Selonsertib: an ASK1 inhibitor [169].
- The farnesoid X-activated receptor (FXR): its agonist (NCT01265498) can improve serum transaminase levels in patients with MAFLD [170].
- Thioredoxin (TRX): oxidative stress can lead to a variety of PCD; TRX is induced by oxidative stress and, compared with healthy controls, its level rises significantly, but whether it can be used as a biomarker still needs further research [171].
- XIAP: the XIAP antisense oligonucleotide (AEG35156) increases progression-free survival and overall survival [172].
- RIPK1/3: although RIPK1/3 inhibitors exist (GSK'840, GSK'843, GSK'872), different preclinical studies suggest that directly targeting RIPK1/3 is not a better treatment for MAFLD, and further studies are needed [173].
- MLKL: theoretically, the MLKL inhibitor necrosulfonamide may improve MAFLD, but evidence from well-designed clinical studies is still needed [173].
- NLRP3 inflammasome: inhibiting the activation of the inflammasome in MAFLD is an innovative treatment method, including an inhibitor of the NALP3 inflammasome (glyburide), a caspase 1 inhibitor (pralnacasan), and an IL-1β antibody (canakinumab) or endogenous IL-1β inhibitor (anakinra) [174].

Twenty-eight days of treatment with emricasan significantly reduced levels of ALT and CK-18, showing good safety and tolerance [156]. In addition, patients have been recruited for the phase IIb clinical trial of emricasan for MAFLD, with the primary outcome being improvement in fibrosis without MAFLD worsening (NCT02686762).
The latest announcement said there was no statistically significant difference between all treatment groups and the placebo group (Table 2). Although the results of emricasan clinical trials for MAFLD are not ideal, it is undeniable that, given the cascade nature of caspase signalling and the interaction between different forms of PCD, future research on direct pharmacological inhibitors targeting apoptosis may focus more on inhibiting multiple targets or multiple PCD-related targets. A feasible strategy may be the selection of appropriate biomarkers, and then, based on these, accurate and specific selection of drugs to achieve personalized treatment. Another effective method may be targeting pathways that regulate PCD, like AMPK/mTOR. In addition, the design of MAFLD clinical trial schemes, with unified diet and exercise management of subjects, accurate selection of clinical endpoints and so on, may improve the success rate of clinical trials. In the MAFLD models induced by methionine-choline-deficient diets (MCDs) and high-fat diets (HFDs), apoptosis increased significantly, but VX-166 reduced caspase 3- and TUNEL-positive cells and decreased inflammation and other related indicators [157]. The caspase inhibitors IDN-6556 and PF-03491390 reduce transaminase activity in patients with HCV [34,158]. Pan-caspase inhibitors are currently available to study the clinical inhibition of apoptosis, whereas pharmacological targets for necroptosis are needed. Research shows considerable liver protection in MAFLD in the Ripk3 −/− genotype or with the use of dabrafenib, which inhibits RIPK3 kinase activity [159]. In addition, most pharmacological inhibitors that target pyroptosis concentrate on NLRP3, which reduces pyroptosis and the release of IL-1β and IL-18 and may have an improved effect on MAFLD. Inhibitory strategies for ferroptosis include directly inhibiting lipid peroxidation. Representative radical-trapping antioxidants (RTAs) include α-tocopherol (α-TOH) [160], ferrostatin-1 (Fer-1) [161] and liproxstatin-1 (Lip-1) [161]. The other method is to enhance Gpx4 activity by inhibiting ACSL4 and preventing activation of PUFAs for esterification to lysophospholipids by LPCAT3 [162], or by supplementing with nonoxidizing fatty acids, such as D-PUFA or MUFA [163]. It is worth noting that the role of autophagy in MAFLD seems to require further research. However, most current research drugs have both advantages and limitations for preclinical trial interventions and animal model studies of MAFLD. There are currently MAFLD models including dietary models, like HFD, MCD and choline-deficient l-amino acid-defined (CDAA) diets; genetic models; and chemical models, like CCl4 and tetracycline. These can be easily analyzed to a certain extent and may partially reflect the disease characteristics and drug efficacy for MAFLD. However, due to the multiple complex pathological factors of MAFLD, these confounding factors are difficult to reproduce in animals. At the same time, there are some structural differences between animals and human beings and great differences in their MAFLD characteristics and drug response. Therefore, to assure the feasibility, safety and efficiency of therapy, the development of novel drugs for the treatment of MAFLD to target PCD should be mainly based on clinical experimental studies with better experimental designs, as well as disease models for preclinical studies that can truly reflect the characteristics of human MAFLD.
Conclusions

Over the past 10 years, our understanding of various types of programmed cell death has changed dramatically. Under normal physiological conditions, the various forms of programmed death play unique roles in maintaining the steady state of the normal body. However, when one or more of these processes is disrupted, disease may occur. The pathogenesis of MAFLD is very complex. A variety of pathological factors, including oxidative stress and endoplasmic reticulum stress, activate various forms of programmed cell death and play key roles in this process. In other words, various pathogenic factors activate cell death programs. Therefore, the future direction of innovative treatment for MAFLD should include directly targeting activated cell death programs to improve disease, using compounds such as pan-caspase inhibitors (see Fig. 7 for details). Here, we primarily reviewed and summarized the possible forms of programmed death in MAFLD, but there are still many issues that need to be addressed: for example, the specific interactions between the various forms of programmed cell death and their impact on disease. More importantly, to more accurately control the input and output of signals and accurately select one or more forms of death, we need a better understanding of how to target programmed cell death to treat MAFLD. This will provide a theoretical basis and guide future research.
2023-01-02T15:03:58.047Z
2021-05-07T00:00:00.000
{ "year": 2021, "sha1": "86e038201e964b5b26cd9f541ed1483701053cd1", "oa_license": "CCBY", "oa_url": "https://cmbl.biomedcentral.com/counter/pdf/10.1186/s11658-021-00254-z", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "86e038201e964b5b26cd9f541ed1483701053cd1", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
57783551
pes2o/s2orc
v3-fos-license
Nutritional management of phenylalanine hydroxylase (PAH) deficiency in pediatric patients in Canada: a survey of dietitians’ current practices Background Phenylalanine hydroxylase (PAH) deficiency is one of 31 targeted inherited metabolic diseases (IMD) for the Canadian Inherited Metabolic Diseases Research Network (CIMDRN). Early diagnosis and initiation of treatment through newborn screening has gradually shifted treatment goals from the prevention of disabling complications to the optimization of long term outcomes. However, clinical evidence demonstrates that subtle suboptimal neurocognitive outcomes are present in the early and continuously diet-treated population with PAH deficiency. This may be attributed to variation in blood phenylalanine levels to outside treatment range and this, in turn, is possibly due to a combination of factors; disease severity, dietary noncompliance and differences in practice related to the management of PAH deficiency. One of CIMDRN’s goals is to understand current practices in the diagnosis and management of PAH deficiency in the pediatric population, from the perspective of both health care providers and patients/families. Objectives We investigated Canadian metabolic dietitians’ perspectives on the nutritional management of children with PAH deficiency, awareness of recently published North American treatment and nutritional guidelines in relation to PAH deficiency, and nutritional care practices within and outside these guidelines. Methods We invited 33 dietitians to participate in a survey, to ascertain their use of recently published guidelines and their practices in relation to the nutritional care of pediatric patients with PAH deficiency. Results We received 19 responses (59% response rate). All participants reported awareness of published guidelines for managing PAH deficiency. To classify disease severity, 89% of dietitians reported using pre-treatment blood phenylalanine (Phe) levels, alone or in combination with other factors. 74% of dietitians reported using blood Phe levels ≥360 μmol/L (6 mg/dL) as the criterion for initiating a Phe-restricted diet. All respondents considered 120-360 μmol/L (2–6 mg/dL) as the optimal treatment range for blood Phe in children 0–9 years old, but there was less agreement on blood Phe targets for older children. Most dietitians reported similar approaches to diet assessment and counseling: monitoring growth trends, use of 3 day diet records for intake analysis, individualization of diet goals, counseling patients to count grams of dietary natural protein or milligrams of dietary Phe, and monitoring blood Phe, tyrosine and ferritin. Conclusion While Canadian dietitians’ practices in managing pediatric PAH deficiency are generally aligned with those of the American College of Medical Genetics and Genomics (ACMG), and with the associated treatment and nutritional guidelines from Genetic Metabolic Dietitians International (GMDI), variation in many aspects of care reflects ongoing uncertainty and a need for robust evidence. Background Phenylketonuria (PKU; OMIM 262600) is an autosomal recessive inborn error of phenylalanine metabolism caused by a deficiency of the phenylalanine hydroxylase (PAH) enzyme. PAH deficiency encompasses a spectrum of biochemical phenotypes from classic PKU (severe PAH deficiency) to mild hyperphenylalaninemia (with varying degrees of residual PAH activity). 
Untreated, PAH deficiency is characterized by elevated phenylalanine (Phe) levels in the blood and brain, resulting in neurological damage via impaired neurotransmitter metabolism and direct phenylalanine neurotoxicity [1]. Ground-breaking universal newborn screening for PAH deficiency, and treatment with a Phe-restricted diet and Phe-free or low Phe medical foods (formulas), have virtually eliminated severe PAH deficiency-related complications in early and continuously treated individuals in many populations throughout the world. This important achievement has shifted the goals of treatment from prevention of profound intellectual disability to optimization of health outcomes. Nutrition therapy, which aims at both maintaining blood Phe concentrations within treatment goals and meeting individual nutritional needs, remains a cornerstone of the management of PAH deficiency [1][2][3][4]. If administered appropriately and adhered to consistently, currently available treatment modalities are expected to result in health outcomes comparable with the general population. However, despite the medical and public health success story of the treatment of PAH deficiency, evidence suggests that long-term patient outcomes are not always optimal. Individuals living with PAH deficiency have higher risks of deficits in neurocognitive domains such as working memory, attention, processing speed and motor control, behavioural and psychosocial issues, growth and nutritional deficiencies, brain and bone pathology, and quality of life [5][6][7][8]. Delayed age at initiation of therapy, as well as variable lifelong blood Phe levels and nonadherence to treatment, have been identified as major contributors to the development of suboptimal outcomes [5,9]. It has been argued that delivery of health care that is not aligned with established best practice, uncertainty in clinical decision making, and inconsistent access to care may also contribute to suboptimal outcomes for some patients [7,10,11]. Relatively robust published evidence exists to support recommendations for many areas of management of PAH deficiency, such as diagnosis, treatment onset and duration, therapeutic goals, treatment targets and organization of care [12,13]. However, as with other rare diseases, high quality empirical evidence is not always available to support treatment decisions, resulting in several areas of uncertainty and inconsistencies in clinical decision-making that may ultimately lead to variability in health outcomes. For example, it is commonly agreed that life-long nutrition treatment should start as soon as possible for infants with initial untreated blood Phe levels > 600 μmol/L (10 mg/dL) [3]. However, the evidence regarding the possible beneficial effect of a life-long Phe-restrictive diet in children whose initial untreated blood Phe levels are 360-600 μmol/L (6-10 mg/dL) is sparse, leading to rather provisional recommendations for this patient subgroup [3,14]. Variability in the initiation of diet therapy, and other management practices related to PAH deficiency, have been reported both across countries and across centres within the same countries [1,15]. This may reflect in part the differences in treatment guidelines for PAH deficiency, developed by different groups and in different jurisdictions [1,15]. 
The lack of uniformity in the management of PAH deficiency and new published evidence prompted the development of updated broad-based clinical guidelines (published by the American College of Medical Genetics and Genomics, ACMG) [3] and companion recommendations for the nutritional management of PAH deficiency (published by Genetic Metabolic Dietitians International, GMDI) [2], with the goal of improved patient care in North America. Both guidelines relied on independent evidence reviews conducted by experts from the National Institutes of Health and Agency for Healthcare Research and Quality [16,17]. Both guidelines also integrated this evidence with a consensus of expert opinion in clinical practice areas for which evidence was lacking. For example, the development process for the nutritional management guidelines from GMDI included published evidence reviews, clinical protocols, consensus of experts via Delphi surveys and nominal group expert meetings, an external review, field testing, and revision, to reach at least 75% agreement [18]. The recent publication of these guidelines, coupled with previous research documenting variation in care, presented an opportunity to investigate how the guidelines are perceived by Canadian healthcare providers, and to identify important variations in care. In this study, we aimed to ascertain Canadian metabolic dietitians' awareness of published guidelines for PAH deficiency and their approaches to nutritional management of PAH deficiency in the pediatric population. Identifying uncertainties in the nutritional management of pediatric PAH deficiency in Canada, from practitioners' perspectives, is important for understanding the impact and uptake of the new guidelines, identifying areas where knowledge translation and mobilization are needed, and prioritizing questions about treatment effectiveness for future research. This survey was distributed in 2016, and thus our primary comparison was with the recently published North American management guidelines for PAH deficiency [2,3]. Questionnaire Notwithstanding the challenges of management of adult phenylketonuria (PKU), especially of maternal PKU, we developed the survey with the pediatric population in mind. Many treatment concerns are different, and our focus on pediatric phenylalanine hydroxylase (PAH) deficiency is consistent with one of the goals of the Canadian Inherited Metabolic Diseases Research Network (CIMDRN); to understand current practices in the diagnosis and management of PAH deficiency in the pediatric population, from the perspective of both health care providers and patients/families. The team of investigators included experienced registered metabolic dietitians from several Canadian metabolic centres, as well as a metabolic physician and investigators with expertise in survey research methods. 
We developed a study-specific questionnaire with 52 questions that covered self-reported awareness and use of the most recent published North American guidelines, personal and practice characteristics, and the following topics related to the nutritional management of PAH deficiency: classification of disease severity; frequency of monitoring and target ranges for surrogate biomarkers; recommended dietary intakes of key nutrients and methods recommended for patients to self-monitor intake of these nutrients; recommended use and accessibility of medical foods (formulas); use of vitamin/mineral supplements; frequency of clinic visits and communication with patients and their families; and methods of encouraging and monitoring patient adherence to therapy. The survey questionnaire is available as a supplementary material. Sample selection and survey implementation Eligible participants were metabolic dietitians who provided care to children with PAH deficiency in Canada. Based on their clinic listing on the Genetic Metabolic Dietitians International (GMDI) website, we identified 33 Canadian metabolic dietitians in nine Canadian provinces and three territories. We could not be certain, based on the available information, that these dietitians specifically provided care to children with PAH deficiency; this eligibility criterion was thus incorporated into the questionnaire as a screening question. Adapting Dillman's tailored design method [19], we made up to six contacts (between March and May 2016) to invite Canadian metabolic dietitians to participate in the survey. These included (a) a pre-notification email message sent out by one of the study investigators who is a metabolic dietitian; (b) an initial mailed invitation with a copy of the survey; (c) an initial email invitation with the link to the online survey; (d) a mailed reminder letter with a copy of the questionnaire, sent to non-respondents; (e) an email-reminder with the link; and (f ) a final reminder email message, sent to remaining non-respondents. Dietitians could respond to the survey by mail, using a prepaid return envelope that was included with each of the two questionnaires that were mailed; or online, through a REDCap platform, hosted on a secure BC Children's Research Institute server with participant access through a unique identification number and password. In accordance with existing evidence regarding monetary incentives [20], we offered a $25 iTunes gift card to each participant who completed the survey; and this was mentioned in the invitation letters and subsequent reminders. Analysis of survey data Data were entered into a REDCap database and exported to SAS ® 9.4 Software for descriptive analysis. We report proportions as all survey questions were categorical. Many questions used 4 or 5-point Likert-type scales and single-answer option; alternatively, some questions incorporated multiple answers which were expected to add up to more than 100%. Where necessary and applicable, we grouped categories (e.g., "all" with "most", "excellent" with "good", "sometimes" with "rarely") to account for small numbers. Results Response rate and distribution of sample characteristics Of 33 Canadian metabolic dietitians invited to participate, we received twenty surveys of which nineteen had been completed. One respondent indicated on an initial screening question that he/she did not provide care for pediatric phenylketonuria (PKU), and therefore did not complete the full questionnaire (response rate, 19/32, 59%). 
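The descriptive analysis described above amounts to computing simple proportions after collapsing sparse response categories. The short sketch below illustrates that logic in Python; the Likert responses and collapsed category labels are hypothetical examples, only the 19/32 response rate is taken from the text, and the original analysis was performed in SAS, not Python.

```python
from collections import Counter

# Response rate as reported: 19 completed surveys of 32 eligible invitees.
completed, eligible = 19, 32
print(f"response rate: {completed}/{eligible} = {completed/eligible:.0%}")

# Hypothetical Likert-type responses to one survey item; the grouping mirrors
# the approach described above (e.g. "all" with "most", "sometimes" with
# "rarely") so that small cells are combined before reporting proportions.
responses = ["all", "most", "most", "some", "rarely", "sometimes", "most"]
collapse = {"all": "all/most", "most": "all/most",
            "sometimes": "sometimes/rarely", "rarely": "sometimes/rarely"}
grouped = Counter(collapse.get(r, r) for r in responses)

n = len(responses)
for category, count in grouped.items():
    print(f"{category}: {count}/{n} ({count/n:.0%})")
```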
Ten surveys (53%) were submitted on paper and nine (47%) were completed online. We received responses from 14 centres, located in nine of the ten Canadian provinces. Of the 14 centres, 10 had only one respondent and a further 4 centres had multiple respondents. The majority of respondents had worked in metabolic nutrition services for more than 6 years (74%), were full time (68%), and dedicated at least half of their time to the care of children with phenylalanine hydroxylase (PAH) deficiency (53%) (Table 1). At the centre level, respondents indicated that the majority of centres (79%) followed more than 20 patients with PAH deficiency who required regular nutrition services, and cared for both pediatric and adult populations (79%). Only three centres (21%) were reported to have a comprehensive multidisciplinary team that includes a metabolic physician, metabolic dietitian, metabolic nurse, psychologist, social worker and clinical biochemist (Table 1). Use of published management guidelines of PAH deficiency (PKU) All respondents were aware of published PKU guidelines, referencing the ACMG PKU consensus guideline [3] and the companion recommendations for the nutrition management of PAH deficiency [2]. Other guidelines that participants mentioned included: "SERC-GMDI PKU Nutrition Management Guidelines" [21], "NIH Consensus Guideline for Management of PKU" [16], "European Guidelines (not specified)", "Publications by Anita Macdonald (not specified)" and "Nutrition Management of Inherited Metabolic Diseases" [22]. Opinions on classification of PAH deficiency severity To classify the severity of PAH deficiency, 9 of 19 respondents (47%) reported using only newborn pre-treatment blood phenylalanine (Phe) levels and 8/19 (42%) used pre-treatment blood Phe levels in combination with either Phe tolerance, PAH genotype or all three (Fig. 1). One respondent also indicated using blood Phe levels when the patient is catabolic. We also asked respondents to indicate the specific pre-treatment blood Phe levels that they used to categorize PAH deficiency severity, using the typical classification terminology of classical, moderate, and mild PKU, and mild HPA [23] (Table 2). Definitions of these categories varied among respondents. Blood phenylalanine levels in management and monitoring of phenylalanine hydroxylase deficiency The majority of respondents (74%) reported that they initiate dietary treatment at blood Phe levels of ≥360 μmol/L (≥6 mg/dL), although some dietitians support initiating treatment at higher Phe levels (Table 3). Blood Phe and tyrosine were reported as being monitored by all dietitians, with 95% also monitoring ferritin (data not shown). More than half also routinely monitor pre-albumin, albumin and vitamins. Forty-seven percent reported routinely monitoring bone density, while a small minority reported routine monitoring of essential fatty acids. Among other routinely monitored surrogate biomarkers, homocysteine, carnitine, full amino acid quantification, alkaline phosphatase, complete blood count, trace elements (zinc, selenium, manganese), folate, B12, and 25-hydroxyvitamin D were reported by some respondents (data not shown). For younger patients, all respondents indicated that the target range for blood Phe levels was 120-360 μmol/L, but opinions varied slightly for patients aged >10-18 years old: most dietitians recommended 120-360 μmol/L, while some recommended higher target Phe levels, up to 600 μmol/L (Table 3).
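Because the severity categories above are defined by pre-treatment blood Phe cut-offs that varied among respondents, any coded classification has to make its thresholds explicit. The sketch below is a purely illustrative Python function using one commonly cited banding (classical PKU at ≥1200 μmol/L, with progressively lower bands for moderate PKU, mild PKU and mild HPA); the exact boundaries are assumptions for the example, not consensus values from this survey.

```python
# Hypothetical classifier for PAH deficiency severity from the pre-treatment
# blood Phe concentration (μmol/L). The cut-offs follow one commonly cited
# scheme and are illustrative only; the survey found that definitions of
# these categories varied among respondents.
def classify_pah_severity(pretreatment_phe_umol_per_l: float) -> str:
    if pretreatment_phe_umol_per_l >= 1200:
        return "classical PKU"
    if pretreatment_phe_umol_per_l >= 900:
        return "moderate PKU"
    if pretreatment_phe_umol_per_l >= 600:
        return "mild PKU"
    if pretreatment_phe_umol_per_l > 120:
        return "mild hyperphenylalaninemia (HPA)"
    return "within reference range"


print(classify_pah_severity(1450))  # classical PKU
print(classify_pah_severity(450))   # mild HPA under these illustrative cut-offs
```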
The majority of respondents consider 120 μmol/L to be the lowest acceptable average level for blood Phe in the long term (Table 3). A majority would rarely recommend keeping blood Phe levels at the lower end of the therapeutic range by means of a more Phe-restricted diet, and specifically would not be comfortable with patients having blood Phe levels lower than 120 μmol/L (Table 3). Nearly half of the respondents (47%) recommend maintaining blood Phe levels at the higher end of the therapeutic range for "some patients" (clinical case scenarios were not specified) (Table 3). Clinic visits and team communication As expected, clinic visits were most frequent in infants 0-12 months old, and declined in older age groups (Table 4). After the first year of life, the majority of dietitians indicated seeing their patients less often than once per month, but at least once per year. Similarly, a majority of respondents reported that between-visit communications took place most often with parents of the youngest patients (Table 4). With respect to the means of communication with families between visits, the telephone was used by more dietitians (100%) than email (89%), mail (58%), fax (32%), and phone texts (16%) (data not shown). All respondents reported discussing individual patients' nutritional management with other members of the health care team. However, only slightly more than half (11/19, 58%) indicated discussing most of their patients on a regular basis, and just under half of respondents (8/19, 42%) reported that these discussions do not occur routinely. Just over one quarter (5/19, 26%) consider multidisciplinary healthcare team communication to be "highly effective", while the majority of respondents (13/19, 68%) report that they find within-team communication to be "somewhat effective", and one dietitian considers it to be not effective (1/19, 6%). Dietary prescription and assessment The most important factors reported to influence the prescription for medical food (formula) were the nutrient composition of the formula, the patient's age, the preferences of the patient or family and the availability of the product, reported by 95, 89, 89 and 79% of dietitians, respectively (Table 5). The most commonly prescribed formulas (proportion of dietitians including the formula within their "top 3") were: Periflex Infant (53%) and Phenyl free 1 (37%) for infants < 1 year old; Phenyl free 1 (26%) and Periflex Junior (26%) for 1-2 year-olds; Periflex Junior (21%) and Periflex Junior Plus (21%) for children aged 3-9 years; and Periflex Advance (21%) and Phenylade Essential (21%) for children aged 10-18 years (some of the responses with regard to the different Periflex products reflect periods of transition in their availability). [Table footnote: one of the two centres reporting use of mg/dL uses mg/dL in older patients and μmol/L in younger patients.] One third of participants (32%) reported that their choice of formula is limited by the hospital formula contract. Full provincial coverage of the costs of low protein foods was reported by dietitians from 4 centres, while the remainder reported only partial coverage. The discontinuation of medical formula was reported as "never" considered by 8/19 (42%) of respondents, while 11/19 (58%) respondents would consider discontinuing formula in some cases; for example, for patients with mild PAH deficiency and those who are good responders to Kuvan (sapropterin dihydrochloride, BH4) (data not shown).
With regard to low protein food, good and excellent accessibility was reported by the majority of responders (17/19, 89%). A minority of dietitians (4/ 19, 21%) reported prescribing large amino-acid supplements (LNAAs) to their pediatric patients. Respondents most frequently reported home sample collection as the method of collecting blood samples for routine monitoring of phenylalanine (95%), followed by a "local lab or hospital close to patient's house" (68%) and "metabolic clinic" (63%) (data not shown). Monitoring adherence to the medical formula and low protein foods To assess patients' adherence to formula intake, dietitians most often reported relying on the verbal report of the parent and/or caregiver (89%), followed by monitoring blood Phe levels (84%), monitoring weight and height (79%), checking how much formula was released by the dispensing authority (63%) and analyzing written dietary questionnaire (53%) ( Table 5). As expected, a majority of respondents consider high blood Phe levels to be the most reliable indicator of patients' non-adherence to the diet and/or drug therapy (10/19, 53%), followed by "not pulling formula from the sources that supply formula" (5/19, 26%), "not doing blood dots on a regular basis" (3/19, 16%) and "not showing up in clinic" (1/19, 5%) (data not shown). To improve a patient's adherence to the diet, dietitians employ several strategies, including individualized nutrition counseling (reported by19/19, 100%), motivational , and regular reminders to collect/submit blood Phe dots (10/19, 53%). However, regular reminders to collect/submit blood Phe dots were reported to be the least successful of the strategies ( Table 5). Intake of dietary Phe, protein, calories, minerals, and vitamins are routinely monitored for most patients, as reported by the majority of participants (Table 5). All participants reported performing anthropometric measurements at every clinic visit; while both diet analysis and nutrition education were reported as always/often included in routine visits by 90% of respondents (Table 5). Reported use of guidelines All respondents were aware of the ACMG and GMDI PAH deficiency consensus guidelines, and almost all respondents reported use of these guidelines. With respect to the GMDI nutrition guidelines, in particular, more detailed information and discussion is provided online at the SERN-GMDI PKU Nutrition Guidelines website including a PKU tool kit with detailed patient diet examples for dietitians [21]. The guidelines are in widespread use but, given the lack of evidence, they often do not recommend a specific course of action related to the most uncertain clinical practice questions (e.g., diet initiation in mild PAH deficiency). These areas of uncertainty were among the most variable aspects of nutritional management reported by the Canadian dietitians in our survey. Human resources and services in metabolic centres Consistent with a previous report [24], our survey identified variation in the organization of care within Canadian metabolic centres. 
Although the evidence with respect to the impact of a coordinated team approach on improved outcomes in the treatment of PAH deficiency is very scarce, one Canadian retrospective study reported that a multidisciplinary centralized approach results in better outcomes in terms of improved adherence to the diet, control of blood Phe, and fewer patients lost to follow-up [13]. [Table 2 footnote: two respondents use pre-treatment Phe as follows: PKU is diagnosed when pre-treatment Phe is ≥1200 μmol/L (classical PKU, included in "classical PKU") and HPA when <1200 μmol/L (not included in the table). Table 3 excerpt: recommend maintaining higher-end therapeutic blood Phe levels — for most/nearly all patients 5 (26%), for some patients 9 (47%), rarely/never 5 (26%); recommend maintaining lower-end therapeutic blood Phe levels and more restricted natural protein intake (n = 19) — for most/nearly all patients 4 (21%), for some patients 4 (21%), rarely/never 11 (58%). Footnote a: respondents who would accept blood Phe below 120 μmol/L gave open-ended explanations such as consistent levels with weekly testing, older children, Kuvan responders in whom Phe intake is hard to increase, good growth on a restricted diet, closely monitored maternal PKU, rapid growth in infancy, and "super responders" to Kuvan tolerating DRI total protein from regular foods.] Both recent American and European guidelines recommend a multidisciplinary coordinated approach to the management of PAH deficiency, where the health care team should include a metabolic physician, dietitian, specialized metabolic laboratory and access to a psychologist and a social worker. Our survey indicated that only 3 out of 14 centres have a metabolic physician, dietitian, biochemist and access to a psychologist, indicating a lack of multidisciplinary care. Only two centres reported having dietitians whose time is fully dedicated to the care of patients with PAH deficiency, but seven centres reported at least a half-time dedicated position. These differences likely reflect patient numbers but may also reflect differences in staff time available for patient care. With regard to communication within the healthcare team, only one quarter of survey respondents regarded this as highly effective, highlighting a need to improve existing communication practices within healthcare teams who provide care to patients with PAH deficiency. PAH deficiency phenotype classification Our survey revealed limited consensus among Canadian dietitians on the definition of the severity of PAH deficiency. To identify the type of PAH deficiency, the majority of dietitians reported use of pre-treatment blood Phe levels alone, or in combination with Phe tolerance and/or genotype. A few dietitians either do not use pre-treatment blood Phe levels for this purpose, or else use them for a modified classification, such as "HPA or classical PKU". Such a lack of clarity most likely created a discrepancy in reporting the use of pre-treatment blood Phe levels to define the severity of PAH deficiency: Fig.
1 indicated that only two dietitians do not use pre-treatment blood Phe levels for this purpose, but the number increased to four in Table 2, in response to a request to provide a range for each classification of PAH deficiency (PKU): mild HPA, mild PKU, moderate PKU, classical PKU. Several authors have advised against relying on any one of these indicators as a means of classifying disease severity in the neonatal period [1,25]. For example, pre-treatment blood Phe levels typically do not reach a maximum because of prompt diagnosis and treatment onset [1]. In addition, precise Phe tolerance is difficult to determine in the clinic setting because of inconsistencies between actual and prescribed dietary Phe intake and other factors, such as a patient's age and/or metabolic state during the period of interest [25]. Finally, PAH genotypes are often difficult to interpret because several mutations are responsible for a wide range of clinical phenotypes [3,26]. Since none of the above criteria are fully appropriate as a standard for the classification of PAH deficiency, the most recent North American guideline [3] referred to a previous NIH consensus guideline [16] that suggested a simplified classification based on pre-treatment blood Phe levels [27]. Therefore the respondents to this survey were generally following established practice. Determining the severity of PAH deficiency might not seem crucial in the clinical setting, where a patient's management is rather dynamic and directed by the most current blood Phe levels. However, there is a small but real risk that overestimation of the severity of PAH deficiency could initially result in over-restriction of natural protein intake until Phe tolerance is empirically determined. Furthermore, if an individual is assumed to have minimal residual PAH activity, and therefore potentially a low chance of responding to sapropterin, he or she may also not be given the opportunity for a BH4 responsiveness trial [28,29]. Frequency of clinic visits and provider-family communication Individual adherence to nutrition therapy depends on numerous patient- and healthcare-related factors, and appears to decline with increasing patient age [10,30]. There is some evidence that gaps in communication between health care providers and patients/families may contribute to non-adherence [11]. As recommended by the 2014 treatment guidelines for PAH deficiency from GMDI, communication with patients aged 8-18 years should occur weekly to monthly [2]. However, nearly half of the survey respondents reported communicating with patients of this age and their families less frequently than is recommended. Contact with 3-10-year-old patients and their families, in contrast, was aligned with recommendations: 68% of survey respondents reported their frequency of communication to be 1-3 times per month. Decreased frequency of contact with older children is likely due to the decreased frequency of home blood Phe monitoring, especially as patients learn to become independent in managing their daily diets and home blood draws. However, other factors may also explain the failure to meet recommendations: staff shortages in the metabolic clinic and subsequent time limitations; disappointment with non-adherent patients; and other social, psychological, economic and human-resource-related barriers [31][32][33]. The decline in the frequency of communication that we observed might contribute to non-adherence with treatment in adolescents.
The evidence suggests that continuing communication and education throughout childhood, and perhaps reinforcement of the frequency and quality of communication might promote better adherence and subsequently may improve long-term outcomes in older children and adults [11]. We did not ask participants about the frequency of blood Phe measurements. However, we believe that there is a relatively close correspondence between the frequency of communication of dietitians with patients/ families and the frequency of blood Phe measurements, since typically each blood Phe result triggers communication with the patient / family. Treatment initiation There is a good evidence and expert agreement that treatment for PAH deficiency should be initiated at ≥600 μmol/L (10 mg/dL) [3,34]. However there is lack of conclusive evidence on the balance between "added benefit" and "no harm" of treatment initiation at ≥360-600 μmol/L (6-10 mg/dL). This uncertainty translates into provisional practice recommendations [3,12,34]. Not surprisingly, our survey found that the majority of dietitians set the threshold for initiation of therapy at ≥360 μmol/L (≥6 mg/dL), and several others at higher blood Phe levels. In alignment with the evidence and published guidelines, all would begin dietary Phe restriction when the blood Phe level is ≥600 μmol/L (10 mg/dL). Prescribing LNAA Since this was a survey of pediatric practice, less than a quarter of respondents reported prescribing supplements of large neutral amino acids (LNAA). Animal and human studies show that Phe competes with LNAAs for the protein carrier through the intestinal wall and the blood brain barrier. Thus the lack of LNAAs, in of itself, might promote higher Phe levels in the central nervous system [35,36]. There is mainly positive but limited evidence on the benefit of LNAA supplementation in treatment of PAH deficiency. Therefore, as the LNAA content in PKU medical foods (formulas) can vary, more research especially on the safety and long-term outcomes of treatment with LNAAs is clearly needed [37,38]. As mentioned in the ACMG guidelines, current use of LNAA is limited to adolescents and adults, with avoidance in pregnancy. A European panel of PKU experts gave no statement on the use of LNAAs [3,14]. Limitations Our survey focused on PAH deficiency in the pediatric population and thus we cannot comment on the transition to adult care, nor to adult nutrition management. While our response rate was reasonable (59%) for a survey of health care providers and represented almost all Canadian metabolic centers (14 out of 16) and provinces and territories, with the exception of Nunavut and Newfoundland and Labrador, the views of participants may not represent those of all metabolic dietitians in Canada; for example, the individuals who did not respond to the survey may be less aware of, or adherent to, current guidelines. Another major limitation of this survey was that we did not address how much Phe, tyrosine and protein (medical foods/formulas and natural protein), in relation to age, was prescribed in each center; nor what proportion of these prescriptions were aligned with recommendations. We believed that such detailed nutritional data should be derived from clinical reviews (e.g., from chart reviews), which was outside the scope of this publication. Conclusion We found that Canadian metabolic dietitians generally follow published guidelines in their nutritional management of pediatric PAH deficiency. 
Dietitians responded with some variation, both across and within centres. The most striking differences were in approaches to defining the PAH deficiency phenotype, treatment targets for blood Phe levels, the frequency of clinic-patient communication with older children, and the organization of care in metabolic centres. More research is needed to generate better evidence addressing the current gaps in knowledge about the treatment of PAH deficiency and the variation in laboratory monitoring and clinic visit frequency, with subsequent translation into practice.
2019-01-17T06:16:30.754Z
2019-01-08T00:00:00.000
{ "year": 2019, "sha1": "55c121c50a86744c90c1ad411bee5972a1b71233", "oa_license": "CCBY", "oa_url": "https://ojrd.biomedcentral.com/track/pdf/10.1186/s13023-018-0978-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "55c121c50a86744c90c1ad411bee5972a1b71233", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211081242
pes2o/s2orc
v3-fos-license
The column procedure preserves elbow stability on biomechanical testing Purpose The effect of open release of a post-traumatic elbow contracture on the stability of the joint has not so far been studied in vivo. Resection of the elbow joint capsule, the key element of the surgery, was reported to have no effect on the stability of cadaveric elbows. Yet the joint capsule is known to participate in maintaining elbow stability as one of its secondary stabilizers. Methods We assessed elbow joint laxity in 39 patients who underwent an open contracture release via the 'column procedure' described by B. Morrey and P. Mansat within the preceding three to nine months. The measurements were taken with an apparatus designed particularly for this experiment, according to a predetermined protocol. A preliminary part of the experiment showed that there was no significant difference between the laxity of the two elbow joints in healthy volunteers. The laxity of the operated elbows could then be compared with the contralateral joints. Results The mean absolute difference in laxity between healthy and operated elbows was 1.55° (0.1°–4.1°, SD = 1.1), which was significantly lower than 2° (p = 0.0056). The difference in joint laxity between the operated and healthy elbows did not differ statistically significantly by more than 0.6° from the difference in laxity between two healthy elbows and, therefore, is not clinically noticeable. Conclusions Our experiment confirmed that the 'column procedure' is a safe procedure which does not compromise the stability of the elbow joint. Introduction Elbow stability results primarily from the integrity of relevant anatomical structures which maintain the physiological laxity of the joint. Laxity can be defined as the range of motion of the joint in the coronal plane and differs broadly between individuals [1,2]. Physiological amounts of elbow laxity provide stability of the joint, which is a clinically assessed feature and implies correct biomechanics of the joint. Following biomechanical testing of cadaver elbows, Nielsen [3] and Dos Remedios [4] independently concluded that resection of the elbow joint capsule, which is the key element of surgery for a post-traumatic contracture, does not affect joint laxity and hence would not lead to instability in the clinical setting. Those findings have not been confirmed in vivo, even though the joint capsule is reported to participate in maintaining the stability of the elbow [2]. In our study, we assessed elbow joint laxity in 39 patients who had undergone an open release of a post-traumatic contracture. The measurements were taken using a specially constructed apparatus which allowed full and safe immobilization of the upper limb and precise biomechanical testing. Roughly 50% of elbow stability derives from the congruence of the articular surfaces, while the remaining half depends on the integrity of the ligaments, capsule, interosseous membrane and, to a lesser degree, the muscles of the arm and forearm which act as dynamic stabilizers [2,5,6]. Those structures are also classified as either primary or secondary stabilizers; the former are those whose injury leads directly to increased laxity of the joint, and the latter are the structures whose damage would increase laxity only after the relevant primary stabilizers had also been injured.
Primary stabilizers include the anterior bundle of the medial collateral ligament, lateral collateral ligament complex and the congruence of the ulnohumeral joint [1,[7][8][9]. Important secondary stabilizers comprise the congruence of the radiohumeral joint, joint capsule and muscular attachments of the pronator and flexor muscles of the forearm and wrist to the medial humeral epicondyle and extensor muscles to the lateral epicondyle, respectively. An important part of the lateral collateral ligament complex, the lateral ulnar collateral ligament, serves as a primary stabilizer acting against posterolateral rotatory instability of the elbow [10]. Morrey and An in their biomechanical experiments examined the amount of contribution of the particular anatomic structures to the joint stability depending on the position of the elbow [2]. They noted that the articular capsule is responsible for 30% of valgus stability in full extension and that its importance diminishes with the flexion of the joint, dropping to near-zero values in 90°of flexion. The capsule also provides 32% and 13% of varus stability in elbow extension and 90°of flexion, respectively. It serves as the main stabilizer against joint distraction forces, providing 85% of resistance in extension and 8% in 90°of flexion. There is also an important relation between forearm rotation and elbow laxity, which is minimal in full supination, as demonstrated by Pomianowski [11]. The post-traumatic contracture of the elbow is a common complication of fractures and dislocations around the elbow caused by thickening and scarring of the articular capsule, altered shape and incongruence of bony surfaces, osteophyte formation, presence of intra-articular loose bodies and heterotopic ossification. Its occurrence is related to initial damage to the joint structures, intra-articular haematoma formation and individual propensity but can usually be effectively prevented by correct treatment and particularly by avoiding immobilization of the joint for longer than three weeks. Indications for surgical release include failure of physiotherapy after four to six months or presence of a surgical pathology, e.g. loose bodies within the joint, ossifications or a radioulnar synostosis. There are also patients with the so-called 'stiff elbow' who do not benefit from physiotherapy and who should be treated surgically without the usual delay. Our experience involves 279 patients who have undergone an open release of post-traumatic elbow contracture in our centre since 2003 ( Table 1). The relatively high percentage of patients who developed a contracture following a radial head fracture was mainly due to prolonged immobilization of the joint and lack of adequate physiotherapy. Our surgical technique follows the outline of the 'column procedure' described by B. Morrey and P. Mansat [12]. The joint is approached via the anterolateral route between the extensor carpi radialis longus and brevis and the extensor digitorum communis anteriorly to the lateral collateral ligament. The articular capsule is bluntly dissected and excised until the anterior aspect of the ulna is visualized. The posterior compartment of the joint can be opened through the same incision if needed, and posterior capsulectomy can also be performed. Scar tissue, osteophytes and intra-articular loose bodies are removed. If deemed necessary, the procedure can involve other steps, e.g. 
radial head excision, interpositional arthroplasty, excision of synostosis or ulnar nerve transposition via a medial approach. Although there is no agreement on the superiority of the radial head replacement over simple excision [13], we always replaced the radial head with a prosthesis (KPS) designed by the senior author. Release of the ulnar nerve was consistently combined with its anterior transposition and was never performed prophylactically despite recent reports underlining its benefit [14][15][16]. Arthrolysis of the elbow can be performed using other approaches, most often Kocher approach, as well as combined with exposing the joint from the medial or posterior aspect [17][18][19][20] depending on the underlying pathology [21]. Sequelae of complex injuries, e.g. terrible triad of the elbow, are particularly demanding. Protocols involving extensive arthrolysis, radial head excision and temporary external fixation have been elaborated [22,23]. Due to the complexity of elbow injuries, optimal treatment of their sequelae varies among different centres according to the individual experience [23][24][25]. The results of the operative treatment vary from excellent to poor being closely dependent on the pre-operative status of the elbow, namely, on bone alignment and articular incongruence, as reported in other series [26][27][28]. An analysis of the 213 cases in which full medical record was available including the final assessment 12 months after the operation showed that mean MEPI score following surgery was 86.3 points compared to 63.2 points preoperatively (mean gain 22.9 points). If the articular surfaces were intact or healed in anatomic alignment, the results were usually spectacular with a significant operative gain regarding both the range of motion and the antalgic effect. Mean gain in flexion-extension range of movement after 12 months was 29°(− 10 to 95°), and mean improvement of forearm rotation was 26.8°(0 to 140°). In 27 cases, the surgical procedure involved other steps listed in Table 2. Seven patients required re-operation due to recurrence of the contracture, and two patients developed a deep infection. Materials and methods Assessment of elbow joint stability was performed using an apparatus designed particularly for this experiment, which measured the laxity of the joint by pivoting the forearm of the immobilized upper limb into valgus and varus alignment. The combined deviation angle was subsequently calculated by the device and was established as the physiological laxity of the examined elbow joint. The apparatus consisted of a chair equipped with an extending arm and a moveable frame containing the elbow and wrist immobilizers, the measuring module and the control panel ( Figs. 1 and 2). The examined limb was immobilized in a padded two-part holder covering the distal part of the arm above the humeral epicondyles, which was tightened after the limb had been placed in the correct position. Padding increased the comfort of the patient and prevented the arm from rotating in the holder. The axilla was additionally fixed by an adjustable-length strap. The tested limb was positioned in 30°o f elbow flexion to increase the contribution of the joint capsule to the stability of the elbow. The distal part of the forearm and the wrist was blocked in full supination to reduce the inherent laxity of the joint, as we assumed that any increase in the laxity would be more evident in this position (Fig. 3). 
The maximum valgus and varus forearm deviation was recorded based on the analysis of the increase in the moment of resistance. The measuring module recorded the current angle of the forearm deviation and the instantaneous torque to monitor the value of the torque derivative (Fig. 4). Deflection of the forearm would be stopped when either the torque itself or torque derivative reached the maximum value defined by the investigators. These values had been predetermined experimentally to ensure that the forearm was deflected strongly enough, but the applied force did not cause the pain or arm movement in the immobilizing holder (Fig. 3). Minimum value of the torque derivative was also adjusted, which prevented the position of the maximum deviation from being recorded too early, e.g. due to muscle tightening by the examined patient. The selected parameters were identical for all measurements in the entire study. According to the established protocol, the measuring apparatus performed a cycle consisting of six transitions from maximum valgus to maximum varus deviation. At each extreme position, the angular value was recorded for the difference between neutral position and extreme deviation. Two results the highest valgus and varus deviationswere rejected. The remaining values were used to calculate the mean joint laxity, being the sum of the mean valgus and varus deviations. High reproducibility of results was obtained; the difference between the results rarely exceeded 0.5°for individual deviations in the same direction. The aim of our study was to determine if a correctly performed 'column procedure' leads to an increase in the physiological laxity of the joint, which in turn could result in instability of the elbow. In our reasoning, we assumed that the laxity of the joint following the operation could be assessed by comparing it with the contralateral normal elbow. The goal of the preliminary part of the experiment was therefore to define if there is any difference in laxity between the two elbow joints in a healthy individual. A series of measurements was taken on a group of volunteers recruited from 52 healthy individuals (29 males, 23 females) of mean age of 34 years (24 to 68) [29]. The selection criteria were normal anatomy and function of both elbow joints with no history of trauma to upper limbs or rheumatologic disease. Owing to the construction of the measuring apparatus, also individuals of exceptionally sturdy or slender build as well as those in whom their natural valgus angle of elbows exceeded 20°could not participate in the study. The results led to a conclusion that although the range of physiological elbow laxity defined as the deviation from the maximal valgus to maximal varus position varies significantly (10.6°-26.5°, mean value 17.8°), the difference in the laxity between two elbows of the same volunteer was only slight, amounting to a mean value of 1.19°(0.1°-3.8°). Moreover, there was no correlation between the side of the elbow with greater laxity and the dominant limb. We therefore concluded that the contralateral elbow could be used as a reference to assess a possible change of the joint laxity following surgery. The main part of the experiment consisted of examination of 39 patients who had undergone a standard open release of a post-traumatic elbow contracture in our centre during the preceeding 12 months. The patients were recruited according to precise criteria ( Table 3). 
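The acquisition and laxity computation described above are essentially algorithmic, so a short sketch may help make the protocol concrete. The code below is only an illustration: the stopping rule is one possible reading of the description, and the threshold values, variable names and example readings are placeholders, since the investigators' actual parameters were determined experimentally and are not published.

```python
# Sketch of the measurement logic described above: deflection stops when the
# torque or its derivative reaches a preset maximum (one reading of the rule,
# ignoring early spikes below a minimum derivative threshold), and laxity is
# the sum of the mean valgus and varus deviations after rejecting the single
# largest reading in each direction. All numbers are invented placeholders.
from statistics import mean

def stop_deflection(torque: float, torque_rate: float,
                    max_torque: float, max_torque_rate: float,
                    min_torque_rate: float) -> bool:
    if torque_rate < min_torque_rate:
        return False  # e.g. the patient tightening muscles too early
    return torque >= max_torque or torque_rate >= max_torque_rate

def mean_laxity(valgus_deg: list[float], varus_deg: list[float]) -> float:
    valgus_kept = sorted(valgus_deg)[:-1]  # reject the highest valgus reading
    varus_kept = sorted(varus_deg)[:-1]    # reject the highest varus reading
    return mean(valgus_kept) + mean(varus_kept)

# Six transitions per cycle, recorded extremes in degrees from neutral:
valgus = [11.0, 11.2, 11.1, 11.3, 11.2, 11.6]
varus = [6.5, 6.6, 6.4, 6.7, 6.6, 7.0]
print(round(mean_laxity(valgus, varus), 2))  # total coronal-plane laxity
```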
The measurements took place after completion of the physiotherapy and after reaching the optimal range of motion of the elbow. The proposed time frame of three to nine months between the surgery and the examination was short enough to ensure that the results would not be influenced by any degenerative changes in the joint. Thirty-nine patients with a mean age of 33 years were included in the study, 25 (64.1%) of whom were males and 14 (35.9%) females. Thirty-six patients (92.4%) were right-handed, and three patients were left-handed (7.6%). The most prevalent injuries to the elbow were radial head fracture (11 cases, 28.2%) and distal humeral fracture (9 cases, 23%), which was in accordance with our overall experience. The mean flexion and extension of the elbow joint before the surgery were 117.4° and 44°, respectively, compared to 125.7° and 19.4° after the surgery. The resulting mean increase in the range of motion with respect to the flexion-extension arc was 32.8°. The mean increase in the rotational Results For each patient, the total range of laxity was calculated separately for the healthy and operated elbow. The absolute difference between the laxity ranges of both elbows was then calculated. [Fig. 4 caption: two coloured dots show the mean valgus and varus deviation values; the valgus dot lies on the maximal-torque line while the varus dot is slightly lower, showing that the valgus deviation was registered upon reaching the maximal torque value, whereas the varus deviation was recorded when the maximal torque derivative was reached.] As stated earlier, analogous data had previously been obtained in healthy volunteers. The Wilcoxon signed-rank test was used to verify the hypotheses concerning the mean values of the absolute differences between healthy elbows and between healthy and operated elbows by comparing the mean differences with the values of 1° and 2°. The equality of the absolute differences in elbow laxity between the healthy and operated groups was checked using the Mann-Whitney U test. Statistical significance was assumed at p < 0.05. A difference of 1° between the laxity of the two elbows was considered clinically noticeable. The results of the previously performed examinations assessing the range of laxity in healthy subjects showed that the mean elbow valgus deviation was 11.2° (6.4°-16.1°) and the mean elbow varus deviation was 6.6° (3°-10.7°). The laxity range of the elbow was 17.8° (10.6°-26.5°). The mean difference in laxity between the two opposite elbows of the same person was 1.19° (0.1°-3.8°, SD = 0.84) and was significantly lower than 2° (p < 0.0001). It should be noted that there was no correlation between the side with greater elbow laxity and the dominant side. Although 94.2% of participants in the healthy group were right-handed, the right elbow was the joint with a greater range of motion in the coronal plane than the left in only 57.6% of the volunteers. The results of the study in patients after surgical release of an elbow joint contracture indicated that the mean absolute difference in the laxity range between healthy and operated elbows was 1.55° (0.1°-4.1°, SD = 1.1) and was also significantly lower than 2° (p = 0.0056). The comparison between the two groups showed that the difference in the elbow laxity range between the operated and healthy elbows in the operated patients did not differ statistically significantly from the difference in the elbow laxity range between the two healthy elbows in the healthy group (p > 0.1).
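The comparisons described above (a one-sample Wilcoxon signed-rank test of the absolute differences against fixed values of 1° and 2°, and a Mann-Whitney U test between the healthy and operated groups) can be reproduced approximately as in the sketch below. The arrays are invented placeholders with roughly the reported means and SDs, not the study data, and the exact options the authors used in their statistical software are an assumption.

```python
# Illustrative reproduction of the statistical tests described above, with
# placeholder data generated to resemble the reported means and SDs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
healthy_diff = np.abs(rng.normal(1.19, 0.84, 52))   # |left - right|, healthy volunteers
operated_diff = np.abs(rng.normal(1.55, 1.10, 39))  # |operated - contralateral|, patients

# One-sample Wilcoxon signed-rank test: are the absolute differences in the
# operated group significantly below 2 degrees?
_, p_below_2deg = stats.wilcoxon(operated_diff - 2.0, alternative="less")

# Mann-Whitney U test: do the absolute differences differ between groups?
_, p_between_groups = stats.mannwhitneyu(healthy_diff, operated_diff,
                                         alternative="two-sided")

print(f"Wilcoxon vs 2 deg (operated group): p = {p_below_2deg:.4f}")
print(f"Mann-Whitney U, healthy vs operated: p = {p_between_groups:.4f}")
```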
The collected data allowed for the detection of a difference of 0.3° in the healthy group and 0.46° in the group of operated patients with a statistical power of 80%. With 52 and 39 observations in the healthy and operated groups respectively, a mean value of 1.19° in the group of 52 healthy volunteers and a standard deviation of 1° in both groups, a difference between means of 0.6° could be considered statistically significant. It has been found that the difference in the total deviation between the operated and healthy elbows does not differ statistically significantly by more than 0.6° from the difference in the total deviation of the two healthy elbows and, therefore, is not clinically noticeable. Discussion The column procedure is a well-established and efficient technique in elbow surgery associated with favourable results. Although joint instability is not recognized as a common complication of this type of surgery, this issue has received little attention. The aim of our study was to analyse in an objective manner whether the column procedure affects elbow laxity, which could in turn lead to joint instability. The vast experience of our centre allowed us to operate in a reproducible manner and to obtain surgical results consistent with those of other authors, which additionally validates our conclusions. To the best of our knowledge, and following a thorough search of the available medical databases, there have been only two reports of similar studies, both conducted in cadaver labs. Nielsen et al. examined the effect of anterior and posterior capsulectomy on elbow laxity in seven cadaver specimens, finding no convincing influence. Dos Remedios et al. in another study came to the same conclusions. These experiments were not, however, continued in vivo, and we suppose that the clinical value of these studies was limited, as some of the biomechanical aspects, e.g. the dynamic forces exerted by muscles around the elbow, could not have been well reproduced. To our knowledge, our experiment was the first biomechanical assessment of elbow laxity following the column procedure with the use of a dedicated measuring device. The main limitation of our study is its narrow scope. The study was not designed to analyse the operative results of a widely used procedure nor to present improvements to the surgical technique, but to demonstrate, using an impartial approach and sophisticated equipment, that joint laxity is not compromised. Conclusion Our experiment confirmed that the 'column procedure' is a safe procedure which does not compromise the stability of the elbow joint. Funding information There was no funding and no other sources of support to be disclosed. Neither the authors nor any members of their families have received any financial remuneration related to the subject of the article. Compliance with ethical standards Conflict of interest There was no conflict of interest. Ethical approval All procedures were in accordance with the ethical standards of the institutional research committee (Centre of Postgraduate Medical Education) and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors. The measuring apparatus had been approved as a diagnostic device in living subjects by the Polish Ministry of Health (decision number 2/10/PLW). Informed consent, as stated above, was obtained from all individual participants included in the study. Statement of informed consent Informed consent was obtained from all individual participants included in the study.
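The detectable-difference figures quoted above can be sanity-checked with a standard power calculation. The paper does not state which formula was used, so the sketch below simply assumes a two-sample t-test with α = 0.05, 80% power and an SD of 1° in both groups; under those assumptions it reproduces a minimal detectable difference of about 0.6° for group sizes of 52 and 39.

```python
# Approximate check of the minimal detectable difference, assuming a
# two-sample t-test (the authors' exact power calculation is not stated).
from statsmodels.stats.power import TTestIndPower

sd_deg = 1.0                      # assumed common standard deviation
analysis = TTestIndPower()
effect_size = analysis.solve_power(effect_size=None, nobs1=52, alpha=0.05,
                                   power=0.80, ratio=39 / 52,
                                   alternative="two-sided")
print(f"Minimal detectable difference ≈ {effect_size * sd_deg:.2f} degrees")
```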
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2020-02-12T16:07:54.144Z
2020-02-12T00:00:00.000
{ "year": 2020, "sha1": "7ae42739bf9abb01cb79654d50e349ba1cddd0e5", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00264-020-04494-0.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7ae42739bf9abb01cb79654d50e349ba1cddd0e5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231740798
pes2o/s2orc
v3-fos-license
Non-SUSY heterotic string vacua of Gepner models with vanishing cosmological constant to heterotic strings. Namely, starting from the generic Gepner modelsforCalabi–Yauthree-folds,weconstructnon-SUSYheteroticstringvacuawithvanishing cosmologicalconstantattheone-looplevel.Weespeciallyfocusonasymmetricorbifoldingbasedonsomediscretesubgroupofthechiral U ( 1 ) action which acts on both the Gepner model and the SO ( 32 ) or E 8 × E 8 sector. We present a classification of the relevant orbifold models leading to the string vacua with the properties mentioned above. In some cases, the desired vacua can be constructed in a manner quite similar to those given in the previous paper for the type II string, in which the orbifold groups contain two generators with discrete torsions. On the other hand, we also have simpler models that are just realized as asymmetric orbifolds of cyclic groups with only one generator. Introduction and summary Exploring non-supersymmetric vacua with the vanishing cosmological constant has been a subject of interest in superstring theory (at the level of one loop, at least), probably motivated by the cosmological constant problem. Consistent type II string vacua with such a non-trivial property were first constructed in Refs. [1][2][3] based on some non-Abelian orbifolds of higher-dimensional tori, followed by studies such as Refs. [4][5][6][7][8][9]. More recently, several non-SUSY vacua with this property have been constructed as asymmetric orbifolds [10] by simpler cyclic groups in Refs. [11,12]. The purpose of the current study is to construct non-SUSY heterotic string vacua with the vanishing cosmological constant at the one-loop level based on non-toroidal models. The method we adopt is a natural generalization of those given in our previous work [27]. That is, we start from the generic Gepner models [28,29] for Calabi-Yau three-folds, and construct non-SUSY heterotic string vacua by implementing some asymmetric orbifolding. Since we have various U (1) symmetries in the Gepner model, as well as SO (32) or the E 8 × E 8 sector in the left mover (which we assume bosonic), it would be quite natural to make the orbifolding associated with some cyclic subgroup of these U (1) PTEP 2021, 033B03 K. Aoyama and Y. Sugawara actions. Indeed, let us denote the generator of such a cyclic subgroup as δ L . Then, it is possible to construct non-SUSY string vacua by making the asymmetric orbifolding defined by the operator where F R denotes the spacetime fermion number (in other words, (−1) F R acts as the sign flip on the right-moving Ramond sector). It is obvious that the orbifold projection generated by the δ action completely breaks the Bose-Fermi cancellation in the untwisted Hilbert space. Moreover, any spacetime supercharges 1 cannot be constructed even if incorporating the degrees of freedom in the twisted sectors, so far as we assume the chiral forms of supercharges, namely, the integrals of conserved world-sheet current Q α = dz J α R (z), as addressed in Ref. [27]. At this point it is crucial that the relevant twisted sectors are associated with the left-moving operator δ L , whereas the possible supercharges should originate from the right-moving degrees of freedom. In the end, it is enough to ask whether or not the total partition function that contains all the twisted sectors vanishes. We will clarify the criterion for this aim, and present a classification of the relevant orbifold models leading to string vacua with the desired properties. 
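The displayed operator that defines the orbifolding (Eq. (1) of the original) and the explicit form of the chiral supercharges did not survive text extraction. Based on the surrounding description — a left-moving cyclic generator δ_L combined with the right-moving spacetime fermion-number sign flip — they presumably take the following form; this is a hedged reconstruction, not a quotation, and the contour normalization of the supercharge is an assumption.

```latex
% Presumed form of the orbifold generator (Eq. (1)) and of the chiral
% spacetime supercharges discussed above; reconstruction, not a quotation.
\begin{equation}
  \delta \;=\; \delta_L \otimes (-1)^{F_R},
  \qquad (-1)^{F_R}\colon \ \text{sign flip on the right-moving Ramond sector},
\end{equation}
\begin{equation}
  Q_\alpha \;=\; \oint dz \, J^{\alpha}_{R}(z).
\end{equation}
```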
In some cases, the desired vacua can be constructed in a manner similar to those given in Ref. [27] for the type II string, in which the orbifold groups contain two generators equipped with some discrete torsions [30][31][32]. On the other hand, we also find simpler models which are just realized as asymmetric orbifolds of cyclic groups with only one generator, in contrast to the type II string cases. Preliminaries We begin with a very brief review of heterotic string vacua, including the Gepner models for the CY 3 compactifications, and describe the notation to be used in the main section. Heterotic string vacua of Gepner models The Gepner model [28,29] describing some CY 3 compactifications is defined as the superconformal system where M k denotes the N = 2 minimal model of level k,ĉ ≡ c 3 = k k+2 . We set where lcm means the least common multiplier. To describe the building blocks of the torus partition function, we start with the simple products of the characters of the N = 2 minimal model [33][34][35][36] PTEP 2021, 033B03 K. Aoyama and Y. Sugawara in the NS sector: 2 Those for other spin structures are defined by acting the half spectral flows z → z + r 2 τ + s 2 (r, s ∈ Z 2 ): where we setĉ = 3. Note that the label I ≡ {( i , m i )} of the building blocks (and the spectral flow orbits introduced below) expresses the quantum numbers for the NS sector even for F (R) I . To construct the Gepner models, we need to make the chiral Z N × Z N orbifolding by g L ≡ e 2π iJ tot 0 and g R ≡ e 2π iJ tot 0 , where J tot (J tot ) expresses the total N = 2 U (1) current in the left (right) mover acting over ⊗ i M k i . Recall that the zero-mode J tot 0 takes the eigenvalues in 1 N Z for the NS sector. The chiral Z N orbifolding (in the left mover) is represented in a way respecting the modular covariance by considering the "spectral flow orbits" [37] defined as follows: We also use the abbreviated notation F Assuming the standard embedding of spin connection, the SO(32) heterotic string vacuum compactified on CY 3 is described by the following modular invariant partition function: To avoid complexities, we shall assume the modular invariant coefficient N I L ,I R to be diagonal throughout this paper: Here, the summations of σ L and σ R are taken over the chiral spin structures. We also set to describe the free fermion contributions including the SO(32) sector. The E 8 × E 8 heterotic string vacuum is likewise described as where χ E 8 0 (τ ) denotes the character of the basic representation of affine E 8 , written explicitly as Constructions of non-SUSY heterotic string vacua In this section we present our main analysis. Namely, we discuss how we can construct non-SUSY string vacua with the vanishing cosmological constant at one loop (or the vanishing torus partition function) based on the heterotic string compactified on CY 3 given in Eqs. (11) and (13). We start by specifying the relevant orbifold action. Orbifold actions Let us fix a subsystem of the minimal models ⊗ i∈S M k i , S ⊂ {1, 2, . . . , r}, on which the orbifold operators act non-trivially. We set The total central charge of the subsystem S is written in the form We fix a positive integer L dividing N , and set for later convenience. We will shortly define the orbifold action δ that satisfies δ 4K = 1 on the untwisted sector. We also define S 1 ⊂ S by For the SO(32) (E 8 × E 8 ) heterotic string, we have the SO(26) (SO(10) × E 8 ) symmetry after making the standard embedding of the spin connection. 
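Several of the displayed equations defining the Gepner construction (the total central-charge condition, the definition of N, and the chiral orbifold generators) are missing from the extracted text. The standard relations consistent with the inline fragments above are summarized below; the normalizations follow the usual conventions and are given here as an assumption to be checked against the original.

```latex
% Standard Gepner-model relations consistent with the inline fragments above
% (reconstruction under the usual conventions):
\begin{align}
  \hat{c}_i \;\equiv\; \frac{c_i}{3} \;=\; \frac{k_i}{k_i+2},
  \qquad
  \sum_{i=1}^{r} \frac{k_i}{k_i+2} \;=\; 3
  \quad (\text{total } \hat{c}=3 \ \text{for a Calabi--Yau three-fold}),
  \\
  N \;\equiv\; \mathrm{lcm}\,\{\,k_i+2 \,:\, i=1,\dots,r\,\},
  \qquad
  g_L \;\equiv\; e^{2\pi i J^{\mathrm{tot}}_{0}},
  \quad
  g_R \;\equiv\; e^{2\pi i \tilde{J}^{\mathrm{tot}}_{0}},
\end{align}
% where J^tot (resp. \tilde{J}^tot) denotes the total N=2 U(1) current of the
% left (resp. right) mover acting on the product of minimal models.
```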
We will adopt the relevant orbifold action as a cyclic subgroup of for the SO(32) case, and for the E 8 × E 8 -case. Now, let us specify the relevant orbifold action. For the SO(32) heterotic string, we define where J L are those for the U (1) s factor in Eq. (19). In other words, δ L acts on the left-moving characters of M k i , i ∈ S, as the integral spectral flow z → z + L(ατ + β): (α, β) ∈ Z N /L × Z N /L , which yields the modular covariant actions on the spectral flow orbits F F R denotes the spacetime fermion number of the right mover. In other words, the operator (−1) F R acts as the sign flip of the right-moving R sector. On the other hand, δ L acts on the Jacobi theta functions associated with the U (1) s factor as follows: Here, the inclusion of the phase factor e 2π i αβ 8 is necessary for the modular covariance as in the minimal sector, Eq. (23). The explicit forms of Eq. (24) are also summarized in Appendix B. We similarly define the orbifold action δ in the in the same way as Eq. (21). 5/15 PTEP 2021, 033B03 K. Aoyama and Y. Sugawara Since the δ orbifold action is defined so as to respect the modular covariance, it is easy to write down the modular invariant partition functions of our asymmetric orbifolds. For example, for the SO(32) heterotic string and in the cases of Ks ∈ 2Z + 1, the δ orbifold is found to be of order 8K, and the modular invariant parttion function is written as Here, we set which originate from the GSO phases (σ R ) modified by the (−1) F R actions included in Eq. (21). Also, we again made use of the abbreviated notations θ [NS],(α,β) (τ ) ≡ θ 3,(α,β) (τ ) ≡ θ 3,(α,β) (τ , 0), and so on. The modular invariants in other cases are obtained similarly. Criterion for the desired models At this stage let us clarify the "criterion" to search for heterotic string vacua with the desired properties. To this end, we denote the contributions to the torus partition function from each twisted sector as Z (α,β) (τ ) (α, β ∈ Z 4K ). That is, we define for convenience. By our definition of the orbifold action δ presented above, the building blocks Z (α,β) (τ ) behave covariantly under the modular transformations: We require the following conditions: • For the "even sectors," α, β ∈ 2Z, each building block Z (α,β) (τ ) separately vanishes: • The partition function for the untwisted sector does not vanish: • For all the twisted sectors of δ α with α ∈ 2Z + 1, we require Note that Eq. (31) just implies that due to the modular covariance, Eq. (28). Thus, combining it with the requirement in Eq. (29), we can conclude that the total partition function should vanish. We also note that, in this situation, the Bose-Fermi cancellation can only occur among different twisted sectors because of the condition in Eq. (30). On the other hand, the possible spacetime supercharges should be of a form such as Q α = dz J α R (z), which is consistent with the conservation on the world-sheet. However, any operators of this form cannot induce the expected Bose-Fermi cancellation, because the relevant twisted sectors are associated with the left-moving operator δ L . In this way, we conclude that we do not have any spacetime supercharges as the operators consistently acting on the whole Hilbert space and conserved on the world-sheet. This is the reason why we claim that the heterotic string vacua that satisfy the above requirements are non-supersymmetric ones. Classification of the models We study here aspects of the orbifolds of heterotic string vacua in Eqs. 
(11) and (13) by the cyclic actions of δ given in Eq. (21). We classify the models according to the positive integer N /L. First of all, we note that for all the cases we will discuss below, since δ 2 obviously preserves the spacetime supercharges. One can also readily confirm that, for the untwisted sector α = 0, in all cases. Now, let us describe the classification. for the (α, β)-twisted sector with β ∈ 2Z. Here, we set d i := N k i +2 for all i ∈ S, and c j = 1 (c j = −1) for j = 3, 4 (j = 2). We also note that, when β ∈ 2Z + 1, a similar phase factor is gained, while the θ 3 η s factor is exchanged with θ 4 η s . Fixing the value α ∈ 2Z + 1, let us evaluate the summation β Z (α,2β ) (τ ). It acts as the projection imposing The arguments are almost the same for the E 8 Consequently, we obtain the next classification. • Ks ∈ 4Z [SO(32)]; Ks 1 , Ks 2 ∈ 4Z or Ks 1 , In these cases the aspects are almost parallel to those of Ref. [27]. The constraint in Eqs. (36) or (37) implies where S 1 was defined in Eq. (18), that is, S 1 ≡ {i ∈ S : d i ∈ 2Z + 1}. We then find that β Z (α,2β ) (τ ) = 0, since we generically possess many states satisfying the condition in Eq. (38). This means that δ orbifolding cannot satisfy Eq. (31) by itself. However, as shown in Ref. [27], 3 we can make it possible by further introducing the Z 2 orbifold action γ , which commutes with δ: on the right-moving minimal characters i∈S ch , and (−1) F L denotes the sign flip of the left-moving R sector. We shall also introduce the discrete torsion [30][31][32] with respect to the γ and δ actions: where a, b label the spatial and temporal twistings by γ , while α, β are those associated with δ as above. Then, for any fixed α ∈ 2Z + 1, we readily obtain In these cases, the constraint in Eqs. (36) or (37) in place of Eq. (38). Therefore, we can make the criterion in Eq. (31) be satisfied by taking again the δ and γ orbifolds but with the different discrete torsion In these cases, no state can satisfy the condition in Eq. (36), and thus the criterion in Eq. (31) is trivially achieved by only making the δ orbifolding. In these remaining cases, both Eq. (38) and Eq. (42) are possible, depending on which theta function factors (θ j ) s 1 (θ k ) s 2 the operator δ acts. Thus, Eq. (31) cannot be satisfied even if incorporating the γ orbifolding. We conclude that string vacua with the desired properties are not constructed in these cases. • Otherwise: In the remaining cases, we have β Z (α,2β ) = 0. Moreover, Eq. (31) cannot be satisfied even if the γ orbifolding is incorporated with any discrete torsion. The desired string vacua are not constructed in these cases. To summarize, we have obtained non-SUSY heterotic string vacua with the property Z 1-loop (τ ) ≡ 0 based on orbifolding by δ (and γ in some cases) as follows: The desired vacua can be constructed only by making the δ orbifolding. The order of orbifolding is 8K, although δ 4K = 1 if restricting on the untwisted Hilbert space. The desired vacua are again constructed only by δ action as in case (1). However, we obtain an order 4K orbifold in this case. The desired vacua are constructed as the Z 4K × Z 2 orbifold defined by δ and γ actions with the next discrete torsion included (a, b ∈ Z 2 for γ twists, and α, β ∈ Z 4K for δ twists): [SO (32)], (46) Some comments In this paper, as an extension of our previous work in Ref. 
[27], we have studied the construction of non-SUSY heterotic string vacua with the vanishing cosmological constant at the one-loop level, based on the asymmetric orbifolding of the Gepner models. In the string vacua we constructed, we could not make up the spacetime supercharges that are conserved on the world-sheet and consistently realize the Bose-Fermi cancellation expected from the one-loop partition functions. We would like to emphasize here that Z one-loop (τ ) ≡ 0 just implies Bose-Fermi cancellation in the free string limit. Therefore, even if they might induce some low-energy effective field theories with unbroken SUSY, the absence of supercharges in the above sense should imply that they could not be supersymmetric ones when turning on the string interactions described by general world-sheets with higher genera. It would thus be possible for them to generate small nonvanishing cosmological constants after incorporating the (perturbative or non-perturbative) stringy quantum corrections, although such analyses still look very hard to carry out due to the complexities of the spectra arising from various twisted sectors. When being motivated by the cosmological constant problem, it would be more desirable, though much more non-trivial, to have the vanishing one-loop cosmological constant without the Bose-Fermi cancellation at each mass level (in other words, Z(τ ) ≢ 0, but ∫ d²τ/τ₂² Z(τ ) = 0). On the other hand, a characteristic feature of the string vacua given in the present paper (and those given in Ref. [27]) is that we have the Bose-Fermi cancellation among the different twisted sectors of the relevant orbifolding, as was emphasized several times. We would like to discuss elsewhere the possibility of realizing such "desirable situations," at least in some point particle theories with infinite mass spectra (not necessarily string theories), by implementing this feature (Y. Satoh and Y. Sugawara, work in progress).

Appendix A. Summary of conventions
We summarize the notations and conventions adopted in this paper. We set q ≡ e^{2πiτ}, y ≡ e^{2πiz}. Here, we have set q := e^{2πiτ}, y := e^{2πiz} (∀τ ∈ H + , z ∈ C), and used the abbreviations,

Appendix B. Explicit forms of the building blocks and their orbifold twistings
We summarize here the explicit expressions for the spectral flow orbits in Eqs. (7)-(10) playing the role of building blocks of relevant modular invariants. We also describe the orbifold actions δ, γ on the spectral flow orbits, as well as the δ twistings on the theta function factor, denoted as θ i,(α,β) (τ , z). We make use of the abbreviated index I ≡ {(ℓ i , m i )} (ℓ i + m i ∈ 2Z) again, and set for convenience. F On the other hand, the γ twisting of F (σ ) We next describe explicitly the Jacobi theta functions twisted by the δ actions given in Eq. (24), that is, They are explicitly written down as follows: α, β ∈ 2Z:
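For reference, the standard (untwisted) Jacobi theta functions and the Dedekind eta function in the q ≡ e^{2πiτ}, y ≡ e^{2πiz} conventions quoted in Appendix A can be written as below. These are the generic textbook definitions, not the authors' twisted expressions θ i,(α,β) (τ , z); the additional shift arguments and phase factors of the twisted functions are not reproduced here.

\theta_2(\tau,z) = \sum_{n\in\mathbb{Z}} q^{\frac{1}{2}\left(n+\frac{1}{2}\right)^2}\, y^{\,n+\frac{1}{2}},
\qquad
\theta_3(\tau,z) = \sum_{n\in\mathbb{Z}} q^{\frac{n^2}{2}}\, y^{\,n},
\qquad
\theta_4(\tau,z) = \sum_{n\in\mathbb{Z}} (-1)^{n}\, q^{\frac{n^2}{2}}\, y^{\,n},
\qquad
\eta(\tau) = q^{\frac{1}{24}} \prod_{n=1}^{\infty}\left(1-q^{n}\right).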
2021-02-02T17:46:51.383Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "71512972b2a177228746825f0ff8e2744e641853", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/ptep/article-pdf/2021/3/033B03/36647696/ptab016.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "c80bac4a64d7fcf6f59a01678024f824311a9c8c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
267707845
pes2o/s2orc
v3-fos-license
Influence of Sleep Deprivation on Cognitive Performance: Sleep deprivation is a pervasive phenomenon affecting millions worldwide, with profound implications for cognitive performance. This article explores the intricate interplay between sleep and cognition, examining the multifaceted effects of sleep deprivation on various aspects of cognitive function. Drawing upon neuroscientific research and clinical studies, we delve into the biological mechanisms underlying the impact of sleep loss on memory, attention, executive function, and emotional regulation. Additionally, we explore the implications of sleep deprivation for academic and occupational performance, as well as its potential long-term consequences for cognitive health. This article aims to provide a comprehensive understanding of the influence of sleep deprivation on cognitive performance, highlighting the importance of prioritizing sleep for optimal cognitive function and overall well-being.

Introduction: Sleep, an essential physiological process, plays a fundamental role in maintaining cognitive function and overall well-being. Yet, in the fast-paced, interconnected world of modern society, sleep deprivation has become a pervasive concern with profound implications for cognitive performance. As we navigate demanding schedules, technological distractions, and the pressures of daily life, prioritizing adequate rest often takes a backseat, leading to a staggering prevalence of sleep deprivation. The consequences of this widespread sleep deficit extend far beyond mere fatigue. Sleep deprivation permeates every aspect of daily life, affecting productivity, decision-making, emotional regulation, and interpersonal relationships. Its impact reverberates through academic pursuits, professional endeavors, and personal interactions, shaping the trajectory of individual lives and societal dynamics. In this article, we delve into the intricate relationship between sleep deprivation and cognitive performance, unraveling its multifaceted effects on the human mind. From the biological mechanisms underlying sleep regulation to the nuanced interplay between sleep and various cognitive domains, we explore how inadequate rest disrupts neural processes, impairs cognitive function, and compromises overall mental well-being. By shedding light on the influence of sleep deprivation on cognitive performance, we aim to deepen our understanding of the critical role sleep plays in optimizing brain function. Through empirical research, clinical insights, and practical strategies, we seek to equip readers with the knowledge and tools necessary to safeguard their cognitive health in an increasingly sleep-deprived world. Join us on this journey as we uncover the intricate interplay between sleep, cognition, and the pursuit of a balanced, fulfilling life.

Objectives: To study the multifaceted effects of sleep deprivation on various aspects of cognitive function including memory, attention, executive function, and emotional regulation.

Hypothesis: Sleep deprivation adversely affects cognitive performance, including memory, attention, executive function, and emotional regulation.

Assumption: 1. Sleep deprivation leads to cognitive deficits across multiple domains. 2. Sleep deprivation causes health-related issues.
Research Methodology: A qualitative research methodology was used for this study. Secondary data were used to analyze and interpret the sources. Further, with the help of a psychometric test, the researcher will explore the correlation between various factors associated with sleep deprivation and cognitive performance among employees working in a hazardous factory.

Review of literature: Sleep deprivation (SD) is recognized as a significant challenge to cognitive function, prompting extensive exploration into its underlying biological mechanisms. This inquiry encompasses critical cognitive components such as memory, attention, judgment, decision-making, and alertness, all of which are profoundly affected by sleep loss (Killgore, 2010). Distinguishing between partial and total SD, researchers have highlighted the cumulative impact of chronic sleep restriction on cognitive abilities (Banks & Dinges, 2007). This distinction underscores the importance of understanding how disruptions in circadian rhythms and homeostatic processes contribute to cognitive impairments observed in sleep-deprived individuals (Czeisler, 2013).

Memory consolidation emerges as a central focus in studies of sleep deprivation, revealing how SD disrupts molecular and synaptic processes essential for encoding and retrieval (Diekelmann & Born, 2010). Attention and alertness are also critically affected, with disruptions in brain activity patterns elucidating attentional deficits in sleep-deprived individuals (Borbély & Achermann, 1999).

The synaptic homeostasis hypothesis provides a theoretical framework for understanding how sleep regulates synaptic strength, emphasizing the importance of sleep for synaptic plasticity and cognitive function (Tononi & Cirelli, 2006). Research in this area offers valuable insights into the complex relationship between sleep and cognition, highlighting the biological underpinnings of SD-induced cognitive deficits. Despite these advancements, inconsistencies in findings persist within the literature, particularly regarding the specific cognitive domains affected by sleep deprivation. While some studies report global declines in cognitive performance, others suggest selective impairments in certain cognitive functions (Lim & Dinges, 2008). Moreover, emerging evidence indicates that some cognitive deficits persist despite the restoration of alertness with stimulants, suggesting that sleep loss may affect specific cognitive systems beyond global declines (Killgore et al., 2007).

Emotion processing has also garnered attention in the context of sleep deprivation, with studies indicating alterations in emotional perception, control, comprehension, and expression during sleep loss (Walker, 2009). This highlights the interconnectedness of cognitive and emotional processes, further emphasizing the need for a comprehensive understanding of the effects of sleep deprivation on cognition.

In conclusion, research into the effects of sleep deprivation on cognitive performance has provided valuable insights into the intricate interplay between sleep and cognition. By unraveling the biological mechanisms underlying SD-induced cognitive deficits, researchers aim to develop interventions to mitigate these effects and promote cognitive well-being in sleep-deprived individuals. However, further investigation is needed to elucidate the specific cognitive domains affected by sleep loss and to develop targeted interventions to address these deficits effectively.
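To illustrate the kind of correlation analysis outlined in the research methodology above, the sketch below computes a Pearson correlation between self-reported sleep duration and a cognitive test score. The variable names and all numbers are hypothetical placeholders for illustration only; they are not data collected for this study.

# Minimal sketch (hypothetical data): correlation between sleep duration and cognition.
from scipy.stats import pearsonr

# Placeholder observations for illustration only -- not values from this study.
sleep_hours = [4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0]     # average nightly sleep (hours)
cognitive_scores = [58, 60, 63, 66, 70, 73, 77, 80]         # hypothetical attention-test scores

r, p_value = pearsonr(sleep_hours, cognitive_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")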
Interpretation:

Biological Basis of Sleep Deprivation Effects: The review examines the biological foundations of sleep deprivation (SD), distinguishing between partial and total SD while emphasizing the long-term consequences of chronic sleep restriction. It explores the interaction between circadian rhythms and homeostatic mechanisms, elucidating how disruptions in these systems lead to cognitive decline. From a neurobiological perspective, the review investigates the effects of SD on crucial brain areas involved in cognition, including the amygdala, medial prefrontal cortex, and hippocampus, revealing changes in functional connectivity and neurotransmitter activity that contribute to cognitive deficits.

Effects on Memory: The article is devoted to the intricate interplay between SD and memory processes, elucidating how sleep loss disrupts memory consolidation mechanisms mediated by synaptic plasticity and protein synthesis. Through a detailed analysis of signaling pathways and molecular cascades implicated in memory formation, the review offers insights into how SD compromises the encoding, consolidation, and retrieval of memories, with profound implications for learning and cognitive performance.

Attention and Executive Function: Sleep deprivation significantly impairs attention, focus, and decision-making abilities. Research shows that sleep loss disrupts attentional processes and leads to difficulties in making efficient and accurate decisions due to compromised cognitive resources and increased impulsivity. Furthermore, studies indicate that sleep deprivation negatively affects cognitive flexibility and problem-solving skills, highlighting the importance of adequate sleep for maintaining optimal cognitive functioning.

Emotional Regulation and Mood: Sleep deprivation disrupts emotional regulation, worsening mood disorders and overall mental well-being. Inadequate sleep heightens emotional reactivity and stress while increasing the risk of depression and anxiety. The bidirectional relationship between sleep and emotional regulation underscores the importance of prioritizing sufficient sleep for emotional resilience and mental health.

Neurocognitive Performance: Sleep deprivation significantly impairs cognitive tasks like reaction time, processing speed, and working memory. This leads to decreased performance in academic and occupational settings, affecting learning, productivity, and decision-making abilities. Prioritizing sufficient sleep is crucial for optimizing cognitive function and overall performance.

Individual Differences and Vulnerabilities: Individual differences such as age, genetics, and lifestyle play key roles in determining susceptibility to the effects of sleep deprivation. Younger individuals may show more resilience, while genetic variations and lifestyle factors like stress levels and sleep hygiene can influence vulnerability to cognitive and emotional consequences. Understanding these differences is crucial for tailoring interventions to mitigate the negative impacts of sleep deprivation.

Impact on Academic and Occupational Performance: Sleep deprivation significantly impacts academic and occupational performance. In academia, it disrupts learning and memory consolidation and leads to lower grades. In the workplace, sleep loss reduces productivity, concentration, and decision-making, increasing errors and compromising work quality. Prioritizing sufficient sleep is crucial for optimizing both academic and occupational success.
Conclusion: In conclusion, this research underscores the critical importance of addressing sleep deprivation in both academic and occupational settings. The findings reveal that inadequate rest profoundly impacts cognitive performance, affecting crucial aspects such as memory, attention, executive function, and emotional regulation. This emphasizes the necessity of prioritizing sufficient sleep to optimize overall well-being. Within the workplace, sleep deprivation poses significant challenges, reducing productivity, impairing concentration, and compromising decision-making. These effects ultimately lead to increased errors and diminished work quality, highlighting the importance of recognizing sleep hygiene as a crucial component of occupational health. Furthermore, the research demonstrates the need for proactive measures to address sleep deprivation in the workplace. By recognizing the importance of sleep and implementing strategies to ensure adequate rest, employers can significantly enhance employee well-being, productivity, and job satisfaction. Incorporating sleep-related interventions into occupational health programs is essential for fostering a supportive work environment conducive to optimal performance and overall success. Overall, prioritizing sleep is crucial not only for individual cognitive function but also for maintaining a healthy and productive workforce. By acknowledging the significance of sleep hygiene and its impact on occupational health, organizations can take proactive steps to mitigate the adverse effects of sleep deprivation and create a workplace culture that values employee well-being. This approach not only enhances productivity and performance but also contributes to a more positive work environment and overall success.
2024-02-17T16:10:43.428Z
2024-02-15T00:00:00.000
{ "year": 2024, "sha1": "d2e2388a1996dfafc064ea3b658a439481c7831f", "oa_license": "CCBYSA", "oa_url": "https://www.ijfmr.com/papers/2024/1/13443.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "53a27400e10f22bc49caf553d8cfb6bab7a8fd9d", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [] }
3926110
pes2o/s2orc
v3-fos-license
Pro-tumor activities of macrophages in the progression of melanoma ABSTRACT Macrophages are located in essentially all tissues due to their “janitor” function. Macrophages can exert either anti- or pro-tumor activities depending upon the specific tumor microenvironment they inhabit. Substantial evidence indicates that macrophages, owing to their plasticity, can be reeducated to adopt a protumoral phenotype within a tumor microenvironment through the help of growth factors in the microenvironment and intercellular interactions. As the lethality of malignant melanoma is due to its aggressive capacity for metastasis and resistance to therapy, considerable effort has gone toward treatment of metastatic melanoma. In the present review, we focus on the pro-tumor activities of macrophages in melanoma. Based upon the information presented in this review it is anticipated that new therapies will soon be developed that target pro-tumor activities of macrophages for use in the treatment of melanoma. Introduction Malignant melanoma is considered as one of the most lethal cancers due to its aggressive metastasis and resistance to therapy. Worldwide it has been reported to affect 232,000 people, leading to 55,000 deaths in 2012. 1 In western countries, people, especially those with diminished skin pigment, are at increased risk as a result of sunbathing. 1 In mainland China, it is anticipated that melanoma will result in 20,000 new cases annually, 2 of which primary skin melanoma will account for 50-70% and mucous membrane 22.6% of the cases. 3 Macrophages, originally identified by Elie Metchnikoff, 4 can engulf and digest malignant cells within the body. 5 Initially, macrophages were considered to be the guardians of our body against microbes as well as tumors, 6 including melanoma. 7 Macrophages are classified into 2 different extremes of a continuum ranging from M1 to M2 macrophages in terms of the Th1/Th2 paradigm. M1 is a pro-inflammatory macrophage exhibiting defense reactions against tumors while M2 is an anti-inflammatory macrophage which is beneficial to tumors. 8,9 Recently, evidence has been presented which suggests that macrophages within the context of a tumor microenvironment show a preference to adopt to a protumoral phenotype (M2)in primary and/ or metastatic sites in vivo with the help of growth factors in the microenvironment and through intercellular interactions. 10 We have described targeting macrophage anti-tumor activity to suppress melanoma progression in another review. 11 In the current review, we focus on discussing the protumor activities of macrophages in melanoma and how such activities can be used for therapeutic purposes in the treatment of melanoma. Macrophage recruitment to melanoma Melanomas release molecules that can recruit macrophages to melanoma sites. Alterations in macrophage population patterns are observed during the progression of a malignant melanoma. Monocyte chemoattractant protein-1 (MCP-1) MCP-1, acting as a potent macrophage-recruiting molecule, 12 is expressed in human malignant melanoma. 13 A mutant of MCP-1 that lacks the amino acids 2-8 at the N-terminal was reported to be overexpressed when transfected in thigh muscle and secreted into the systemic blood circulation. 13 Such an effect in turn leads to a reduction in MCP-1 expression by melanoma cells. 13 Blocking of MCP-1 function inhibits macrophage recruitment and partially reduces the angiogenesis and growth of malignant melanomas. 
13 The capacity of MCP-1to enhance tumor angiogenesis is related with inducing the secretion of TNF-a, IL-1a and vascular endothelial growth factor (VEGF) through macrophage recruitment as well as exerting potential direct autocrine/paracrine effects upon the melanoma cells. 13 In a melanoma xenograft study, the tissue growth was substantially reduced, which is due to no production of MCP-1 of human melanoma cell line IIB-MEL-J. 14 When transfected with an MCP-1-expression vector, MCP-1 was produced and in vivo tissue growth increased. 14 The application of MCP-1 inhibitor, as well as macrophage depletion with clodronateladen liposomes, have been shown to reduce tumor growth and macrophage recruitment, which then induces necrotic tumor masses. 14 Anti-tumor effects with restraint stress could reduce macrophage trafficking by suppressing MCP-1 production. 15 However, MCP-1 may exert a biphasic effect in melanoma, with high levels promoting tumor rejection, whereas low or intermediate levels of MCP-1 support tumor growth. 16 VEGF-C Vascular endothelial growth factor C (VEGF-C) is a protein that is a member of the platelet-derived growth factor / vascular endothelial growth factor (PDGF/VEGF) family. 17 A substantial number of human tumors express VEGF-C, including malignant melanomas. 17 One of the most critical steps in tumor progression is completed through the interactions of tumor cells with lymphatic vessels. VEGF-C-overexpressing human melanomas result in enhanced macrophage recruitment as well as melanoma progression. 18 Furthermore, in skin areas surrounding VEGF-Ctransfected melanomas, increased levels of peritumoral macrophages have been observed. VEGF-C does not appear to exert any direct effects on tumor cells, as VEGF-C-overexpressing cells do not change the proliferation of control cells, and addition of recombinant VEGF-C to control cells did not affect their growth rate in vitro. 18 Like MCP-1, VEGF-C may exert biphasic effects in melanoma. An increase in the recruitment of macrophages, which can enhance host-tumor defense capabilities, may be related to the reduction in growth observed in VEGF-C-overexpressing melanomas. 18 This conclusion follows from results showing that increased densities of peritumoral macrophages were correlated with tumor growth suppression. 18 Polarization to pro-tumor M2 type Polarization of macrophages to M2 plays a vital role in the outcome of melanoma patients. It results from the presence of growth factors and exsomes that can be released by both melanoma cells and macrophages, or from Treg cells, as well as by intercellular interactions. Transforming growth factor-b1 All 3 isoforms of TGF-b has been found to be expressed in cultured malignant cells [19][20][21] and in situ. [22][23][24] TGF-b1, that can be produced by M2 type macrophages, plays a pivotal role in macrophage polarization to the pro-tumor M2 type. 9 It has been reported that tumor cells which produce high levels of TGF-b1 can stimulate monocytes/macrophages, and, in this way, support tumor growth and immune escape. 25,26 Alternative activation (M2) was shown to be mostly associated with responses of macrophages to anti-inflammatory mediators, such as glucocorticoids. 27 This mechanism appears to represent a major component involved with increasing surface contact sites of the TGF-b1 receptor. 28 In the presence of glucocorticoids, TGF-b1 stimulates mature macrophages to polarize to the M2 type. 
28 TGF-b1 can also suppress nitric oxide release resulting from M1 polarized macrophages 29 and suppress M1 polarized macrophages to increase melanoma survival through stroma remodeling. 30 However, it has also been reported that tumor cells exposed to TGF-b1 experience enhanced susceptibility to NK-mediated extermination. 31 IL-10 Interleukins are a group of cytokines (secreted proteins and signal molecules) that were first seen to be expressed by white blood cells (leukocytes). 32 Alternative activation (M2) can be a response of macrophages to Th2 cytokines, such as IL-10. IL-10 acts on macrophages by down-regulating class-II MHC antigens 32 and expression of the co-stimulator molecule, B7 on macrophages, 33 which then inhibit cytokine production by Thl cells. 34 IL-10 production is not confined to Th2 cells, as it has been reported that IL-I0 could be produced by melanoma cells. 35 Melanoma cells, in turn, then have the potential to use IL-10 as a means to modulate immune responses such as induction of M2 polarized macrophages. 35 Moreover, it has been shown that M2 macrophages also release IL- 10. 36 Adrenomedullin (ADM) ADM is a multifunctional molecule involved with tumor angiogenesis and widely expressed in a variety of tumor types, [37][38][39] such as melanoma. 40 Levels of ADM and its receptor are increased in human melanoma, suggesting a role in melanomagenesis. Tumor-associated macrophages (TAM) are identified as the major source of ADM in melanoma. 41 TAM-derived ADM can induce the phosphorylation of endothelial nitric oxide synthesis in endothelial cells via a paracrine mechanism and polarize macrophages to an M2 phenotype via autocrine mechanisms to enhance melanoma tumor angiogenesis and tumor growth. 41 Based upon data generated from a mouse melanoma study, 41 a mathematical model has been derived to assess these interactions among mouse melanoma cells, Th2/ Th1 cells and M2/M1 macrophages. With this model it is possible to investigate the role of re-polarization between M1 and M2 macrophages on tumor growth, and the findings obtained indicate that melanoma growth is associated with a type-II immune response as it can result from large numbers of Th2 and M2 cells. 42 CD73 Nucleotidase CD73 expression is upregulated in melanoma. 43 Tumor macrophage infiltration can be dramatically decreased and the microenvironment substantially altered following inhibition or knockdown of tumor CD73, due to their effects upon the polarization of M1 or M2 macrophages. 44 Yegutkin et al reported that host CD73 knockout did not affect B16F10 melanoma infiltration by macrophages in B16 melanoma. 45 However, suppression of tumor cell CD73 or chemical inhibition of CD73 decreases macrophage infiltration. One conclusion from such results is that CD73 plays a role in the regulation of macrophage infiltration. Pro-neoplastic and pro-angiogenic M2 phenotypes of tumor-associated macrophages are often observed in hypoxic regions of a tumor 46 ; and, a downregulation of pro-M1 cytokines are found in response to any reductions in CD73. 44 Such changes in the microenvironment contribute to macrophage polarization resulting in a pro-neoplastic M2 phenotype that can then regulate the progression of a tumor. 46 Arginine metabolism Arginine metabolism plays a role in macrophage polarization. This relationship follows from findings which show that macrophages using arginine can induce nitric oxide synthase to produce nitric oxide (NO) as M1 types and ornithine through arginase as M2 types. 
47 The number of M1 type macrophages varies as a function of tumor progression and location. 48 Mostly within peritumoral locations, considerable numbers of M1-type macrophages were reported to be present in situ and in thin melanomas. In contrast, within tumors of advanced stages and in melanoma metastases decreased numbers of these macrophages were found in peritumoral, as well as in intratumoral locations. 48 In both peritumoral and intratumoral locations, the percent of M2 type macrophages (arginase-positive) was lower than that of M1 type macrophages in thin melanomas. Macrophages -induced NO release was shown to be dependent on tumor microenvironment, with high levels being observed as associated with IFN-g while low levels associated with more advanced tumors. 48 The macrophage mannose receptor (MR) MR is upregulated in the alternative anti-inflammatory/protumoral M2 macrophage and has been shown to be essential for cytokine production. 49 In the mouse melanoma model with lung metastasis, recruitment of CD68CCD11bCCD11c¡ monocytes was abrogated in C57BL/6 mice without MR (i.e., MR¡/¡) and fewer lung colonies were observed in MR¡/¡ mice as compared with that in the wild type. 49 Exosome Exosomes are microvesicles of 20-100 nm diameters which can be released by tumor cells. 50 As a result of their nanoscale size, exosomes readily penetrate and interact with local tumor cells. Moreover, these microvesicles can affect other cell types distal to the advancing tumor cell front. 51 Melanoma cells release exosomes which influence the tumor immuno-microenvironment, 50 via effects upon the cytokine and chemokine profiles of the macrophages. 52 Tumor cells treated with melanoma cellderived exosomes respond vastly different from those induced by either LPS or IL-4. 52 Tregs Tregs can promote the differentiation of monocytes to tumorpromoting M2 macrophages. The interaction of Tregs and M2 macrophages represents a mutually beneficial effect, as the M2 macrophages directly induce Tregs, which then suppresses tumor specific cytotoxic T-cells. 53 Human malignant melanoma cells with monosomy of chromosome 3 can produce chemokines such as macrophage-derived chemokine, thymus-and activation-regulated chemokine and MCP-1, all of which contribute to Treg migration and can also be produced by M2 macrophages. 54 Macrophages promote tumorigenesis of melanoma by cytokines Macrophages recruited to the melanoma can, in turn, produce melanoma-stimulating molecules such as IFN-g, angiotensin, cyclooxygenase-2 (COX-2) and IL-1b to support the growth and metastasis of melanoma. IFN-g Interferon gamma (IFNg) is a dimerized soluble cytokine that is the only member of the type II class of interferons. 55 IFN-g has also been reported that IFN-g may have pro-tumorigenic effects in solid tumors under certain conditions. 55,56 Although interferon g, reduces cellular growth in vitro, when inoculated with B16 melanoma cells intravenously, it stimulates lung colonization along with an enhanced expression of class I major histocompatibility complex antigens. 55,56 These effects are more frequently observed in advanced melanoma and are related to an increased risk of metastasis in primary melanoma. 56 Elevated levels of IFN-g show promise as being an independent predictor of disease recurrence. In addition, they may serve as a means for identifing early-stage melanoma patients that are more vulnerable to disease recurrence and who may then benefit from adjuvant therapies, such as immunotherapies. 
57 Indeed, a randomized clinical trial by the Southwest Oncology Group observed an adverse effect of IFN-g on melanoma relapse and mortality rates. 58 Moreover, in a mouse skin cancer model, as induced by ultraviolet B, macrophageproduced IFN-g promoted melanoma growth by inhibiting apoptosis. 59 Pro-tumorigenic effects of IFN-g may, in part, be due to a pro-expression of CD74 in melanoma. 60 Angiotensin Macrophages express angiotensin II type 1 and type 2 receptors during the process of monocyte differentiation to macrophages. 61 Tumors produce angiotensin II to enhance the amplification of macrophages to stimulate cancer-promoting immunity. 62 Angiotensin-converting enzyme (ACE) is a peptidase which is responsible for the cleavage of angiotensin I. Mice with enhanced macrophage ACE levels show increased production of interleukin-12 and nitric oxide but reduced interleukin-10, and are resistant to melanoma. 63 However, ACE inhibitors as a pharmacological tool to inhibit tumor angiogenesis is controversial. 64 Angiotensin II type 1 receptor expression on TAM is related with increased melanoma tumor growth. 64 It is possible that the angiotensin II type 1 receptor pathway may play an important role in promoting tumor angiogenesis and growth via a macrophage and VEGF-dependent mechanism. 64 COX-2 Cyclooxygenase-2 (COX-2), also known as prostaglandinendoperoxide synthase 2, is involved in the conversion of arachidonic acid to prostaglandin H2. 65 As shown with use of immunohistochemical analysis, COX-2-positive macrophages, are rare in common nevi and "dysplastic nevi," but found in high levels in situ and in thin melanoma. COX-2-positive macrophages were also found in more advanced tumors and metastatic melanoma, although at much lower levels than that observed in situ or in thin melanoma. As demonstrated in vitro, COX-2 has been shown to be expressed in peritoneal macrophages when exposed to B16 murine melanoma cells, but not following exposure to normal murine fibroblasts. Taken together, results obtained from both in vivo and in vitro studies indicate that not only may COX-2 expressed in macrophages have the potential to provide a valid and reliable biomarker of melanoma progression, but also the possibility that melanoma cells themselves might stimulate COX-2 in macrophages. 65 IL-1b IL-1b as a pleiotropic pro-inflammatory cytokine contributes to cell growth, differentiation and regulation of immune responses. 66 The IL-1b gene or its protein expression are associated with the degree of invasiveness and metastasis of melanoma. 67 Interestingly, although metastatic melanoma cell lines do not secrete IL-1b, they do promote IL-1b production from macrophages. 68 Therefore, more work directed toward revealing the mechanisms and consequences of IL-1b production by infiltrating macrophages may be of interest for the development of IL-1b targeted therapy, such as an anti-IL-1b antibody (Canakinumab) of metastatic melanoma. 68 Moreover, IL-1b as generated from tumor cells may be considered as a threat to the host's immune system. In this regard, IL-1b-producing melanoma cells can induce reduced tumor growth by recruiting immune cells. 69 The cancer cell fusion theory The cancer cell fusion theory initially proposed by Prof. Aichel, 70 and currently accepted by many investigators, states that the fusion of cancer cells with macrophages or other phagocytes could underlie cancer metastasis. 
Aichel's hypothesis can be simplified by the model: white blood cell + nonmetastatic cancer cell = metastatic cancer cell. 70 Cancer cells, especially non-adherent ones, favor destruction by white blood cells, preferably by phagocytes like macrophages or neutrophils. If a captured cancer cell escapes being digested by a predator phagocyte, the 2 cells would fuse, pooling their chromosomes to form a white blood cell-tumor cell hybrid. At least some of these hybrids become metastatic, exhibiting both motility and continuous cell division. 70 It has been demonstrated that hybrids of weakly metastatic Cloudman S91 mouse melanoma cells and normal mouse or human macrophages can be created in vitro with use of polyethylene glycol-induced fusion. 71 Compared to the parental melanoma cells, most daughters of the 35 hybrids tested were found to be more aggressive, with the result being that metastases onset was more rapid and observed in greater numbers of mice. 71 The majority of these hybrid clones showed markedly enhanced chemotactic motility toward a variety of attractants in 2-chambered culture systems, a hallmark of metastatic cells. 72 Most notably, the expression of macrophage-like glycosylation patterns showed an increase in oligosaccharide chains conjugated with β1,6-branched oligosaccharides and the responsible glucosyltransferase, β1,6-N-acetylglucosaminyltransferase (GNT-V). 73 The significance of this finding is twofold: 1) β1,6-branched oligosaccharides and GNT-V are highly associated with malignant transformation as has been shown in rodent and human cells and 2) these melanoma patients show a poor prognosis. 74

Conclusion
Whether macrophages in the tumoral microenvironment are anti- or pro-tumor has been enigmatic. The primary reason for this controversy regarding macrophages in tumor progression can be traced to the contrasting results obtained regarding the effects of macrophages in the literature. It remains, however, widely accepted that macrophages, owing to their malleability, can be domesticated as a tumor's handyman, as summarized in Fig. 1. Interactions between 2 categories of cells often can offer mutual benefits and achieve the common goal of melanoma progression. It is anticipated that the application of such interactions could be used to develop a feasible anti-melanoma strategy which incorporates a combination of macrophage recruitment inhibition and the "re-education" of macrophage polarization.

Figure 1. Melanoma progression and pro-tumor activities of macrophages. ① Macrophage recruitment to melanoma. Melanomas release many different types of macrophage-recruiting molecules, such as MCP-1 and VEGF-C, to attract macrophage migration to melanoma sites. ② Polarization to pro-tumor M2 type. Macrophages can be induced and educated to adopt a protumoral phenotype (M2) in the context of melanoma, which is shaped jointly by melanoma cells and macrophages. ③ Cytokines by macrophages promote tumorigenesis of melanoma. Macrophages recruited to the melanoma can produce melanoma-stimulating molecules such as IFN-g, angiotensin, COX-2, IL-1b and S100A4 to support the growth and metastasis of melanoma. ④ The cancer cell fusion theory. Macrophages in the melanoma microenvironment can devour melanoma cells; if digestion fails, a macrophage-melanoma hybrid is likely to form, resulting in metastasis of the melanoma.

Disclosure of potential conflicts of interest
No potential conflicts of interest were disclosed.
2018-04-03T04:26:07.408Z
2017-04-25T00:00:00.000
{ "year": 2017, "sha1": "85aee3edc08e129f8e5c1615ad9ff164d6283275", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1080/21645515.2017.1312043", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "85aee3edc08e129f8e5c1615ad9ff164d6283275", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
145068183
pes2o/s2orc
v3-fos-license
Evidence Based Library and Information Practice Impact of the Reading Buddies Program on Reading Level and Attitude Towards Reading Objective – This research examines the Reading Buddies program at the Grande Prairie Public Library, which took place in July and August of 2011 and 2012. The Reading Buddies program pairs lower elementary students with teen volunteers for reading practice over the summer. The aim of the study was to discover how much impact the program would have on participating children’s reading levels and attitudes towards reading. Methods – During the first and last sessions of the Reading Buddies program, the participants completed the Elementary Reading Attitudes Survey (ERAS) and the Graded Word Recognition Lists from the Bader Reading and Language Inventory (6th ed., 2008). Participants were also asked for their grade and sex, and the program coordinator kept track of attendance. Results – There were 37 Reading Buddies participants who completed both the pre- and post-tests for the study. On average, the program had a small positive effect on participants’ reading levels and a small negative effect on their attitudes towards reading. There was a larger range of changes to the ERAS scores than to the reading test scores, but most participants’ scores did not change dramatically on either measure. Conclusions – Although findings are limited by the small size of the data-set, results indicate that many of the Reading Buddies participants maintained their reading level over the summer and had a similar attitude towards reading at the end of the program. On average, reading levels increased slightly and attitudes towards reading were slightly more negative. Many factors could not be taken into account during the study (e.g., the amount of reading done at home). A study with a control group that did not participate in the program could help to assess whether the program helped to combat summer learning loss. Introduction The Reading Buddies program was a new program in 2011 at Grande Prairie Public Library. This program is modelled on the Partners in Reading program that took place at this library from 1990 to 2008. In 2011, the program was adapted to reflect the current needs of the community. The new program was intended to pair teen volunteers with lower elementary students for reading practice and fun activities over the summer. In 2011, Grade 1 to 4 students were invited to participate in the program. In 2012, this was changed to Grades 1 to 3, as there was greater demand for the program from families of younger students in 2011. The large age range also made it difficult to plan developmentally appropriate group activities. The program was marketed towards struggling readers, but children at any reading level could participate in the program. Volunteer recruitment expanded to include college students and some adults when it became clear that we would have far more child participants than teenage volunteers. In 2011 there were 19 teen and 9 adult volunteers. (As some of the teens volunteered for more than one session, 28 of the 37 pairs had teen volunteers.) In 2012, there were 29 teen and 5 adult volunteers and of the 44 pairs, 39 had teen volunteers. In 2011, volunteers attended an hour-long training session before the start of the program, in which they learned ways to facilitate the reading process. 
In 2012, we extended the training session to one and a half hours to accommodate activities and discussion about strategies for reading with their partners, rather than the simple presentation we had done the year before. Each year, the program ran for seven weeks during the summer. Each session of Reading Buddies was an hour and a half long. Approximately one hour of this time was spent in one-on-one reading. The pairs also had the option of using literacy-based games and activities during this time. The other half hour was spent in group activities, including storytimes, puppet shows, and interactive storybased activities. Reading Buddies gives children the opportunity to practice reading throughout the summer, a time when many children fall behind in reading fluency. In order to be successful, Reading Buddies should have an impact on the children who participate. The study was designed to assess the program's impact on the children's reading abilities and attitudes towards reading. The Summer Reading Gap There are few who doubt the importance of the ability to read. Reading is necessary for success in a world in which text is a major medium for communication. Children who are fluent readers will be more successful in school and as adults, but attaining that level of reading ability requires practice (Ross, 2006). As elementary students, children will naturally learn at different rates and be subject to outside influences such as socio-economic status and family literacy. Research in education has identified what is known as the "summer reading gap." This is a phenomenon in which some children maintain or increase their reading level over the summer holiday, whereas other students seem to go backwards in development (Roman, Carran, & Fiore, 2010). This effect is cumulative, leading to greater and greater discrepancies in ability as children progress through school. The summer reading gap has also been linked to socioeconomic status, as students from higher income families tend to have greater access to libraries and other learning opportunities during the summer months (McGill-Franzen & Allington, 2003). As Heyns (1978) initially pointed out, public libraries are in a unique position to address the summer reading gap. Not only are they open during the summer, but libraries have been offering variations of the summer reading program for over a century (Roman et al., 2010). Today, almost all libraries offer free, structured reading programs for children of all ages. This programming serves to motivate children to continue reading while they are out of school, and thus may serve to prevent or limit summer learning loss. Although there is a field of research addressing the summer reading gap from the education perspective, relatively little literature directly examines how summer reading programs in libraries impact student achievement. Heyns's (1978) study found that children who participated in summer reading programs gained more vocabulary than children who did not, regardless of socioeconomic status, gender, or number of books read. Roman et al. (2010) recently conducted a large-scale longitudinal study, comparing students who participated in summer reading programs at libraries with students who did not. Overall, this study showed that children who participated in voluntary summer reading programs increased their reading levels more than children who did not. 
In the research that does exist, it seems that voluntary participation in a reading program has more impact than forced reading, whether at home, summer school, or the library. It appears that the greatest factor in summer reading achievement may be access to and regular use of library materials and programs. All of these programs showed an improvement in the students' reading abilities. Burns et al. (2008) studied the long-term effects of a reading program, and found that two years after the Help One Student to Succeed (HOSTS) program, HOSTS students had higher fluency, comprehension, and reading progress scores than non-HOSTS students. The length of the program is an important factor. Fitzgerald's (2001) study of a tutoring program compared a group of students who received tutoring for a full term and students who were tutored for less than the full term. The students who were tutored for the full term showed higher gains in reading ability. Fitzgerald also noted that students showed greater growth in the second half of the program, and that different skills improved at different points in the program: during the first half, students showed more improvement in phonological awareness, whereas in the second half there was greater improvement in reading words. The tutors also impacted the effectiveness of the programs. The age of tutors does not appear to be an important factor: programs with volunteers who were peers (LaGue & Wilson, 2010), older students (Block & Dellamura, 2001;Marious, 2000;Paterson & Elliot, 2006;Theurer & Schmidt, 2008), college students (Fitzgerald, 2001), adults (Jalongo, 2005), or a mix of community volunteers (Gattis et al., 2010;Vadasy et al., 1997), all showed improvements in students' reading. In all of these studies, tutors received some form of training. Vadasy et al. (1997) studied a program with very structured lesson plans and found that the "children whose tutors implemented the lessons as designed demonstrated significantly higher reading and spelling achievement" (Lesson Content section, para. 2). Though not studied in depth, Theurer and Schmidt (2008) noted that while some of the "fifth-grade buddies were naturals and interacted comfortably with the first graders, others seemed uncertain and tentative, not quite knowing what was expected of them" (p. 261). They integrated training on choosing books, reading strategies, and interpersonal skills into the program. Because these studies look at programs that are based in schools and run throughout the school year, the programs are longer than our summer Reading Buddies program, which runs for seven weeks. As shown in Fitzgerald's (2001) study, the length of the program can impact the students' gains in reading. The structure of the programs studied varied, and it is difficult to compare the effects of each program. Vadasy et al.'s (1997) conclusions support a more structured program. Our Reading Buddies program was loosely structured, with the majority of the time spent reading one on one with the volunteers, so it is important to have a closer look at the effects of a loosely structured program on students' reading abilities. Reading Abilities and Attitudes Reading Buddies aims to improve children's reading abilities, but also to instill a positive attitude about reading. The two factors are intricately related. It seems that students who have a negative attitude about reading are less likely to read voluntarily and will read less overall than their reading-positive companions (Sainsbury & Schagen, 2004). 
Over time, this leads to larger and larger gaps in ability between students. Research has indicated that reading achievement and attitudes about reading are related among elementary students (Diamond & Onwuegbuzie, 2001). Indeed, McKenna and Kear (1990) developed the Elementary Reading Attitudes Survey (ERAS) as another way (besides reading tests) for teachers to assess their students. Logan and Johnston (2009) studied over 200 students in order to compare reading abilities and attitudes between boys and girls. They found that girls had more positive attitudes towards reading overall, and that this was correlated with their reading ability. Interestingly, the relationship between reading attitude and ability was found to be weaker in boys than in girls. The Dominican Study (Roman et al., 2010) revealed that most librarians perceived that their programs had a positive effect on students' reading levels and attitudes about reading. Block and Dellamura (2001) also observed that children placed a higher value on reading at the end of their tutoring program. However, the students' attitudes about reading were never directly tested in either program. Aims The goal of the study was to test two hypotheses:  Hypothesis 1: Children enrolled in the Reading Buddies program will have better reading skills at the end of the program than at the start of the program.  Hypothesis 2: Children enrolled in the Reading Buddies program will have a more positive attitude towards reading at the end of the program than at the start of the program. Reading Test We used the Graded Word Recognition Lists from the Bader Reading and Language Inventory (6th ed., 2008) to test the participants' reading skills. The Graded Word Recognition Lists "can serve as a quick check of the student's word recognition and word analysis abilities" (Bader & Pearce, 2008, p. 4). They do not measure other reading skills such as comprehension. The test consists of several lists of progressively more difficult words. This test was chosen because it covered a wide range of reading levels (preschool to high school), had been updated recently, and could be easily administered within the limited time we had available. While the test is American, the words chosen did not reflect any regional spelling variations. Differences in the American and Canadian school systems may have made the grade level results inaccurate; however, we were interested only in the change in reading level, not the grade levels themselves. The test was administered one on one during the first and last sessions of the Reading Buddies program. Elementary Reading Attitudes Survey We used a modified version of the ERAS, or Elementary Reading Attitudes Survey (McKenna & Kear, 1990), to evaluate how participants' attitudes about reading changed over the duration of the Reading Buddies program. This survey was originally developed as a way for teachers to determine how their students felt about reading. It has also been used for research studies about reading attitudes (Black, 2006;Martinez, Aricak, & Jewell, 2008;Worrel, Roth, & Gabelko, 2002), mostly in school settings. The ERAS uses images of the popular comic book character, Garfield, to elicit participants' emotional responses about reading. Questions ask "How do you feel …?" about a readingrelated activity and participants circle one of four images of Garfield that corresponds with their feeling. The ERAS was extensively tested during its development to determine its validity and reliability. 
After the format and items had been decided upon, the researchers administered the test to over 18,000 first- to sixth-grade students across the United States. Calculation of Cronbach's Alpha revealed high internal consistency of items within each sub-scale. To determine the validity of the survey, participants were asked directly about their reading habits and other activities. High scores on the survey, indicating a very positive attitude towards reading, were correlated with literary activities such as good access to school and public libraries. Low scores on the survey were correlated with non-literary activities such as large amounts of television-watching. The survey contains two sub-scales, one measuring recreational reading and one measuring academic reading. For the purposes of this research, we only used the first sub-scale. We chose to eliminate the second sub-scale because of frequent references to the school context, which are not suited to our purpose. In each year of the study, the ERAS survey was administered to the groups of Reading Buddies participants during the first and last sessions of the program. The 10 questions of the first subscale were read aloud to the participants, who completed their own paper copy of the survey.

Demographics and Program Participation
As part of the ERAS, participants were also asked for their grade and sex. During the program, attendance records were kept, so there was a record of how many sessions each child attended.

Results
In 2011, 19 out of the 37 children participating in the program completed both the pre- and post-tests. In 2012, there were 18 Reading Buddies participants who took part in the study (although only 17 completed both the pre- and post-test of the ERAS), for a total of 37 study participants over two years. Nineteen of the study participants were boys and 18 were girls. The breakdown of grades they had just completed was as follows:

During registration, we asked that parents register their children in Reading Buddies only if they expected to be able to attend at least five of the seven sessions. Figure 2 shows the number of participants grouped by the number of sessions they attended.

Reading Test
Participants were given a score on the reading test between -1 (preschool) and 9 (high school). The score is intended to reflect a normal reading level for a student's grade (e.g., a score of 2 is a second-grade reading level). Half scores could also be given (e.g., 1.5). We subtracted the participants' pre-test reading scores from their post-test reading scores to determine the change in reading level. On average, there was a small increase in the participants' reading levels over the course of the program. The average change in reading test scores was 0.08. The range for the change in reading test scores was from -1.5 to 2. Ten participants showed an increase in reading score, 8 showed a decrease, and 19 showed no change. As Figure 3 shows, few children's reading levels changed by more than 0.5 in either direction.

Table 1.

Few children attended fewer than five sessions (over half the study participants came to six sessions), so results here are also not conclusive. Grade level also appeared to make a difference to changes in reading levels. Between kindergarten and Grade 2, the change in reading level became more positive as the grade level increased. However, the correlation coefficient was not significant at -0.01.
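To make the arithmetic behind these score summaries concrete, the short sketch below computes change scores (post-test minus pre-test) for hypothetical ERAS and reading-level values, along with a plain Pearson correlation against sessions attended. Every number and variable name here is invented for illustration; none of it is the study's actual data.

# Illustrative sketch with made-up values (not the Reading Buddies data).
from statistics import mean, pstdev

# Hypothetical participants: ERAS item responses run 1-4 over 10 items,
# so recreational sub-scale totals fall between 10 and 40.
pre_eras  = [32, 28, 35, 30, 38, 26]
post_eras = [30, 29, 34, 31, 36, 27]
pre_reading  = [1.0, 2.0, 1.5, 3.0, 2.5, 1.0]   # graded word-list levels
post_reading = [1.5, 2.0, 2.0, 3.0, 2.5, 1.5]

eras_change    = [b - a for a, b in zip(pre_eras, post_eras)]
reading_change = [b - a for a, b in zip(pre_reading, post_reading)]

print("Mean ERAS change:", round(mean(eras_change), 2))
print("Mean reading-level change:", round(mean(reading_change), 2))

def pearson(x, y):
    # Plain Pearson correlation coefficient (population form).
    mx, my = mean(x), mean(y)
    cov = mean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return cov / (pstdev(x) * pstdev(y))

sessions = [6, 5, 7, 4, 6, 7]   # hypothetical attendance counts
print("r(sessions, reading change):", round(pearson(sessions, reading_change), 2))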
While it appears that the program's positive effects peak around Grade 2, it is important to keep in mind that the majority of the study participants were in first and second grade (only two third-grade students participated in the study). The program also had a bigger impact on girls' reading scores than on boys', though overall it did have a small positive impact on both. The average change in score for girls was 0.14 and for boys was 0.03.

Reading Attitudes Survey
All participants were given a reading attitudes score between 10 and 40, with higher scores indicating a more positive attitude towards reading. Contrary to our expectations, the average change in ERAS scores between the first and last sessions was a decrease of 1.17 points. The change in ERAS scores ranged from -14 to 15. Sixteen participants showed an increase in their ERAS score, 19 showed a decrease, and 1 showed no change. While there was a wide range in changes to the ERAS scores, large changes in ERAS scores were uncommon: the majority of participants remained within 5 points of their pre-test score. See Figure 4. There was no correlation between the number of sessions attended and changes in ERAS scores. There appeared to be a relationship between the grade level of the child and the changes in their attitude toward reading in the 2011 group (the positive effects of the program increased up until the third grade); however, this was not so evident once the 2012 data was added. The correlation coefficient for last completed grade and change in ERAS score was 0.29. As there were only two third-grade students and three kindergarten students who participated in the study, it is difficult to draw any conclusions about these results. The program had slightly less impact on girls' ERAS scores than on boys', although both sexes showed a slight decrease in attitude towards reading over the summer. On average, the female participants' scores decreased by 0.35 points and the male participants' scores decreased by 1.89 points. See Figure 5.

Figure 4. Changes in ERAS scores between the first and last session of Reading Buddies.

In general, the boys also had lower raw scores on the ERAS test than the girls. In the pre-test, girls scored an average of 35.88 versus boys' average scores of 30.42. The post-test revealed similar results, with girls scoring 35.53 and boys scoring 28.53.

Discussion
The small number of participants in this study makes drawing any strong conclusions difficult. Our results show some interesting trends with regard to what effect the Reading Buddies program has had on its participants, but it is difficult to declare whether the program was successful or not. On average, the participants showed a slight increase in their reading level over the summer and their attitudes about reading became slightly more negative; however, the changes were very small. Many participants maintained the same reading level and a similar attitude towards reading. Although the average change in score for the reading test was slightly positive, there were some participants whose reading levels decreased between the pre- and post-test. This may be a symptom of the overall learning loss that occurs during the summer. Since we have no control group for comparison, it is difficult to evaluate whether our program made a significant difference in combating summer learning loss.
The research demonstrates that the more sessions the children attended, the more likely it was that their reading abilities would increase. This should be emphasized to parents, so that fewer sessions are missed during the summer. It is also possible that a longer program would have a more positive impact (e.g., a program run during the school year). We suspect that the short duration of the program will prevent it from ever producing large increases in reading ability; however, the number of sessions seems to be sufficient to help maintain reading levels.

For several of the participants, the program had a negative impact on their attitude towards reading. Though it is impossible to say why this was the case, the child's attitude towards participating in the program may have been a factor. Participants may have attended the program at the behest of parents or teachers, rather than of their own volition. Selection bias may also have been a factor, as the program was marketed towards struggling readers, who may have a more negative attitude towards reading than the general population. However scant the data may be, this information may point in the direction of potential changes to the program. On average, the boys entering the program had less positive attitudes towards reading than the girls, and also saw smaller positive effects from the program on both measures. This is consistent with research indicating that boys generally fall behind girls in reading level as they progress through school (Taylor, 2005). Better results for boys might be achieved if more attention were paid to their particular needs and interests.

There were many factors that could not be measured in this study. The Reading Buddies sessions were loosely structured, and the pairs had choices with regard to how much time they would spend reading, discussing the books, and playing literacy-based games. The impact of supplementary activities versus time spent in one-on-one reading during the program was not measured. The task of keeping a record of the time spent on various activities would have distracted volunteers from their most important task: engaging with their younger partners. Additionally, some activities (e.g., reading and discussion) are so intertwined that measurements of time spent on them would have been unlikely to be accurate. The volunteers' skill as reading partners was also not taken into account. Volunteers all received the same training; however, many other factors affected their performance, such as personality, previous experience in similar programs, comfort levels with children, willingness to ask for help, and improvement over the course of the program. Quantifying the volunteers' skill as reading partners was impractical given the number of factors involved.

There were also factors outside of the program that we were unable to measure. As discussed, voluntary reading is more effective than forced reading at reducing the summer reading gap (Roman et al., 2010). It stands to reason that participants who were motivated to read on their own may have had more success in the program than those who did not read voluntarily. Unfortunately, we had no way of accurately measuring how much voluntary reading participants were doing outside of the program. During the program, it was casually observed that some of the participants' parents were more enthusiastic about reading than others.
This behaviour included making an effort to attend every session, encouraging children to check out books, bringing the family to other reading programs at the library, and reading books themselves while waiting for their children. It would be very interesting to see whether this parental influence was related to improvements in reading level and attitudes; however, we had no way of determining this during the first two years of the program. For future years, we hope to provide parents with information or training at the start of the program to emphasize the importance of modelling reading behaviour within the family.

Conclusions

This study was undertaken to determine how our library's summer reading mentor program would influence the participants' reading abilities and attitudes about reading. Our first hypothesis about the Reading Buddies program was supported: on average, the effects of the program on reading skills were positive. However, due to the small number of participants, further study will be needed to confirm these results. It is also clear that, while reading levels may improve slightly during Reading Buddies, maintaining children's reading levels is a more realistic goal for this program. The second hypothesis, which postulated that the program would lead to an increase in positive attitudes about reading, was not supported by the data gathered. Some participants did demonstrate a higher score on the post-test as compared to the pre-test, but on average the study showed a small negative impact on attitude towards reading. Due to the small number of participants, further study will be needed to confirm these results as well. It appears that Reading Buddies helps to combat summer learning loss in both reading abilities and attitudes; however, a study with a control group would provide stronger evidence for this finding.